# Zephyr 内核服务(1)

Zephyr内核的调试、中断与同步


# Kernel Services 内核服务

The Zephyr kernel lies at the heart of every Zephyr application. It provides a low footprint, high performance, multi-threaded execution environment with a rich set of available features. The rest of the Zephyr ecosystem, including device drivers, networking stack, and application-specific code, uses the kernel’s features to create a complete application.

Zephyr 内核位于每一个 Zephyr 应用的核心。它提供了一个低占用空间、高性能、多线程的执行环境，并具有丰富的可用特性。Zephyr 生态系统的其余部分，包括设备驱动程序、网络堆栈和特定于应用程序的代码，使用内核的特性来创建一个完整的应用程序。

The configurable nature of the kernel allows you to incorporate only those features needed by your application, making it ideal for systems with limited amounts of memory (as little as 2 KB!) or with simple multi-threading requirements (such as a set of interrupt handlers and a single background task). Examples of such systems include: embedded sensor hubs, environmental sensors, simple LED wearables, and store inventory tags.

Applications requiring more memory (50 to 900 KB), multiple communication devices (like Wi-Fi and Bluetooth Low Energy), and complex multi-threading, can also be developed using the Zephyr kernel. Examples of such systems include: fitness wearables, smart watches, and IoT wireless gateways.

## Scheduling, Interrupts, and Synchronization 调度、中断和同步

These pages cover basic kernel services related to thread scheduling and synchronization.

This section describes kernel services for creating, scheduling, and deleting independently executable threads of instructions.

A thread is a kernel object that is used for application processing that is too lengthy or too complex to be performed by an ISR.

Any number of threads can be defined by an application (limited only by available RAM). Each thread is referenced by a thread id that is assigned when the thread is spawned.

A thread has the following key properties:

• A stack area, which is a region of memory used for the thread’s stack. The size of the stack area can be tailored to conform to the actual needs of the thread’s processing. Special macros exist to create and work with stack memory regions.

堆栈区域，是用于线程堆栈的内存区域。堆栈区域的大小可以根据线程处理的实际需要进行调整。存在特殊的宏来创建和处理堆栈内存区域。

• A thread control block for private kernel bookkeeping of the thread’s metadata. This is an instance of type k_thread.

• An entry point function, which is invoked when the thread is started. Up to 3 argument values can be passed to this function.

一个入口点函数，在线程启动时调用。最多可以将3个参数值传递给这个函数。

• A scheduling priority, which instructs the kernel’s scheduler how to allocate CPU time to the thread. (See Scheduling.)

调度优先级，指示内核的调度程序如何为线程分配 CPU 时间。（见调度。）

• A set of thread options, which allow the thread to receive special treatment by the kernel under specific circumstances. (See Thread Options.)

一组线程选项，允许线程在特定情况下接受内核的特殊处理。(见线程选项。)

• A start delay, which specifies how long the kernel should wait before starting the thread.

开始延迟，指定内核在启动线程之前应该等待多长时间。

• An execution mode, which can either be supervisor or user mode. By default, threads run in supervisor mode and allow access to privileged CPU instructions, the entire memory address space, and peripherals. User mode threads have a reduced set of privileges. This depends on the CONFIG_USERSPACE option. See User Mode.

执行模式，可以是特权（supervisor）模式或用户模式。默认情况下，线程以特权模式运行，可以访问特权 CPU 指令、整个内存地址空间和外围设备。用户模式线程拥有较少的特权。这取决于 CONFIG_USERSPACE 选项。参见用户模式。

## Lifecycle 生命周期

A thread must be created before it can be used. The kernel initializes the thread control block as well as one end of the stack portion. The remainder of the thread’s stack is typically left uninitialized.

Specifying a start delay of K_NO_WAIT instructs the kernel to start thread execution immediately. Alternatively, the kernel can be instructed to delay execution of the thread by specifying a timeout value – for example, to allow device hardware used by the thread to become available.

The kernel allows a delayed start to be canceled before the thread begins executing. A cancellation request has no effect if the thread has already started. A thread whose delayed start was successfully canceled must be re-spawned before it can be used.

Once a thread is started it typically executes forever. However, a thread may synchronously end its execution by returning from its entry point function. This is known as termination.

A thread that terminates is responsible for releasing any shared resources it may own (such as mutexes and dynamically allocated memory) prior to returning, since the kernel does not reclaim them automatically.

In some cases a thread may want to sleep until another thread terminates. This can be accomplished with the k_thread_join() API. This will block the calling thread until either the timeout expires, the target thread self-exits, or the target thread aborts (either due to a k_thread_abort() call or triggering a fatal error).

Once a thread has terminated, the kernel guarantees that no use will be made of the thread struct. The memory of such a struct can then be re-used for any purpose, including spawning a new thread. Note that the thread must be fully terminated, which presents race conditions where a thread’s own logic signals completion which is seen by another thread before the kernel processing is complete. Under normal circumstances, application code should use k_thread_join() or k_thread_abort() to synchronize on thread termination state and not rely on signaling from within application logic.

A thread may asynchronously end its execution by aborting. The kernel automatically aborts a thread if the thread triggers a fatal error condition, such as dereferencing a null pointer.

A thread can also be aborted by another thread (or by itself) by calling k_thread_abort(). However, it is typically preferable to signal a thread to terminate itself gracefully, rather than aborting it.

As with thread termination, the kernel does not reclaim shared resources owned by an aborted thread.

Note 注意

The kernel does not currently make any claims regarding an application’s ability to respawn a thread that aborts.

A thread can be prevented from executing for an indefinite period of time if it becomes suspended. The function k_thread_suspend() can be used to suspend any thread, including the calling thread. Suspending a thread that is already suspended has no additional effect.

Once suspended, a thread cannot be scheduled until another thread calls k_thread_resume() to remove the suspension.

Note 注意

A thread can prevent itself from executing for a specified period of time using k_sleep(). However, this is different from suspending a thread since a sleeping thread becomes executable automatically when the time limit is reached.

A thread that has no factors that prevent its execution is deemed to be ready, and is eligible to be selected as the current thread.

A thread that has one or more factors that prevent its execution is deemed to be unready, and cannot be selected as the current thread. The following factors can make a thread unready:

• The thread has not been started.

线程尚未启动。

• The thread is waiting for a kernel object to complete an operation. (For example, the thread is taking a semaphore that is unavailable.)

线程正在等待内核对象完成一个操作。(例如，线程正在使用不可用的信号量。)

• The thread is waiting for a timeout to occur.

线程正在等待超时。

• The thread has been suspended.

线程已被挂起。

• The thread has terminated or aborted.

线程已终止或中止。

Note 注意

Although a thread state diagram may appear to suggest that both Ready and Running are distinct thread states, that is not the correct interpretation. Ready is a thread state, and Running is a schedule state that only applies to Ready threads.

Every thread requires its own stack buffer for the CPU to push context. Depending on configuration, there are several constraints that must be met:

• Additional memory may need to be reserved for memory management structures.

可能需要为内存管理结构保留额外的内存

• If guard-based stack overflow detection is enabled, a small write-protected memory management region must immediately precede the stack buffer to catch overflows.

如果启用了基于保护的堆栈溢出检测，则必须在堆栈缓冲区之前立即有一个小的写保护内存管理区域来捕获溢出。

• If userspace is enabled, a separate fixed-size privilege elevation stack must be reserved to serve as a private kernel stack for handling system calls.

如果启用了 userspace，则必须保留一个单独的固定大小的权限提升堆栈，作为处理系统调用的私有内核堆栈。

• If userspace is enabled, the thread’s stack buffer must be appropriately sized and aligned such that a memory protection region may be programmed to exactly fit.

如果启用了用户空间，线程的堆栈缓冲区必须适当调整大小和对齐，以便可以对内存保护区域进行精确编程。

The alignment constraints can be quite restrictive, for example some MPUs require their regions to be of some power of two in size, and aligned to its own size.

Because of this, portable code can’t simply pass an arbitrary character buffer to k_thread_create(). Special macros exist to instantiate stacks, prefixed with K_KERNEL_STACK and K_THREAD_STACK.

### Kernel-only Stacks 内核专用栈

If it is known that a thread will never run in user mode, or the stack is being used for special contexts like handling interrupts, it is best to define stacks using the K_KERNEL_STACK macros.

These stacks save memory because an MPU region will never need to be programmed to cover the stack buffer itself, and the kernel will not need to reserve additional room for the privilege elevation stack, or memory management data structures which only pertain to user mode threads.

Attempts from user mode to use stacks declared in this way will result in a fatal error for the caller.


If it is known that a stack will need to host user threads, or if this cannot be determined, define the stack with K_THREAD_STACK macros. This may use more memory but the stack object is suitable for hosting user threads.

If CONFIG_USERSPACE is not enabled, the set of K_THREAD_STACK macros have an identical effect to the K_KERNEL_STACK macros.

A thread’s priority is an integer value, and can be either negative or non-negative. Numerically lower priorities take precedence over numerically higher values. For example, the scheduler gives thread A of priority 4 precedence over thread B of priority 7; likewise thread C of priority -2 has precedence over both thread A and thread B.

The scheduler distinguishes between two classes of threads, based on each thread’s priority.

• A cooperative thread has a negative priority value. Once it becomes the current thread, a cooperative thread remains the current thread until it performs an action that makes it unready.

协作线程具有负的优先级值。一旦它成为当前线程，协作线程将保持当前线程，直到它执行一个使其未就绪的操作为止。

• A preemptible thread has a non-negative priority value. Once it becomes the current thread, a preemptible thread may be supplanted at any time if a cooperative thread, or a preemptible thread of higher or equal priority, becomes ready.

可抢占线程具有非负的优先级值。一旦它成为当前线程，如果协作线程或具有更高或同等优先级的可抢占线程就绪，那么可抢占线程随时可能被取代。

A thread’s initial priority value can be altered up or down after the thread has been started. Thus it is possible for a preemptible thread to become a cooperative thread, and vice versa, by changing its priority.

Note 注意

The scheduler does not make heuristic decisions to re-prioritize threads. Thread priorities are set and changed only at the application’s request.

The kernel supports a virtually unlimited number of thread priority levels. The configuration options CONFIG_NUM_COOP_PRIORITIES and CONFIG_NUM_PREEMPT_PRIORITIES specify the number of priority levels for each class of thread, resulting in the following usable priority ranges:

• cooperative threads: (-CONFIG_NUM_COOP_PRIORITIES) to -1

• preemptible threads: 0 to (CONFIG_NUM_PREEMPT_PRIORITIES - 1)

For example, configuring 5 cooperative priorities and 10 preemptive priorities results in the ranges -5 to -1 and 0 to 9, respectively.

### Meta-IRQ Priorities

When enabled (see CONFIG_NUM_METAIRQ_PRIORITIES), there is a special subclass of cooperative priorities at the highest (numerically lowest) end of the priority space: meta-IRQ threads. These are scheduled according to their normal priority, but also have the special ability to preempt all other threads (and other meta-IRQ threads) at lower priorities, even if those threads are cooperative and/or have taken a scheduler lock. Meta-IRQ threads are still threads, however, and can still be interrupted by any hardware interrupt.

This behavior makes the act of unblocking a meta-IRQ thread (by any means, e.g. creating it, calling k_sem_give(), etc.) into the equivalent of a synchronous system call when done by a lower priority thread, or an ARM-like “pended IRQ” when done from true interrupt context. The intent is that this feature will be used to implement interrupt “bottom half” processing and/or “tasklet” features in driver subsystems. The thread, once woken, will be guaranteed to run before the current CPU returns into application code.

Unlike similar features in other OSes, meta-IRQ threads are true threads and run on their own stack (which must be allocated normally), not the per-CPU interrupt stack. Design work to enable the use of the IRQ stack on supported architectures is pending.

Note that because this breaks the promise made to cooperative threads by the Zephyr API (namely that the OS won’t schedule other threads until the current thread deliberately blocks), it should be used only with great care from application code. These are not simply very high priority threads and should not be used as such.

The kernel supports a small set of thread options that allow a thread to receive special treatment under specific circumstances. The set of options associated with a thread are specified when the thread is spawned.

A thread that does not require any thread option has an option value of zero. A thread that requires a thread option specifies it by name, using the | character as a separator if multiple options are needed (i.e. combine options using the bitwise OR operator).

The following thread options are supported.

• K_ESSENTIAL

This option tags the thread as an essential thread. This instructs the kernel to treat the termination or aborting of the thread as a fatal system error.

此选项将线程标记为一个基本（essential）线程。这指示内核将该线程的终止或中止视为一个致命的系统错误。

By default, the thread is not considered to be an essential thread.

默认情况下，该线程不被认为是一个基本线程。

• K_SSE_REGS

This x86-specific option indicates that the thread uses the CPU’s SSE registers. Also see K_FP_REGS.

这个 x86特定的选项指示线程使用 CPU 的 SSE 寄存器。

By default, the kernel does not attempt to save and restore the contents of these registers when scheduling the thread.

默认情况下，内核在调度线程时不会尝试保存和恢复这些寄存器的内容。

• K_FP_REGS

This option indicates that the thread uses the CPU’s floating point registers. This instructs the kernel to take additional steps to save and restore the contents of these registers when scheduling the thread. (For more information see Floating Point Services.)

此选项指示线程使用 CPU 的浮点寄存器。这指示内核在调度线程时采取其他步骤保存和恢复这些寄存器的内容。(有关更多信息，请参见浮点服务。)

By default, the kernel does not attempt to save and restore the contents of these registers when scheduling the thread.

默认情况下，内核在调度线程时不会尝试保存和恢复此寄存器的内容。

• K_USER

If CONFIG_USERSPACE is enabled, this thread will be created in user mode and will have reduced privileges. See User Mode. Otherwise this flag does nothing.

如果启用了 CONFIG_USERSPACE，则此线程将以用户模式创建，并拥有较少的特权。参见用户模式。否则该标志不起作用。

• K_INHERIT_PERMS

If CONFIG_USERSPACE is enabled, this thread will inherit all kernel object permissions that the parent thread had, except the parent thread object. See User Mode.

如果启用 CONFIG_USERSPACE，则此线程将继承父线程拥有的所有内核对象权限，但父线程对象除外。参见用户模式。

Every thread has a 32-bit custom data area, accessible only by the thread itself, and may be used by the application for any purpose it chooses. The default custom data value for a thread is zero.

Note 注意

Custom data support is not available to ISRs because they operate within a single shared kernel interrupt handling context.

By default, thread custom data support is disabled. The configuration option CONFIG_THREAD_CUSTOM_DATA can be used to enable support.

The k_thread_custom_data_set() and k_thread_custom_data_get() functions are used to write and read a thread’s custom data, respectively. A thread can only access its own custom data, and not that of another thread.

The following code uses the custom data feature to record the number of times each thread calls a specific routine.

Note 注意

Obviously, only a single routine can use this technique, since it monopolizes the use of the custom data feature.

int call_tracking_routine(void)
{
    uint32_t call_count;

    if (k_is_in_isr()) {
        /* ignore any call made by an ISR */
    } else {
        call_count = (uint32_t)k_thread_custom_data_get();
        call_count++;
        k_thread_custom_data_set((void *)call_count);
    }

    /* do rest of routine's processing */
    ...
}


Use thread custom data to allow a routine to access thread-specific information, by using the custom data as a pointer to a data structure owned by the thread.

## Implementation

A thread is spawned by defining its stack area and its thread control block, and then calling k_thread_create().

The stack area must be defined using K_THREAD_STACK_DEFINE or K_KERNEL_STACK_DEFINE to ensure it is properly set up in memory.

The size parameter for the stack must be one of three values:

• The original requested stack size passed to K_THREAD_STACK or K_KERNEL_STACK family of stack instantiation macros.

• For a stack object defined with the K_THREAD_STACK family of macros, the return value of K_THREAD_STACK_SIZEOF() for that object.

• For a stack object defined with the K_KERNEL_STACK family of macros, the return value of K_KERNEL_STACK_SIZEOF() for that object.

对于用 K_KERNEL_STACK 宏家族定义的堆栈对象，为该对象使用 K_KERNEL_STACK_SIZEOF() 的返回值。

The following code spawns a thread that starts immediately.

#define MY_STACK_SIZE 500
#define MY_PRIORITY 5

extern void my_entry_point(void *, void *, void *);

K_THREAD_STACK_DEFINE(my_stack_area, MY_STACK_SIZE);
struct k_thread my_thread_data;

k_tid_t my_tid = k_thread_create(&my_thread_data, my_stack_area,
                                 K_THREAD_STACK_SIZEOF(my_stack_area),
                                 my_entry_point,
                                 NULL, NULL, NULL,
                                 MY_PRIORITY, 0, K_NO_WAIT);


Alternatively, a thread can be declared at compile time by calling K_THREAD_DEFINE. Observe that the macro defines the stack area, control block, and thread id variables automatically.

The following code has the same effect as the code segment above.

#define MY_STACK_SIZE 500
#define MY_PRIORITY 5

extern void my_entry_point(void *, void *, void *);

K_THREAD_DEFINE(my_tid, MY_STACK_SIZE,
                my_entry_point, NULL, NULL, NULL,
                MY_PRIORITY, 0, 0);


Note

The delay parameter to k_thread_create() is a k_timeout_t value, so K_NO_WAIT means to start the thread immediately. The corresponding parameter to K_THREAD_DEFINE is a duration in integral milliseconds, so the equivalent argument is 0.

#### User Mode Constraints 用户模式约束

This section only applies if CONFIG_USERSPACE is enabled, and a user thread tries to create a new thread. The k_thread_create() API is still used, but there are additional constraints which must be met or the calling thread will be terminated:

• The calling thread must have permissions granted on both the child thread and stack parameters; both are tracked by the kernel as kernel objects.

调用线程必须对子线程和堆栈参数都授予权限；两者都被内核作为内核对象进行跟踪。

• The child thread and stack objects must be in an uninitialized state, i.e. it is not currently running and the stack memory is unused.

子线程和堆栈对象必须处于未初始化状态，即它当前没有运行，堆栈内存未使用。

• The stack size parameter passed in must be equal to or less than the bounds of the stack object when it was declared.

传入的堆栈大小参数必须等于或小于声明时的堆栈对象的边界。

• The K_USER option must be used, as user threads can only create other user threads.

必须使用 k_user 选项，因为用户线程只能创建其他用户线程。

• The K_ESSENTIAL option must not be used; user threads may not be considered essential threads.

不能使用 K_ESSENTIAL 选项；用户线程不能被视为基本线程。

• The priority of the child thread must be a valid priority value, and equal to or lower than the parent thread.

子线程的优先级必须是有效的优先级值，并且等于或低于父线程。

### Dropping Permissions 放弃权限

If CONFIG_USERSPACE is enabled, a thread running in supervisor mode may perform a one-way transition to user mode using the k_thread_user_mode_enter() API. This is a one-way operation which will reset and zero the thread’s stack memory. The thread will be marked as non-essential.

A thread terminates itself by returning from its entry point function.

The following code illustrates the ways a thread can terminate.

void my_entry_point(int unused1, int unused2, int unused3)
{
    while (1) {
        ...
        if (<some condition>) {
            return; /* thread terminates from mid-entry point function */
        }
        ...
    }

    /* thread terminates at end of entry point function */
}


If CONFIG_USERSPACE is enabled, aborting a thread will additionally mark the thread and stack objects as uninitialized so that they may be re-used.

## Runtime Statistics 运行时统计

Thread runtime statistics can be gathered and retrieved if CONFIG_THREAD_RUNTIME_STATS is enabled, such as the total number of execution cycles of a thread.

By default, the runtime statistics are gathered using the default kernel timer. For some architectures, SoCs or boards, timers with higher resolution are available via timing functions. Use of these timers can be enabled via CONFIG_THREAD_RUNTIME_STATS_USE_TIMING_FUNCTIONS.

Here is an example:

k_thread_runtime_stats_t rt_stats_thread;

k_thread_runtime_stats_get(k_current_get(), &rt_stats_thread);



## Suggested Uses 建议的用途

Use threads to handle processing that cannot be handled in an ISR.

Use separate threads to handle logically distinct processing operations that can execute in parallel.

## Configuration Options 配置选项

Related configuration options:

# Scheduling 调度

The kernel’s priority-based scheduler allows an application’s threads to share the CPU.

## Concepts 概念

The scheduler determines which thread is allowed to execute at any point in time; this thread is known as the current thread.

There are various points in time when the scheduler is given an opportunity to change the identity of the current thread. These points are called reschedule points. Some potential reschedule points are:

• transition of a thread from running state to a suspended or waiting state, for example by k_sem_take() or k_sleep().

• transition of a thread to the ready state, for example by k_sem_give() or k_thread_start().

• return to thread context after processing an interrupt.

• when a running thread invokes k_yield().

A thread sleeps when it voluntarily initiates an operation that transitions itself to a suspended or waiting state.

Whenever the scheduler changes the identity of the current thread, or when execution of the current thread is replaced by an ISR, the kernel first saves the current thread’s CPU register values. These register values get restored when the thread later resumes execution.

### Scheduling Algorithm 调度算法

The kernel’s scheduler selects the highest priority ready thread to be the current thread. When multiple ready threads of the same priority exist, the scheduler chooses the one that has been waiting longest.

A thread’s relative priority is primarily determined by its static priority. However, when both earliest-deadline-first scheduling is enabled (CONFIG_SCHED_DEADLINE) and a choice of threads have equal static priority, then the thread with the earlier deadline is considered to have the higher priority. Thus, when earliest-deadline-first scheduling is enabled, two threads are only considered to have the same priority when both their static priorities and deadlines are equal. The routine k_thread_deadline_set() is used to set a thread’s deadline.

Note 注意

Execution of ISRs takes precedence over thread execution, so the execution of the current thread may be replaced by an ISR at any time unless interrupts have been masked. This applies to both cooperative threads and preemptive threads.

ISR 的执行优先于线程的执行，因此当前线程的执行可以在任何时候被 ISR 替换，除非中断被屏蔽。这适用于协作线程和抢占线程。

The kernel can be built with one of several choices for the ready queue implementation, offering different choices between code size, constant factor runtime overhead and performance scaling when many threads are added.

• Simple linked-list ready queue (CONFIG_SCHED_DUMB)

简单的链表就绪队列(CONFIG_SCHED_DUMB)

The scheduler ready queue will be implemented as a simple unordered list, with very fast constant time performance for single threads and very low code size. This implementation should be selected on systems with constrained code size that will never see more than a small number (3, maybe) of runnable threads in the queue at any given time. On most platforms (that are not otherwise using the red/black tree) this results in a savings of ~2k of code size.

调度程序就绪队列将作为一个简单的无序列表实现，对于单个线程具有非常快的常量时间性能和非常低的代码大小。应该在代码大小受限的系统上选择此实现，这些系统在任何给定时间都不会在队列中看到超过少量(可能是3个)的可运行线程。在大多数平台(没有使用红/黑树的平台)上，这将节省大约2k 的代码大小。

• Red/black tree ready queue (CONFIG_SCHED_SCALABLE)

红/黑树准备队列(CONFIG_SCHED_SCALABLE)

The scheduler ready queue will be implemented as a red/black tree. This has rather slower constant-time insertion and removal overhead, and on most platforms (that are not otherwise using the red/black tree somewhere) requires an extra ~2kb of code. The resulting behavior will scale cleanly and quickly into the many thousands of threads.

调度程序就绪队列将作为红/黑树实现。这会有相当慢的常量时间插入和删除开销，而且在大多数平台上(在其他地方没有使用红/黑树)需要额外的2kb 代码。由此产生的行为将清晰而快速地扩展到成千上万的线程中。

Use this for applications needing many concurrent runnable threads (> 20 or so). Most applications won’t need this ready queue implementation.

对于需要许多并发可运行线程（超过 20 个左右）的应用程序，可以使用此实现。大多数应用程序不需要这个就绪队列实现。

• Traditional multi-queue ready queue (CONFIG_SCHED_MULTIQ)

传统的多队列就绪队列(CONFIG_SCHED_MULTIQ)

When selected, the scheduler ready queue will be implemented as the classic/textbook array of lists, one per priority (max 32 priorities).

当选中时，调度程序就绪队列将作为经典的/教科书式的列表数组实现，每个优先级一个(最多32个优先级)。

This corresponds to the scheduler algorithm used in Zephyr versions prior to 1.12.

这与1.12之前 Zephyr 版本中使用的调度器算法相对应。

It incurs only a tiny code size overhead vs. the “dumb” scheduler and runs in O(1) time in almost all circumstances with very low constant factor. But it requires a fairly large RAM budget to store those list heads, and the limited features make it incompatible with features like deadline scheduling that need to sort threads more finely, and SMP affinity which need to traverse the list of threads.

与“dumb”调度器相比，它只产生很小的代码大小开销，而且几乎在所有情况下都能以 O(1) 的时间运行，常数因子非常低。但它需要相当大的 RAM 预算来存储这些列表头，而且其有限的功能使它与需要对线程进行更精细排序的截止期限调度，以及需要遍历线程列表的 SMP 亲和性等功能不兼容。

Typical applications with small numbers of runnable threads probably want the DUMB scheduler.

具有少量可运行线程的典型应用程序可能需要 DUMB 调度程序。

The wait_q abstraction used in IPC primitives to pend threads for later wakeup shares the same backend data structure choices as the scheduler, and can use the same options.

IPC原语中使用的wait_q抽象(用于暂缓线程以便稍后唤醒)，与调度器共享相同的后端数据结构选择，并可以使用相同的选项。

• Scalable wait_q implementation (CONFIG_WAITQ_SCALABLE)

可伸缩的 wait_q 实现(CONFIG_WAITQ_SCALABLE)

When selected, the wait_q will be implemented with a balanced tree. Choose this if you expect to have many threads waiting on individual primitives. There is a ~2kb code size increase over CONFIG_WAITQ_DUMB (which may be shared with CONFIG_SCHED_SCALABLE) if the red/black tree is not used elsewhere in the application, and pend/unpend operations on “small” queues will be somewhat slower (though this is not generally a performance path).

选择后，wait_q 将用平衡树来实现。如果你期望有许多线程等待单个原语，请选择此选项。如果应用程序中的其他地方没有使用红/黑树，那么与 CONFIG_WAITQ_DUMB（可与 CONFIG_SCHED_SCALABLE 共享）相比，代码大小会增加约 2kb，而且“小”队列上的 pend/unpend 操作会稍慢一些（尽管这通常不是性能路径）。

• Simple linked-list wait_q (CONFIG_WAITQ_DUMB)

简单链表 wait_q (CONFIG_WAITQ_DUMB)

When selected, the wait_q will be implemented with a doubly-linked list. Choose this if you expect to have only a few threads blocked on any single IPC primitive.

当选中时，wait_q 将通过双向链接列表实现。如果您希望在任何一个 IPC 原语上只阻塞少量线程，请选择此选项。

### Cooperative Time Slicing 协作式时间切片

Once a cooperative thread becomes the current thread, it remains the current thread until it performs an action that makes it unready. Consequently, if a cooperative thread performs lengthy computations, it may cause an unacceptable delay in the scheduling of other threads, including those of higher priority and equal priority.

To overcome such problems, a cooperative thread can voluntarily relinquish the CPU from time to time to permit other threads to execute. A thread can relinquish the CPU in two ways:

• Calling k_yield() puts the thread at the back of the scheduler’s prioritized list of ready threads, and then invokes the scheduler. All ready threads whose priority is higher or equal to that of the yielding thread are then allowed to execute before the yielding thread is rescheduled. If no such ready threads exist, the scheduler immediately reschedules the yielding thread without context switching.

调用k_yield()会将线程放在调度程序的已排列优先级的就绪线程列表的后面，然后调用调度程序。所有优先级高于或等于让渡线程的就绪线程被允许在让渡线程被重新安排之前执行。如果没有这样的就绪线程，调度器会立即重新安排让渡线程的工作，而不进行上下文切换。

• Calling k_sleep() makes the thread unready for a specified time period. Ready threads of all priorities are then allowed to execute; however, there is no guarantee that threads whose priority is lower than that of the sleeping thread will actually be scheduled before the sleeping thread becomes ready once again.

调用 k_sleep()会使线程在指定的时间段内处于未就绪状态。然后允许执行所有优先级的就绪线程; 但是，不能保证优先级低于睡眠线程的线程在睡眠线程再次就绪之前实际得到调度。

### Preemptive Time Slicing 抢占式时间切片

Once a preemptive thread becomes the current thread, it remains the current thread until a higher priority thread becomes ready, or until the thread performs an action that makes it unready. Consequently, if a preemptive thread performs lengthy computations, it may cause an unacceptable delay in the scheduling of other threads, including those of equal priority.

To overcome such problems, a preemptive thread can perform cooperative time slicing (as described above), or the scheduler’s time slicing capability can be used to allow other threads of the same priority to execute.

The scheduler divides time into a series of time slices, where slices are measured in system clock ticks. The time slice size is configurable, and this size can be changed while the application is running.

At the end of every time slice, the scheduler checks to see if the current thread is preemptible and, if so, implicitly invokes k_yield() on behalf of the thread. This gives other ready threads of the same priority the opportunity to execute before the current thread is scheduled again. If no threads of equal priority are ready, the current thread remains the current thread.

Threads with a priority higher than a specified limit are exempt from preemptive time slicing, and are never preempted by a thread of equal priority. This allows an application to use preemptive time slicing only when dealing with lower priority threads that are less time-sensitive.

Note 注意

The kernel’s time slicing algorithm does not ensure that a set of equal-priority threads receive an equitable amount of CPU time, since it does not measure the amount of time a thread actually gets to execute. However, the algorithm does ensure that a thread never executes for longer than a single time slice without being required to yield.

### Scheduler Locking 调度器锁定

A preemptible thread that does not wish to be preempted while performing a critical operation can instruct the scheduler to temporarily treat it as a cooperative thread by calling k_sched_lock(). This prevents other threads from interfering while the critical operation is being performed.

Once the critical operation is complete the preemptible thread must call k_sched_unlock() to restore its normal, preemptible status.

If a thread calls k_sched_lock() and subsequently performs an action that makes it unready, the scheduler will switch the locking thread out and allow other threads to execute. When the locking thread again becomes the current thread, its non-preemptible status is maintained.

Note

Locking out the scheduler is a more efficient way for a preemptible thread to prevent preemption than changing its priority level to a negative value.

A thread can call k_sleep() to delay its processing for a specified time period. During the time the thread is sleeping the CPU is relinquished to allow other ready threads to execute. Once the specified delay has elapsed the thread becomes ready and is eligible to be scheduled once again.

A sleeping thread can be woken up prematurely by another thread using k_wakeup(). This technique can sometimes be used to permit the secondary thread to signal the sleeping thread that something has occurred without requiring the threads to define a kernel synchronization object, such as a semaphore. Waking up a thread that is not sleeping is allowed, but has no effect.

### Busy Waiting 忙碌的等待¶

A thread can call k_busy_wait() to perform a busy wait that delays its processing for a specified time period without relinquishing the CPU to another ready thread.

A busy wait is typically used instead of thread sleeping when the required delay is too short to warrant having the scheduler context switch from the current thread to another thread and then back again.
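A short sketch of this trade-off, assuming a hypothetical `sensor_pin_set()` helper that drives a GPIO line:

```c
#include <zephyr/kernel.h>

/* Hypothetical helper, e.g. wrapping a GPIO driver call. */
extern void sensor_pin_set(int level);

/* Strobe a sensor that needs a ~10 microsecond pulse. The delay is
 * far too short to justify two context switches, so the thread spins
 * instead of sleeping.
 */
void strobe_sensor(void)
{
    sensor_pin_set(1);
    k_busy_wait(10);   /* delay in microseconds; CPU is not yielded */
    sensor_pin_set(0);
}
```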

## Suggested Uses 建议的用途¶

Use cooperative threads for device drivers and other performance-critical work.

Use cooperative threads to implement mutual exclusion without the need for a kernel object, such as a mutex.

Use preemptive threads to give priority to time-sensitive processing over less time-sensitive processing.

# CPU Idling 空闲CPU¶

Although normally reserved for the idle thread, in certain special applications, a thread might want to make the CPU idle.

## Concepts 概念¶

Making the CPU idle causes the kernel to pause all operations until an event, normally an interrupt, wakes up the CPU. In a regular system, the idle thread is responsible for this. However, in some constrained systems, it is possible that another thread takes this duty.

## Implementation 实施¶

### Making the CPU idle 使 CPU 空闲¶

Making the CPU idle is simple: call the k_cpu_idle() API. The CPU will stop executing instructions until an event occurs. Most likely, the function will be called within a loop. Note that in certain architectures, upon return, k_cpu_idle() unconditionally unmasks interrupts.

static struct k_sem my_sem;

void my_isr(void *unused)
{
    k_sem_give(&my_sem);
}

void main(void)
{
    k_sem_init(&my_sem, 0, 1);

    /* 等待 ISR 的信号，然后做相关工作 */
    for (;;) {
        /* 等待 ISR 触发工作来执行 */
        if (k_sem_take(&my_sem, K_NO_WAIT) == 0) {

            /* ... do processing */

        }

        /* 让 CPU 进入睡眠状态以节省电力 */
        k_cpu_idle();
    }
}


### Making the CPU idle in an atomic fashion 以原子的方式使 CPU 空闲¶

It is possible that there is a need to do some work atomically before making the CPU idle. In such a case, k_cpu_atomic_idle() should be used instead.

In fact, there is a race condition in the previous example: the interrupt could occur between the time the semaphore is found unavailable and the time the CPU is made idle again. In some systems, this can cause the CPU to idle until another interrupt occurs, which might be never, thus hanging the system completely. To prevent this, k_cpu_atomic_idle() should have been used, as in this example.

static struct k_sem my_sem;

void my_isr(void *unused)
{
    k_sem_give(&my_sem);
}

void main(void)
{
    k_sem_init(&my_sem, 0, 1);

    for (;;) {
        unsigned int key = irq_lock();

        /*
         * 等待来自 ISR 的信号；如果获得了信号，就做相关的工作，
         * 然后进入下一个循环迭代（信号可能已经再次给出）；
         * 否则，让 CPU 空闲。
         */
        if (k_sem_take(&my_sem, K_NO_WAIT) == 0) {

            irq_unlock(key);

            /* ... do processing */

        } else {
            /* put CPU to sleep to save power */
            k_cpu_atomic_idle(key);
        }
    }
}


## Suggested Uses 建议的用途¶

Use k_cpu_atomic_idle() when a thread has to do some real work in addition to idling the CPU to wait for an event. See example above.

Use k_cpu_idle() only when a thread is only responsible for idling the CPU, i.e. not doing any real work, like in this example below.

void main(void)
{
    /* ... do some system/application initialization */

    /* thread is only used for CPU idling from this point on */
    for (;;) {
        k_cpu_idle();
    }
}


Note 注意

Do not use these APIs unless absolutely necessary. In a normal system, the idle thread takes care of power management, including CPU idling.

A system thread is a thread that the kernel spawns automatically during system initialization.

The kernel spawns the following system threads:

This thread performs kernel initialization, then calls the application’s main() function (if one is defined).

这个线程执行内核初始化，然后调用应用程序的 main()函数(如果定义了 main()函数)

By default, the main thread uses the highest configured preemptible thread priority (i.e. 0). If the kernel is not configured to support preemptible threads, the main thread uses the lowest configured cooperative thread priority (i.e. -1).

默认情况下，主线程使用最高配置的可抢占线程优先级(即0)。如果内核没有配置为支持抢占线程，则主线程使用配置的最低协作线程优先级(即 -1)。

The main thread is an essential thread while it is performing kernel initialization or executing the application’s main() function; this means a fatal system error is raised if the thread aborts. If main() is not defined, or if it executes and then does a normal return, the main thread terminates normally and no error is raised.

在执行内核初始化或执行应用程序的 main()函数时，主线程是一个必不可少的线程; 这意味着如果线程中止，将引发严重的系统错误。如果没有定义 main() ，或者如果它执行然后进行正常返回，那么主线程将正常终止并且没有引发错误。

This thread executes when there is no other work for the system to do. If possible, the idle thread activates the board’s power management support to save power; otherwise, the idle thread simply performs a “do nothing” loop. The idle thread remains in existence as long as the system is running and never terminates.

此线程在系统没有其他工作可做时执行。如果可能的话，空闲线程激活板的电源管理支持以节省电源; 否则，空闲线程只是执行一个“什么也不做”循环。只要系统在运行，空闲线程就会一直存在，并且永远不会终止。

The idle thread always uses the lowest configured thread priority. If this makes it a cooperative thread, the idle thread repeatedly yields the CPU to allow the application’s other threads to run when they need to.

空闲线程总是使用最低配置的线程优先级。如果这使它成为一个协作线程，那么空闲线程将重复地产生 CPU，以允许应用程序的其他线程在需要时运行。

The idle thread is an essential thread, which means a fatal system error is raised if the thread aborts.

空闲线程是一个必不可少的线程，这意味着如果线程中止，将引发严重的系统错误。

Additional system threads may also be spawned, depending on the kernel and board configuration options specified by the application. For example, enabling the system workqueue spawns a system thread that services the work items submitted to it. (See Workqueue Threads.)

## Implementation 实施¶

### Writing a main() function 编写 main()函数¶

An application-supplied main() function begins executing once kernel initialization is complete. The kernel does not pass any arguments to the function.

The following code outlines a trivial main() function. The function used by a real application can be as complex as needed.

void main(void)
{
/* initialize a semaphore */
...

/* register an ISR that gives the semaphore */
...

/* monitor the semaphore forever */
while (1) {
/* wait for the semaphore to be given by the ISR */
...
/* do whatever processing is now needed */
...
}
}


## Suggested Uses 建议的用途¶

# Workqueue Threads 工作队列线程¶

A workqueue is a kernel object that uses a dedicated thread to process work items in a first in, first out manner. Each work item is processed by calling the function specified by the work item. A workqueue is typically used by an ISR or a high-priority thread to offload non-urgent processing to a lower-priority thread so it does not impact time-sensitive processing.

Any number of workqueues can be defined (limited only by available RAM). Each workqueue is referenced by its memory address.

A workqueue has the following key properties:

• A queue of work items that have been added, but not yet processed.

已添加但尚未处理的工作项的队列。

• A thread that processes the work items in the queue. The priority of the thread is configurable, allowing it to be either cooperative or preemptive as required.

处理队列中工作项的线程。线程的优先级是可配置的，允许它根据需要进行协作或抢占。

Regardless of workqueue thread priority the workqueue thread will yield between each submitted work item, to prevent a cooperative workqueue from starving other threads.

A workqueue must be initialized before it can be used. This sets its queue to empty and spawns the workqueue’s thread. The thread runs forever, but sleeps when no work items are available.

Note 注意

The behavior described here is changed from the Zephyr workqueue implementation used prior to release 2.6. Among the changes are:

• Precise tracking of the status of cancelled work items, so that the caller need not be concerned that an item may be processing when the cancellation returns. Checking of return values on cancellation is still required.

精确跟踪已取消工作项的状态，以便调用方不必担心在取消返回时可能正在处理某个项。仍然需要在取消时检查返回值。

• Direct submission of delayable work items to the queue with K_NO_WAIT rather than always going through the timeout API, which could introduce delays.

用K_NO_WAIT直接向队列提交可延迟的工作项，而不是总是通过超时的API，这可能会带来延迟。

• The ability to wait until a work item has completed or a queue has been drained.

能够等到一个工作项完成或一个队列被耗尽

• Finer control of behavior when scheduling a delayable work item, specifically allowing a previous deadline to remain unchanged when a work item is scheduled again.

在调度可延迟的工作项目时，对行为进行更精细的控制，特别是在再次安排工作项目时，允许以前的最后期限保持不变。

• Safe handling of work item resubmission when the item is being processed on another workqueue.

当项目正在另一个工作队列中处理时，安全地处理工作项目的重新提交。

Using the return values of k_work_busy_get() or k_work_is_pending(), or measurements of remaining time until delayable work is scheduled, should be avoided to prevent race conditions of the type observed with the previous implementation. See also Workqueue Best Practices.

## Work Item Lifecycle 工作项目生命周期¶

Any number of work items can be defined. Each work item is referenced by its memory address.

A work item is assigned a handler function, which is the function executed by the workqueue’s thread when the work item is processed. This function accepts a single argument, which is the address of the work item itself. The work item also maintains information about its status.

A work item must be initialized before it can be used. This records the work item’s handler function and marks it as not pending.

A work item may be queued (K_WORK_QUEUED) by submitting it to a workqueue by an ISR or a thread. Submitting a work item appends the work item to the workqueue’s queue. Once the workqueue’s thread has processed all of the preceding work items in its queue the thread will remove the next work item from the queue and invoke the work item’s handler function. Depending on the scheduling priority of the workqueue’s thread, and the work required by other items in the queue, a queued work item may be processed quickly or it may remain in the queue for an extended period of time.

A delayable work item may be scheduled (K_WORK_DELAYED) to a workqueue; see Delayable Work.

A work item will be running (K_WORK_RUNNING) when it is running on a work queue, and may also be canceling (K_WORK_CANCELING) if it started running before a thread has requested that it be cancelled.

A work item can be in multiple states; for example it can be:

• running on a queue;

在队列中运行;

• marked canceling (because a thread used k_work_cancel_sync() to wait until the work item completed);

标记为取消(因为线程使用 k_work_cancel_sync()等待工作项完成) ;

• queued to run again on the same queue;

在同一队列上排队以便再次运行;

• scheduled to be submitted to a (possibly different) queue

预定提交给一个（可能是不同的）队列

all simultaneously. A work item that is in any of these states is pending (k_work_is_pending()) or busy (k_work_busy_get()).

A handler function can use any kernel API available to threads. However, operations that are potentially blocking (e.g. taking a semaphore) must be used with care, since the workqueue cannot process subsequent work items in its queue until the handler function finishes executing.

The single argument that is passed to a handler function can be ignored if it is not required. If the handler function requires additional information about the work it is to perform, the work item can be embedded in a larger data structure. The handler function can then use the argument value to compute the address of the enclosing data structure with CONTAINER_OF, and thereby obtain access to the additional information it needs.

A work item is typically initialized once and then submitted to a specific workqueue whenever work needs to be performed. If an ISR or a thread attempts to submit a work item that is already queued the work item is not affected; the work item remains in its current place in the workqueue’s queue, and the work is only performed once.

A handler function is permitted to re-submit its work item argument to the workqueue, since the work item is no longer queued at that time. This allows the handler to execute work in stages, without unduly delaying the processing of other work items in the workqueue’s queue.

Important 重要事项

A pending work item must not be altered until the item has been processed by the workqueue thread. This means a work item must not be re-initialized while it is busy. Furthermore, any additional information the work item’s handler function needs to perform its work must not be altered until the handler function has finished executing.

## Delayable Work 可推迟的工作¶

An ISR or a thread may need to schedule a work item that is to be processed only after a specified period of time, rather than immediately. This can be done by scheduling a delayable work item to be submitted to a workqueue at a future time.

ISR 或线程可能需要安排只在指定时间段之后进行处理的工作项，而不是立即进行处理。这可以通过调度一个可延迟的工作项来实现，该工作项将在未来某个时间提交给工作队列。

A delayable work item contains a standard work item but adds fields that record when and where the item should be submitted.

A delayable work item is initialized and scheduled to a workqueue in a similar manner to a standard work item, although different kernel APIs are used. When the schedule request is made the kernel initiates a timeout mechanism that is triggered after the specified delay has elapsed. Once the timeout occurs the kernel submits the work item to the specified workqueue, where it remains queued until it is processed in the standard manner.

Note that the work handler used for delayable work items still receives a pointer to the underlying non-delayable work structure, which is not publicly accessible from k_work_delayable. To get access to an object that contains the delayable work object, use this idiom:

static void work_handler(struct k_work *work)
{
    struct k_work_delayable *dwork = k_work_delayable_from_work(work);
    struct work_context *ctx = CONTAINER_OF(dwork, struct work_context,
                                            timed_work);
    ...
}

## Triggered Work 触发式工作¶

The k_work_poll_submit() interface schedules a triggered work item in response to a poll event (see Polling API), that will call a user-defined function when a monitored resource becomes available or poll signal is raised, or a timeout occurs. In contrast to k_poll(), the triggered work does not require a dedicated thread waiting or actively polling for a poll event.

k_work_poll_submit() 接口调度一个触发式工作项，以响应一个轮询事件（参见 Polling API）：当受监视的资源变得可用、轮询信号被触发或超时发生时，将调用用户定义的函数。与 k_poll() 不同，触发式工作不需要一个专门的线程来等待或主动轮询轮询事件。

A triggered work item is a standard work item that has the following added properties:

• A pointer to an array of poll events that will trigger work item submissions to the workqueue

指向将触发工作项提交到工作队列的轮询事件数组的指针

• A size of the array containing poll events.

包含轮询事件的数组的大小。

A triggered work item is initialized and submitted to a workqueue in a similar manner to a standard work item, although dedicated kernel APIs are used. When a submit request is made, the kernel begins observing kernel objects specified by the poll events. Once at least one of the observed kernel object’s changes state, the work item is submitted to the specified workqueue, where it remains queued until it is processed in the standard manner.
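The submission flow above can be sketched as follows, using a semaphore as the monitored kernel object. The names `trig_work`, `data_sem`, and `trig_handler` are illustrative assumptions:

```c
#include <zephyr/kernel.h>

static struct k_work_poll trig_work;
static struct k_sem data_sem;
static struct k_poll_event events[1];

static void trig_handler(struct k_work *work)
{
    /* Runs on the workqueue thread once the semaphore became
     * available; no dedicated polling thread was needed.
     */
}

void setup_triggered_work(void)
{
    k_sem_init(&data_sem, 0, 1);
    k_work_poll_init(&trig_work, trig_handler);

    /* Monitor the semaphore becoming available. */
    k_poll_event_init(&events[0], K_POLL_TYPE_SEM_AVAILABLE,
                      K_POLL_MODE_NOTIFY_ONLY, &data_sem);

    /* Submit to the system workqueue; wait forever for the event. */
    k_work_poll_submit(&trig_work, events, ARRAY_SIZE(events),
                       K_FOREVER);
}
```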

Important 重要事项

The triggered work item as well as the referenced array of poll events have to be valid and cannot be modified for a complete triggered work item lifecycle, from submission to work item execution or cancellation.

An ISR or a thread may cancel a triggered work item it has submitted as long as it is still waiting for a poll event. In such case, the kernel stops waiting for attached poll events and the specified work is not executed. Otherwise the cancellation cannot be performed.

ISR 或线程可以取消已提交的已触发工作项，只要它仍在等待轮询事件。在这种情况下，内核停止等待附加的轮询事件，并且不执行指定的工作。否则无法执行取消操作。

## System Workqueue 系统工作队列¶

The kernel defines a workqueue known as the system workqueue, which is available to any application or kernel code that requires workqueue support. The system workqueue is optional, and only exists if the application makes use of it.

Important 重要事项

Additional workqueues should only be defined when it is not possible to submit new work items to the system workqueue, since each new workqueue incurs a significant cost in memory footprint. A new workqueue can be justified if it is not possible for its work items to co-exist with existing system workqueue work items without an unacceptable impact; for example, if the new work items perform blocking operations that would delay other system workqueue processing to an unacceptable degree.

## How to Use Workqueues 如何使用工作队列¶

### Defining and Controlling a Workqueue 定义和控制工作队列¶

A workqueue is defined using a variable of type k_work_q. The workqueue is initialized by defining the stack area used by its thread, initializing the k_work_q, either zeroing its memory or calling k_work_queue_init(), and then calling k_work_queue_start(). The stack area must be defined using K_THREAD_STACK_DEFINE to ensure it is properly set up in memory.

The following code defines and initializes a workqueue:

#define MY_STACK_SIZE 512
#define MY_PRIORITY 5

K_THREAD_STACK_DEFINE(my_stack_area, MY_STACK_SIZE);

struct k_work_q my_work_q;

k_work_queue_init(&my_work_q);

k_work_queue_start(&my_work_q, my_stack_area,
                   K_THREAD_STACK_SIZEOF(my_stack_area), MY_PRIORITY,
                   NULL);


In addition the queue identity and certain behavior related to thread rescheduling can be controlled by the optional final parameter; see k_work_queue_start() for details.

The following API can be used to interact with a workqueue:

• k_work_queue_drain() can be used to block the caller until the work queue has no items left. Work items resubmitted from the workqueue thread are accepted while a queue is draining, but work items from any other thread or ISR are rejected. The restriction on submitting more work can be extended past the completion of the drain operation in order to allow the blocking thread to perform additional work while the queue is “plugged”. Note that draining a queue has no effect on scheduling or processing delayable items, but if the queue is plugged and the deadline expires the item will silently fail to be submitted.

可以使用 k_work_queue_drain() 阻塞调用方，直到工作队列中没有剩余项目为止。队列排空期间，从工作队列线程自身重新提交的工作项会被接受，但来自任何其他线程或 ISR 的工作项会被拒绝。对提交更多工作的限制可以延续到排空操作完成之后，以便阻塞的线程在队列被“堵塞”（plugged）时执行额外的工作。请注意，排空队列不影响可延迟工作项的调度或处理，但如果队列被堵塞且截止时间到期，该工作项将静默地提交失败。

• k_work_queue_unplug() removes any previous block on submission to the queue due to a previous drain operation.

k_work_queue_unplug() 解除因先前排空操作而施加的、对向队列提交工作项的限制。
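The drain/unplug pair can be sketched as below. `my_work_q` and `quiesce_queue()` are illustrative names; the queue is assumed to be defined and started elsewhere:

```c
#include <zephyr/kernel.h>

/* Assumed to be defined and started elsewhere. */
extern struct k_work_q my_work_q;

/* Flush all queued items, keeping the queue "plugged" so no new
 * items can be submitted meanwhile, then re-open it.
 */
void quiesce_queue(void)
{
    /* plug = true: keep rejecting submissions after the drain. */
    k_work_queue_drain(&my_work_q, true);

    /* ... do work that requires the queue to stay empty ... */

    /* Allow submissions again. */
    k_work_queue_unplug(&my_work_q);
}
```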

### Submitting a Work Item 提交工作项¶

A work item is defined using a variable of type k_work. It must be initialized by calling k_work_init(), unless it is defined using K_WORK_DEFINE in which case initialization is performed at compile-time.

An initialized work item can be submitted to the system workqueue by calling k_work_submit(), or to a specified workqueue by calling k_work_submit_to_queue().

The following code demonstrates how an ISR can offload the printing of error messages to the system workqueue. Note that if the ISR attempts to resubmit the work item while it is still queued, the work item is left unchanged and the associated error message will not be printed.

struct device_info {
    struct k_work work;
    char name[16];
} my_device;

void my_isr(void *arg)
{
    ...
    if (error detected) {
        k_work_submit(&my_device.work);
    }
    ...
}

void print_error(struct k_work *item)
{
    struct device_info *the_device =
        CONTAINER_OF(item, struct device_info, work);
    printk("Got error on device %s\n", the_device->name);
}

/* initialize name info for a device */
strcpy(my_device.name, "FOO_dev");

/* initialize work item for printing device's error messages */
k_work_init(&my_device.work, print_error);

/* install my_isr() as interrupt handler for the device (not shown) */
...


The following API can be used to check the status of or synchronize with the work item:

• k_work_busy_get() returns a snapshot of flags indicating work item state. A zero value indicates the work is not scheduled, submitted, being executed, or otherwise still being referenced by the workqueue infrastructure.

k_work_busy_get() 返回表示工作项状态的标志快照。零值表示该工作未被调度、提交、执行，也未被工作队列基础设施以其他方式引用。

• k_work_is_pending() is a helper that indicates true if and only if the work is scheduled, queued, or running.

k_work_is_pending() 是一个辅助函数，当且仅当工作被调度、排队或正在运行时返回 true。

• k_work_flush() may be invoked from threads to block until the work item has completed. It returns immediately if the work is not pending.

可以从线程中调用 k_work_flush() 以阻塞，直到工作项完成为止。如果工作项没有挂起，它会立即返回。

• k_work_cancel() attempts to prevent the work item from being executed. This may or may not be successful. This is safe to invoke from ISRs.

k_work_cancel() 尝试阻止工作项被执行。这可能成功，也可能不成功。可以安全地从 ISR 中调用它。

• k_work_cancel_sync() may be invoked from threads to block until the work completes; it will return immediately if the cancellation was successful or not necessary (the work wasn’t submitted or running). This can be used after k_work_cancel() is invoked (from an ISR) to confirm completion of an ISR-initiated cancellation.

可以从线程中调用 k_work_cancel_sync() 以阻塞直到工作完成；如果取消成功或并无必要（工作未被提交或未在运行），它将立即返回。可以在（从 ISR）调用 k_work_cancel() 之后使用它，以确认 ISR 发起的取消已经完成。
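A minimal sketch of synchronous cancellation from thread context. `my_work` and `stop_work()` are illustrative names; the work item is assumed to be initialized elsewhere:

```c
#include <zephyr/kernel.h>

/* Assumed to be initialized elsewhere with k_work_init(). */
extern struct k_work my_work;

/* Stop the work and wait until it is guaranteed not to be running,
 * so state shared with the handler can be torn down safely.
 */
void stop_work(void)
{
    struct k_work_sync sync;

    /* Returns true if the work was pending when cancelled; here the
     * result is intentionally ignored because we only need the
     * guarantee that the handler has finished.
     */
    (void)k_work_cancel_sync(&my_work, &sync);

    /* From here on the handler is not running and will not run. */
}
```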

### Scheduling a Delayable Work Item 安排可推迟的工作项¶

A delayable work item is defined using a variable of type k_work_delayable. It must be initialized by calling k_work_init_delayable().

For delayed work there are two common use cases, depending on whether a deadline should be extended if a new event occurs. An example is collecting data that comes in asynchronously, e.g. characters from a UART associated with a keyboard. There are two APIs that submit work after a delay:

• k_work_schedule() (or k_work_schedule_for_queue()) schedules work to be executed at a specific time or after a delay. Further attempts to schedule the same item with this API before the delay completes will not change the time at which the item will be submitted to its queue. Use this if the policy is to keep collecting data until a specified delay since the first unprocessed data was received;

k_work_schedule()（或 k_work_schedule_for_queue()）调度在特定时间或延迟后执行的工作。在延迟结束之前，再次用此 API 调度同一工作项不会改变其提交到队列的时间。如果策略是从收到第一个未处理数据起持续收集数据直到指定延迟，则使用此 API；

• k_work_reschedule() (or k_work_reschedule_for_queue()) unconditionally sets the deadline for the work, replacing any previous incomplete delay and changing the destination queue if necessary. Use this if the policy is to keep collecting data until a specified delay since the last unprocessed data was received.

k_work_reschedule()（或 k_work_reschedule_for_queue()）无条件地设置工作的截止时间，替换任何先前未完成的延迟，并在必要时更改目标队列。如果策略是从收到最后一个未处理数据起持续收集数据直到指定延迟，则使用此 API。

If the work item is not scheduled both APIs behave the same. If K_NO_WAIT is specified as the delay the behavior is as if the item was immediately submitted directly to the target queue, without waiting for a minimal timeout (unless k_work_schedule() is used and a previous delay has not completed).
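The two policies can be sketched with a debounce example. The handler, work item name, and 50 ms delay are illustrative assumptions:

```c
#include <zephyr/kernel.h>

static void debounce_handler(struct k_work *work)
{
    /* ... act on the settled input ... */
}

K_WORK_DELAYABLE_DEFINE(debounce_work, debounce_handler);

/* Called on every input edge (hypothetical event source). */
void on_input_edge(void)
{
    /* Policy A: act 50 ms after the FIRST edge of a burst.
     * Later calls while the delay is pending do not move the
     * deadline.
     */
    k_work_schedule(&debounce_work, K_MSEC(50));

    /* Policy B (alternative): act 50 ms after the LAST edge.
     * Every call would replace any pending deadline:
     *
     *     k_work_reschedule(&debounce_work, K_MSEC(50));
     */
}
```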

Both also have variants that allow control of the queue used for submission.

The helper function k_work_delayable_from_work() can be used to get a pointer to the containing k_work_delayable from a pointer to k_work that is passed to a work handler function.

The following additional API can be used to check the status of or synchronize with the work item:

### Synchronizing with Work Items 与工作项同步¶

While the state of both regular and delayable work items can be determined from any context using k_work_busy_get() and k_work_delayable_busy_get() some use cases require synchronizing with work items after they’ve been submitted. k_work_flush(), k_work_cancel_sync(), and k_work_cancel_delayable_sync() can be invoked from thread context to wait until the requested state has been reached.

These APIs must be provided with a k_work_sync object that has no application-inspectable components but is needed to provide the synchronization objects. These objects should not be allocated on a stack if the code is expected to work on architectures with CONFIG_KERNEL_COHERENCE.

## Workqueue Best Practices 工作队列最佳实践¶

### Avoid Race Conditions 避免竞争条件¶

Sometimes the data a work item must process is naturally thread-safe, for example when it’s put into a k_queue by some thread and processed in the work thread. More often external synchronization is required to avoid data races: cases where the work thread might inspect or manipulate shared state that’s being accessed by another thread or interrupt. Such state might be a flag indicating that work needs to be done, or a shared object that is filled by an ISR or thread and read by the work handler.

For simple flags, Atomic Services may be sufficient. In other cases, spin locks (k_spinlock_t) or thread-aware locks (k_sem, k_mutex, …) may be used to ensure data races do not occur.

If the selected lock mechanism can sleep then allowing the work thread to sleep will starve other work queue items, which may need to make progress in order to get the lock released. Work handlers should try to take the lock with its no-wait path. For example:

static void work_handler(struct k_work *work)
{
    struct work_context *parent = CONTAINER_OF(work, struct work_context,
                                               work_item);

    if (k_mutex_lock(&parent->lock, K_NO_WAIT) != 0) {
        /* NB: Submit will fail if the work item is being cancelled. */
        (void)k_work_submit(work);
        return;
    }

    /* do stuff under lock */
    k_mutex_unlock(&parent->lock);
    /* do stuff without lock */
}


Be aware that if the lock is held by a thread with a lower priority than the work queue the resubmission may starve the thread that would release the lock, causing the application to fail. Where the idiom above is required a delayable work item is preferred, and the work should be (re-)scheduled with a non-zero delay to allow the thread holding the lock to make progress.

Note that submitting from the work handler can fail if the work item had been cancelled. Generally this is acceptable, since the cancellation will complete once the handler finishes. If it is not, the code above must take other steps to notify the application that the work could not be performed.

Work items in isolation are self-locking, so you don’t need to hold an external lock just to submit or schedule them. Even if you use external state protected by such a lock to prevent further resubmission, it’s safe to do the resubmit as long as you’re sure that eventually the item will take its lock and check that state to determine whether it should do anything. Where a delayable work item is being rescheduled in its handler due to inability to take the lock, some other self-locking state, such as an atomic flag set by the application/driver when the cancel is initiated, would be required to detect the cancellation and avoid the cancelled work item being submitted again after the deadline.

### Check Return Values 检查返回值¶

All work API functions return status of the underlying operation, and in many cases it is important to verify that the intended result was obtained.

• Submitting a work item (k_work_submit_to_queue()) can fail if the work is being cancelled or the queue is not accepting new items. If this happens the work will not be executed, which could cause a subsystem that is animated by work handler activity to become non-responsive.

如果工作正在被取消或者队列不接受新项目，提交工作项（k_work_submit_to_queue()）可能会失败。如果发生这种情况，该工作将不会被执行，这可能导致一个依靠工作处理程序活动来驱动的子系统失去响应。

• Asynchronous cancellation (k_work_cancel() or k_work_cancel_delayable()) can complete while the work item is still being run by a handler. Proceeding to manipulate state shared with the work handler will result in data races that can cause failures.

异步取消（k_work_cancel() 或 k_work_cancel_delayable()）可能在处理程序仍在运行该工作项时就已完成。此时继续操作与工作处理程序共享的状态将导致数据竞争，从而引发故障。

Many race conditions have been present in Zephyr code because the results of an operation were not checked.

There may be good reason to believe that a return value indicating that the operation did not complete as expected is not a problem. In those cases the code should clearly document this, by (1) casting the return value to void to indicate that the result is intentionally ignored, and (2) documenting what happens in the unexpected case. For example:

/* If this fails, the work handler will check pub->active and
* exit without transmitting.
*/
(void)k_work_cancel_delayable(&pub->timer);


However in such a case the following code must still avoid data races, as it cannot guarantee that the work thread is not accessing work-related state.

### Don’t Optimize Prematurely 不要过早优化¶

The workqueue API is designed to be safe when invoked from multiple threads and interrupts. Attempts to externally inspect a work item’s state and make decisions based on the result are likely to create new problems.

So when new work comes in, just submit it. Don’t attempt to “optimize” by checking whether the work item is already submitted by inspecting snapshot state with k_work_is_pending() or k_work_busy_get(), or by checking for a non-zero delay from k_work_delayable_remaining_get(). Those checks are fragile: a “busy” indication can be obsolete by the time the test result is returned, and a “not-busy” indication can also be wrong if work is submitted from multiple contexts, or (for delayable work) if the deadline has passed but the work is still in queued or running state.

A general best practice is to always maintain in shared state some condition that can be checked by the handler to confirm whether there is work to be done. This way you can use the work handler as the standard cleanup path: rather than having to deal with cancellation and cleanup at points where items are submitted, you may be able to have everything done in the work handler itself.

A rare case where you could safely use k_work_is_pending() is as a check to avoid invoking k_work_flush() or k_work_cancel_sync(), if you are certain that nothing else might submit the work while you’re checking (generally because you’re holding a lock that prevents access to state used for submission).

## Suggested Uses 建议的用途¶

Use the system workqueue to defer complex interrupt-related processing from an ISR to a shared thread. This allows the interrupt-related processing to be done promptly without compromising the system’s ability to respond to subsequent interrupts, and does not require the application to define and manage an additional thread to do the processing.

## Configuration Options 配置选项¶

Related configuration options:

Thread support is not necessary in some applications:

• Simple event-driven applications

简单事件驱动的应用程序

• Examples intended to demonstrate core functionality

用于演示核心功能的示例

Thread support can be disabled by setting CONFIG_MULTITHREADING to n. Since this configuration has a significant impact on Zephyr’s functionality and testing of it has been limited, there are conditions on what can be expected to work in this configuration.

## What Can Be Expected to Work 可以预期正常工作的功能

These core capabilities shall function correctly when CONFIG_MULTITHREADING is disabled:

• The build system

构建系统

• The ability to boot the application to main()

将应用程序引导到 main()的能力

• Interrupt management

中断管理

• The system clock including k_uptime_get()

系统时钟包括 k_uptime_get()

• Timers, i.e. k_timer()

计时器，即 k_timer()

• Non-sleeping delays e.g. k_busy_wait().

非睡眠延迟，例如 k_busy_wait()。

• Sleeping, e.g. k_cpu_idle().

睡眠，例如 k_cpu_idle()。

• Pre-main() driver and subsystem initialization, e.g. SYS_INIT.

main() 之前的驱动程序和子系统初始化，例如 SYS_INIT。

• Memory Management

内存管理

• Specifically identified drivers in certain subsystems, listed below.

特定子系统中特别标识的驱动程序，如下所示。

The expectations above affect selection of other features; for example CONFIG_SYS_CLOCK_EXISTS cannot be set to n.

## What Cannot Be Expected to Work 无法预期正常工作的功能

Functionality that will not work when CONFIG_MULTITHREADING is disabled includes the majority of the kernel API:

## Subsystem Behavior Without Thread Support 没有线程支持的子系统行为¶

The sections below list driver and functional subsystems that are expected to work to some degree when CONFIG_MULTITHREADING is disabled. Subsystems that are not listed here should not be expected to work.

Some existing drivers within the listed subsystems do not work when threading is disabled, but are within scope based on their subsystem, or may be sufficiently isolated that supporting them on a particular platform is low-impact. Enhancements to add support to existing capabilities that were not originally implemented to work with threads disabled will be considered.

### Flash¶

Flash support is expected to work for all SoC flash peripheral drivers. Bus-accessed devices like serial memories may not be supported.

Flash 预计适用于所有 SoC 片上 Flash 外设驱动程序。串行存储器等总线访问设备可能不受支持。

List/table of supported drivers to go here

### GPIO ¶

GPIO support is expected to work for all SoC GPIO peripheral drivers. Bus-accessed devices like GPIO extenders may not be supported.

GPIO 预计适用于所有 SoC 片上 GPIO 外设驱动程序。GPIO 扩展器等总线访问设备可能不受支持。

List/table of supported drivers to go here

### UART 通用异步收发器¶

A subset of the UART is expected to work for all SoC UART peripheral drivers.

UART 的一个子集应该适用于所有 SoC UART 外围驱动程序。

List/table of supported drivers to go here, including which API options are supported

# Interrupts 中断¶

An interrupt service routine (ISR) is a function that executes asynchronously in response to a hardware or software interrupt. An ISR normally preempts the execution of the current thread, allowing the response to occur with very low overhead. Thread execution resumes only once all ISR work has been completed.

中断服务程序(ISR)是一个响应硬件或软件中断而异步执行的函数。ISR 通常会抢占当前线程的执行，使响应能够以非常低的开销完成。只有在所有 ISR 工作完成后，线程才会恢复执行。

## Concepts 概念¶

Any number of ISRs can be defined (limited only by available RAM), subject to the constraints imposed by underlying hardware.

An ISR has the following key properties:

ISR 具有以下关键属性:

• An interrupt request (IRQ) signal that triggers the ISR.

触发 ISR 的中断请求(IRQ)信号。

• A priority level associated with the IRQ.

与 IRQ 相关的优先级。

• An interrupt handler function that is invoked to handle the interrupt.

一个用来处理中断的 interrupt handler 函数。

• An argument value that is passed to that function.

传递给该函数的参数值。

An IDT or a vector table is used to associate a given interrupt source with a given ISR. Only a single ISR can be associated with a specific IRQ at any given time.

IDT 或向量表用于将给定中断源与给定 ISR 关联起来。在任何给定时间，只有一个 ISR 可以与特定的 IRQ 关联。

Multiple ISRs can utilize the same function to process interrupts, allowing a single function to service a device that generates multiple types of interrupts or to service multiple devices (usually of the same type). The argument value passed to an ISR’s function allows the function to determine which interrupt has been signaled.

The kernel provides a default ISR for all unused IDT entries. This ISR generates a fatal system error if an unexpected interrupt is signaled.

The kernel supports interrupt nesting. This allows an ISR to be preempted in mid-execution if a higher priority interrupt is signaled. The lower priority ISR resumes execution once the higher priority ISR has completed its processing.

An ISR’s interrupt handler function executes in the kernel’s interrupt context. This context has its own dedicated stack area (or, on some architectures, stack areas). The size of the interrupt context stack must be capable of handling the execution of multiple concurrent ISRs if interrupt nesting support is enabled.

ISR 的中断处理函数在内核的中断上下文中执行。这个上下文有自己的专用堆栈区域(在某些架构上有多个堆栈区域)。如果启用了中断嵌套支持，中断上下文堆栈的大小必须能够容纳多个并发 ISR 的执行。

Important

Many kernel APIs can be used only by threads, and not by ISRs. In cases where a routine may be invoked by both threads and ISRs the kernel provides the k_is_in_isr() API to allow the routine to alter its behavior depending on whether it is executing as part of a thread or as part of an ISR.

### Multi-level Interrupt handling 多级中断处理¶

A hardware platform can support more interrupt lines than natively-provided through the use of one or more nested interrupt controllers. Sources of hardware interrupts are combined into one line that is then routed to the parent controller.

If nested interrupt controllers are supported, the CONFIG_MULTI_LEVEL_INTERRUPTS option should be enabled, and CONFIG_2ND_LEVEL_INTERRUPTS and CONFIG_3RD_LEVEL_INTERRUPTS configured as well, based on the hardware architecture.

A unique 32-bit interrupt number is assigned with information embedded in it to select and invoke the correct Interrupt Service Routine (ISR). Each interrupt level is given a byte within this 32-bit number, providing support for up to four interrupt levels using this scheme, as illustrated and explained below:

            9             2   0
      _ _ _ _ _ _ _ _ _ _ _ _ _          (LEVEL 1)
  5         |         A   |
_ _ _ _ _ _ _       _ _ _ _ _ _ _       (LEVEL 2)
    |   C                  B
_ _ _ _ _ _ _                           (LEVEL 3)
        D


There are three interrupt levels shown here.

• ‘-’ means an interrupt line, numbered from 0 (rightmost).

‘-’表示一条中断线，从 0 开始编号(最右侧为 0)。

• LEVEL 1 has 12 interrupt lines, with two lines (2 and 9) connected to nested controllers and one device ‘A’ on line 4.

LEVEL 1 有 12 条中断线，其中两条线(2 和 9)连接到嵌套控制器，设备 ‘A’ 连接在 4 号线上。

• One of the LEVEL 2 controllers has interrupt line 5 connected to a LEVEL 3 nested controller and one device ‘C’ on line 3.

其中一个 LEVEL 2 控制器的中断线 5 连接到一个 LEVEL 3 嵌套控制器，设备 ‘C’ 连接在 3 号线上。

• The other LEVEL 2 controller has no nested controllers but has one device ‘B’ on line 2.

另一个 LEVEL 2 控制器没有嵌套控制器，但在 2 号线上有一个设备 ‘B’。

• The LEVEL 3 controller has one device ‘D’ on line 2.

LEVEL 3 控制器在 2 号线上有一个设备 ‘D’。

Here’s how unique interrupt numbers are generated for each hardware interrupt. Let’s consider four interrupts shown above as A, B, C, and D:

A -> 0x00000004
B -> 0x00000302
C -> 0x00000409
D -> 0x00030609


Note 注意

The bit positions for LEVEL 2 and onward are offset by 1, as 0 means that interrupt number is not present for that level. For our example, the LEVEL 3 controller has device D on line 2, connected to the LEVEL 2 controller’s line 5, that is connected to the LEVEL 1 controller’s line 9 (2 -> 5 -> 9). Because of the encoding offset for LEVEL 2 and onward, device D is given the number 0x00030609.

### Preventing Interruptions 防止中断¶

In certain situations it may be necessary for the current thread to prevent ISRs from executing while it is performing time-sensitive or critical section operations.

A thread may temporarily prevent all IRQ handling in the system using an IRQ lock. This lock can be applied even when it is already in effect, so routines can use it without having to know if it is already in effect. The thread must unlock its IRQ lock the same number of times it was locked before interrupts can be once again processed by the kernel while the thread is running.

Important 重要事项

The IRQ lock is thread-specific. If thread A locks out interrupts then performs an operation that puts itself to sleep (e.g. sleeping for N milliseconds), the thread’s IRQ lock no longer applies once thread A is swapped out and the next ready thread B starts to run.

IRQ 锁是线程特定的。如果线程 A 锁定了中断，然后执行一个使自己进入休眠的操作(例如休眠 N 毫秒)，那么一旦线程 A 被换出、下一个就绪线程 B 开始运行，线程 A 的 IRQ 锁就不再生效。

This means that interrupts can be processed while thread B is running unless thread B has also locked out interrupts using its own IRQ lock. (Whether interrupts can be processed while the kernel is switching between two threads that are using the IRQ lock is architecture-specific.)

When thread A eventually becomes the current thread once again, the kernel re-establishes thread A’s IRQ lock. This ensures thread A won’t be interrupted until it has explicitly unlocked its IRQ lock.

If thread A does not sleep but does make a higher-priority thread B ready, the IRQ lock will inhibit any preemption that would otherwise occur. Thread B will not run until the next reschedule point reached after releasing the IRQ lock.

Alternatively, a thread may temporarily disable a specified IRQ so its associated ISR does not execute when the IRQ is signaled. The IRQ must be subsequently enabled to permit the ISR to execute.

Important 重要事项

Disabling an IRQ prevents all threads in the system from being preempted by the associated ISR, not just the thread that disabled the IRQ.

#### Zero Latency Interrupts 零延迟中断¶

Preventing interruptions by applying an IRQ lock may increase the observed interrupt latency. A high interrupt latency, however, may not be acceptable for certain low-latency use-cases.

The kernel addresses such use-cases by allowing interrupts with critical latency constraints to execute at a priority level that cannot be blocked by interrupt locking. These interrupts are defined as zero-latency interrupts. The support for zero-latency interrupts requires CONFIG_ZERO_LATENCY_IRQS to be enabled. In addition to that, the flag IRQ_ZERO_LATENCY must be passed to IRQ_CONNECT or IRQ_DIRECT_CONNECT macros to configure the particular interrupt with zero latency.

Zero-latency interrupts are expected to be used to manage hardware events directly, and not to interoperate with the kernel code at all. They should treat all kernel APIs as undefined behavior (i.e. an application that uses the APIs inside a zero-latency interrupt context is responsible for directly verifying correct behavior). Zero-latency interrupts may not modify any data inspected by kernel APIs invoked from normal Zephyr contexts and shall not generate exceptions that need to be handled synchronously (e.g. kernel panic).

Important 重要事项

Zero-latency interrupts are supported on an architecture-specific basis. The feature is currently implemented in the ARM Cortex-M architecture variant.

An ISR should execute quickly to ensure predictable system operation. If time consuming processing is required the ISR should offload some or all processing to a thread, thereby restoring the kernel’s ability to respond to other interrupts.

ISR 应该快速执行，以确保可预测的系统操作。如果需要耗时的处理，ISR 应该将部分或全部处理转移到一个线程中，从而恢复内核响应其他中断的能力。

• An ISR can signal a helper thread to do interrupt-related processing using a kernel object, such as a FIFO, LIFO, or semaphore.

ISR 可以通过使用内核对象(如 FIFO、 LIFO 或信号量)向辅助线程发出信号，让它执行与中断相关的处理。

• An ISR can instruct the system workqueue thread to execute a work item. (See Workqueue Threads.)

ISR 可以指示系统工作队列线程执行一个工作项。(参见 Workqueue Threads。)

When an ISR offloads work to a thread, there is typically a single context switch to that thread when the ISR completes, allowing interrupt-related processing to continue almost immediately. However, depending on the priority of the thread handling the offload, it is possible that the currently executing cooperative thread or other higher-priority threads may execute before the thread handling the offload is scheduled.

## Implementation 实现¶

### Defining a regular ISR 定义常规 ISR¶

An ISR is defined at runtime by calling IRQ_CONNECT. It must then be enabled by calling irq_enable().

ISR 在运行时通过调用 IRQ_CONNECT 来定义，然后必须通过调用 irq_enable()来启用。

Important 重要事项

IRQ_CONNECT() is not a C function and does some inline assembly magic behind the scenes. All its arguments must be known at build time. Drivers that have multiple instances may need to define per-instance config functions to configure each instance of the interrupt.

IRQ_CONNECT() 不是一个 C 函数，它在幕后使用了一些内联汇编技巧。它的所有参数都必须在构建时已知。具有多个实例的驱动程序可能需要为每个实例定义配置函数，以配置各实例的中断。

The following code defines and enables an ISR.

#define MY_DEV_IRQ  24       /* device uses IRQ 24 */
#define MY_DEV_PRIO  2       /* device uses interrupt priority 2 */
/* argument passed to my_isr(), in this case a pointer to the device */
#define MY_ISR_ARG  DEVICE_GET(my_device)
#define MY_IRQ_FLAGS 0       /* IRQ flags */

void my_isr(void *arg)
{
    ... /* ISR code */
}

void my_isr_installer(void)
{
    ...
    IRQ_CONNECT(MY_DEV_IRQ, MY_DEV_PRIO, my_isr, MY_ISR_ARG, MY_IRQ_FLAGS);
    irq_enable(MY_DEV_IRQ);
    ...
}


Since the IRQ_CONNECT macro requires that all its parameters be known at build time, in some cases this may not be acceptable. It is also possible to install interrupts at runtime with irq_connect_dynamic(). It is used in exactly the same way as IRQ_CONNECT:

void my_isr_installer(void)
{
    ...
    irq_connect_dynamic(MY_DEV_IRQ, MY_DEV_PRIO, my_isr, MY_ISR_ARG,
                        MY_IRQ_FLAGS);
    irq_enable(MY_DEV_IRQ);
    ...
}


Dynamic interrupts require the CONFIG_DYNAMIC_INTERRUPTS option to be enabled. Removing or re-configuring a dynamic interrupt is currently unsupported.

### Defining a ‘direct’ ISR 定义“直接”ISR¶

Regular Zephyr interrupts introduce some overhead which may be unacceptable for some low-latency use-cases. Specifically:

• The argument to the ISR is retrieved and passed to the ISR

检索到 ISR 的参数并将其传递给 ISR

• If power management is enabled and the system was idle, all the hardware will be resumed from low-power state before the ISR is executed, which can be very time-consuming

如果启用了电源管理并且系统处于空闲状态，那么在执行 ISR 之前，所有的硬件都将从低功耗状态恢复，这可能非常耗时

• Although some architectures will do this in hardware, other architectures need to switch to the interrupt stack in code

尽管一些体系结构在硬件上可以做到这一点，但是其他体系结构需要在代码中切换到中断堆栈

• After the interrupt is serviced, the OS then performs some logic to potentially make a scheduling decision.

在中断服务之后，操作系统执行一些逻辑来潜在地做出调度决策。

Zephyr supports so-called ‘direct’ interrupts, which are installed via IRQ_DIRECT_CONNECT. These direct interrupts have some special implementation requirements and a reduced feature set; see the definition of IRQ_DIRECT_CONNECT for details.

Zephyr 支持所谓的“直接”中断，它们通过 IRQ_DIRECT_CONNECT 安装。这些直接中断有一些特殊的实现要求和缩减的特性集；详细信息请参阅 IRQ_DIRECT_CONNECT 的定义。

The following code demonstrates a direct ISR:

#define MY_DEV_IRQ  24       /* device uses IRQ 24 */
#define MY_DEV_PRIO  2       /* device uses interrupt priority 2 */
/* argument passed to my_isr(), in this case a pointer to the device */
#define MY_IRQ_FLAGS 0       /* IRQ flags */

ISR_DIRECT_DECLARE(my_isr)
{
    do_stuff();
    ISR_DIRECT_PM(); /* PM done after servicing interrupt for best latency */
    return 1; /* We should check if scheduling decision should be made */
}

void my_isr_installer(void)
{
    ...
    IRQ_DIRECT_CONNECT(MY_DEV_IRQ, MY_DEV_PRIO, my_isr, MY_IRQ_FLAGS);
    irq_enable(MY_DEV_IRQ);
    ...
}


Installation of dynamic direct interrupts is supported on an architecture-specific basis. (The feature is currently implemented in ARM Cortex-M architecture variant. Dynamic direct interrupts feature is exposed to the user via an ARM-only API.)

### Implementation Details 实施细节¶

Interrupt tables are set up at build time using some special build tools. The details laid out here apply to all architectures except x86, which are covered in the x86 Details section below.

Any invocation of IRQ_CONNECT will declare an instance of struct _isr_list which is placed in a special .intList section:

struct _isr_list {
    /** IRQ line number */
    int32_t irq;
    /** Flags for this IRQ, see ISR_FLAG_* definitions */
    int32_t flags;
    /** ISR to call */
    void *func;
    /** Parameter for non-direct IRQs */
    void *param;
};


Zephyr is built in two phases; the first phase of the build produces ${ZEPHYR_PREBUILT_EXECUTABLE}.elf which contains all the entries in the .intList section preceded by a header:

Zephyr 分两个阶段构建；构建的第一阶段生成 ${ZEPHYR_PREBUILT_EXECUTABLE}.elf，其中包含 .intList 段中的所有条目，条目前带有一个头部：

struct {
    void *spurious_irq_handler;
    void *sw_irq_handler;
    uint32_t num_isrs;
    uint32_t num_vectors;
    struct _isr_list isrs[];  <- of size num_isrs
};


This data consisting of the header and instances of struct _isr_list inside ${ZEPHYR_PREBUILT_EXECUTABLE}.elf is then used by the gen_isr_tables.py script to generate a C file defining a vector table and software ISR table that are then compiled and linked into the final application.

${ZEPHYR_PREBUILT_EXECUTABLE}.elf 中由头部和 struct _isr_list 实例组成的这些数据，随后被 gen_isr_tables.py 脚本用来生成一个定义向量表和软件 ISR 表的 C 文件，该文件随后被编译并链接到最终应用程序中。

The priority level of any interrupt is not encoded in these tables, instead IRQ_CONNECT also has a runtime component which programs the desired priority level of the interrupt to the interrupt controller. Some architectures do not support the notion of interrupt priority, in which case the priority argument is ignored.

#### Vector Table 矢量表¶

A vector table is generated when CONFIG_GEN_IRQ_VECTOR_TABLE is enabled. This data structure is used natively by the CPU and is simply an array of function pointers, where each element n corresponds to the IRQ handler for IRQ line n, and the function pointers are:

1. For ‘direct’ interrupts declared with IRQ_DIRECT_CONNECT, the handler function will be placed here.

对于用 IRQ_DIRECT_CONNECT 声明的“直接”中断，处理程序函数将放在这里。

2. For regular interrupts declared with IRQ_CONNECT, the address of the common software IRQ handler is placed here. This code does common kernel interrupt bookkeeping and looks up the ISR and parameter from the software ISR table.

对于使用 IRQ_CONNECT 声明的常规中断，通用软件 IRQ 处理程序的地址放在这里。这段代码执行常见的内核中断簿记，并从软件 ISR 表中查找 ISR 和参数。

3. For interrupt lines that are not configured at all, the address of the spurious IRQ handler will be placed here. The spurious IRQ handler causes a system fatal error if encountered.

对于完全没有配置的中断线，这里放置伪中断(spurious IRQ)处理程序的地址。伪中断处理程序一旦被触发，将导致系统致命错误。

Some architectures (such as the Nios II internal interrupt controller) have a common entry point for all interrupts and do not support a vector table, in which case the CONFIG_GEN_IRQ_VECTOR_TABLE option should be disabled.

Some architectures may reserve some initial vectors for system exceptions and declare this in a table elsewhere, in which case CONFIG_GEN_IRQ_START_VECTOR needs to be set to properly offset the indices in the table.

#### SW ISR Table ¶

This is an array of struct _isr_table_entry:

struct _isr_table_entry {
    void *arg;
    void (*isr)(void *);
};


This is used by the common software IRQ handler to look up the ISR and its argument and execute it. The active IRQ line is looked up in an interrupt controller register and used to index this table.

#### x86 Details x86 细节¶

The x86 architecture has a special type of vector table called the Interrupt Descriptor Table (IDT) which must be laid out in a certain way per the x86 processor documentation. It is still fundamentally a vector table, and the arch/x86/gen_idt.py tool uses the .intList section to create it. However, on APIC-based systems the indexes in the vector table do not correspond to the IRQ line. The first 32 vectors are reserved for CPU exceptions, and all remaining vectors (up to index 255) correspond to the priority level, in groups of 16. In this scheme, interrupts of priority level 0 will be placed in vectors 32-47, level 1 in 48-63, and so forth. When the arch/x86/gen_idt.py tool constructs the IDT and configures an interrupt, it will look for a free vector in the appropriate range for the requested priority level and set the handler there.

x86 体系结构具有一种特殊类型的向量表，称为中断描述符表(Interrupt Descriptor Table，IDT)，根据 x86 处理器文档，必须以特定方式对其进行布局。它本质上仍然是一个向量表，arch/x86/gen_idt.py 工具使用 .intList 段来创建它。但是，在基于 APIC 的系统中，向量表中的索引并不对应于 IRQ 线。前 32 个向量是为 CPU 异常保留的，所有剩下的向量(直到索引 255)都以 16 个为一组对应于优先级。在这个方案中，优先级 0 的中断将被放置在向量 32-47 中，优先级 1 放在 48-63 中，依此类推。当 arch/x86/gen_idt.py 工具构造 IDT 并配置一个中断时，它会在所请求优先级对应的范围内寻找一个空闲向量，并在那里设置处理程序。

On x86 when an interrupt or exception vector is executed by the CPU, there is no foolproof way to determine which vector was fired, so a software ISR table indexed by IRQ line is not used. Instead, the IRQ_CONNECT call creates a small assembly language function which calls the common interrupt code in _interrupt_enter() with the ISR and parameter as arguments. It is the address of this assembly interrupt stub which gets placed in the IDT. For interrupts declared with IRQ_DIRECT_CONNECT the parameterless ISR is placed directly in the IDT.

On systems where the position in the vector table corresponds to the interrupt’s priority level, the interrupt controller needs to know at runtime what vector is associated with an IRQ line. arch/x86/gen_idt.py additionally creates an _irq_to_interrupt_vector array which maps an IRQ line to its configured vector in the IDT. This is used at runtime by IRQ_CONNECT to program the IRQ-to-vector association in the interrupt controller.

For dynamic interrupts, the build must generate some 4-byte dynamic interrupt stubs, one stub per dynamic interrupt in use. The number of stubs is controlled by the CONFIG_X86_DYNAMIC_IRQ_STUBS option. Each stub pushes a unique identifier which is then used to fetch the appropriate handler function and parameter out of a table populated when the dynamic interrupt was connected.

## Suggested Uses 建议的用途¶

Use a regular or direct ISR to perform interrupt processing that requires a very rapid response, and can be done quickly without blocking.

Note

Interrupt processing that is time consuming, or involves blocking, should be handed off to a thread. See Offloading ISR Work for a description of various techniques that can be used in an application.

## Configuration Options 配置选项¶

Related configuration options:

Additional architecture-specific and device-specific configuration options also exist.

# Polling API 轮询 API¶

The polling API is used to wait concurrently for any one of multiple conditions to be fulfilled.

## Concepts 概念¶

The polling API’s main function is k_poll(), which is very similar in concept to the POSIX poll() function, except that it operates on kernel objects rather than on file descriptors.

The polling API allows a single thread to wait concurrently for one or more conditions to be fulfilled without actively looking at each one individually.

There is a limited set of such conditions:

• a semaphore becomes available

一个信号量变得可用

• a kernel FIFO contains data ready to be retrieved

内核的 FIFO 包含可以检索的数据

• a poll signal is raised

发出了一个轮询信号(poll signal)

A thread that wants to wait on multiple conditions must define an array of poll events, one for each condition.

All events in the array must be initialized before the array can be polled on.

Each event must specify which type of condition must be satisfied so that its state is changed to signal the requested condition has been met.

Each event must specify the kernel object on which it wants the condition to be satisfied.

Each event must specify which mode of operation is used when the condition is satisfied.

Each event can optionally specify a tag to group multiple events together, to the user’s discretion.

Apart from the kernel objects, there is also a poll signal pseudo-object type that can be directly signaled.

The k_poll() function returns as soon as one of the conditions it is waiting for is fulfilled. It is possible for more than one to be fulfilled when k_poll() returns, if they were fulfilled before k_poll() was called, or due to the preemptive multi-threading nature of the kernel. The caller must look at the state of all the poll events in the array to figure out which ones were fulfilled and what actions to take.

Currently, there is only one mode of operation available: the object is not acquired. As an example, this means that when k_poll() returns and the poll event states that the semaphore is available, the caller of k_poll() must then invoke k_sem_take() to take ownership of the semaphore. If the semaphore is contested, there is no guarantee that it will still be available when k_sem_take() is called.

## Implementation 实施¶

### Using k_poll() 使用 k_poll()¶

The main API is k_poll(), which operates on an array of poll events of type k_poll_event. Each entry in the array represents one event; a call to k_poll() waits for any of their conditions to be fulfilled.

They can be initialized using either the runtime initializers K_POLL_EVENT_INITIALIZER() or k_poll_event_init(), or the static initializer K_POLL_EVENT_STATIC_INITIALIZER(). An object that matches the type specified must be passed to the initializers. The mode must be set to K_POLL_MODE_NOTIFY_ONLY. The state must be set to K_POLL_STATE_NOT_READY (the initializers take care of this). The user tag is optional and completely opaque to the API: it is there to help a user to group similar events together. Being optional, it is passed to the static initializer, but not the runtime ones for performance reasons. If using runtime initializers, the user must set it separately in the k_poll_event data structure. If an event in the array is to be ignored, most likely temporarily, its type can be set to K_POLL_TYPE_IGNORE.

struct k_poll_event events[2] = {
    K_POLL_EVENT_STATIC_INITIALIZER(K_POLL_TYPE_SEM_AVAILABLE,
                                    K_POLL_MODE_NOTIFY_ONLY,
                                    &my_sem, 0),
    K_POLL_EVENT_STATIC_INITIALIZER(K_POLL_TYPE_FIFO_DATA_AVAILABLE,
                                    K_POLL_MODE_NOTIFY_ONLY,
                                    &my_fifo, 0),
};


or at runtime

struct k_poll_event events[2];
void some_init(void)
{
    k_poll_event_init(&events[0],
                      K_POLL_TYPE_SEM_AVAILABLE,
                      K_POLL_MODE_NOTIFY_ONLY,
                      &my_sem);

    k_poll_event_init(&events[1],
                      K_POLL_TYPE_FIFO_DATA_AVAILABLE,
                      K_POLL_MODE_NOTIFY_ONLY,
                      &my_fifo);

    // tags are left uninitialized if unused
}


After the events are initialized, the array can be passed to k_poll(). A timeout can be specified to wait only for a specified amount of time, or the special values K_NO_WAIT and K_FOREVER to either not wait or wait until an event condition is satisfied and not sooner.

A list of pollers is maintained on each semaphore or FIFO, and as many events as the application wants can wait on it. Note that waiters are served in first-come-first-served order, not in priority order.

In case of success, k_poll() returns 0. If it times out, it returns -EAGAIN.

// assume there is no contention on this semaphore and FIFO
// -EADDRINUSE will not occur; the semaphore and/or data will be available

void do_stuff(void)
{
    rc = k_poll(events, 2, 1000);
    if (rc == 0) {
        if (events[0].state == K_POLL_STATE_SEM_AVAILABLE) {
            k_sem_take(events[0].sem, 0);
        } else if (events[1].state == K_POLL_STATE_FIFO_DATA_AVAILABLE) {
            data = k_fifo_get(events[1].fifo, 0);
            // handle data
        }
    } else {
        // handle timeout
    }
}


When k_poll() is called in a loop, the events state must be reset to K_POLL_STATE_NOT_READY by the user.

void do_stuff(void)
{
    for (;;) {
        rc = k_poll(events, 2, K_FOREVER);
        if (events[0].state == K_POLL_STATE_SEM_AVAILABLE) {
            k_sem_take(events[0].sem, 0);
        } else if (events[1].state == K_POLL_STATE_FIFO_DATA_AVAILABLE) {
            data = k_fifo_get(events[1].fifo, 0);
            // handle data
        }
        events[0].state = K_POLL_STATE_NOT_READY;
        events[1].state = K_POLL_STATE_NOT_READY;
    }
}


### Using k_poll_signal_raise() 使用 k_poll_signal_raise()¶

One of the types of events is K_POLL_TYPE_SIGNAL: this is a “direct” signal to a poll event. This can be seen as a lightweight binary semaphore only one thread can wait for.

A poll signal is a separate object of type k_poll_signal that must be attached to a k_poll_event, similar to a semaphore or FIFO. It must first be initialized either via K_POLL_SIGNAL_INITIALIZER() or k_poll_signal_init().

struct k_poll_signal signal;
void do_stuff(void)
{
    k_poll_signal_init(&signal);
}


It is signaled via the k_poll_signal_raise() function. This function takes a user result parameter that is opaque to the API and can be used to pass extra information to the thread waiting on the event.

struct k_poll_signal signal;

void do_stuff(void)
{
    k_poll_signal_init(&signal);

    struct k_poll_event events[1] = {
        K_POLL_EVENT_INITIALIZER(K_POLL_TYPE_SIGNAL,
                                 K_POLL_MODE_NOTIFY_ONLY,
                                 &signal),
    };

    k_poll(events, 1, K_FOREVER);

    if (events[0].signal->result == 0x1337) {
        // A-OK!
    } else {
        // weird error
    }
}

void signal_do_stuff(void)
{
    k_poll_signal_raise(&signal, 0x1337);
}


If the signal is to be polled in a loop, both its event state and its signaled field must be reset on each iteration if it has been signaled.

struct k_poll_signal signal;
void do_stuff(void)
{
    k_poll_signal_init(&signal);

    struct k_poll_event events[1] = {
        K_POLL_EVENT_INITIALIZER(K_POLL_TYPE_SIGNAL,
                                 K_POLL_MODE_NOTIFY_ONLY,
                                 &signal),
    };

    for (;;) {
        k_poll(events, 1, K_FOREVER);

        if (events[0].signal->result == 0x1337) {
            // A-OK!
        } else {
            // weird error
        }

        events[0].signal->signaled = 0;
        events[0].state = K_POLL_STATE_NOT_READY;
    }
}


Note that poll signals are not internally synchronized. A k_poll() call that is passed a signal will return after any code in the system calls k_poll_signal_raise(). But if the signal is being externally managed and reset via k_poll_signal_init(), it is possible that by the time the application checks, the event state may no longer be equal to K_POLL_STATE_SIGNALED, and a (naive) application will miss events. Best practice is always to reset the signal only from within the thread invoking the k_poll() loop, or else to use some other event type which tracks event counts: semaphores and FIFOs are more error-proof in this sense because they can’t “miss” events, architecturally.

## Suggested Uses 建议的用途¶

Use k_poll() to consolidate multiple threads that would be pending on one object each, saving possibly large amounts of stack space.

Use a poll signal as a lightweight binary semaphore if only one thread pends on it.

Note

Because objects are only signaled if no other thread is waiting for them to become available, and only one thread can poll on a specific object, polling is best used when objects are not subject to contention between multiple threads, basically when a single thread operates as a main “server” or “dispatcher” for multiple objects and is the only one trying to acquire these objects.

## Configuration Options 配置选项¶

Related configuration options:

# Semaphores 信号量¶

A semaphore is a kernel object that implements a traditional counting semaphore.

## Concepts 概念¶

Any number of semaphores can be defined (limited only by available RAM). Each semaphore is referenced by its memory address.

A semaphore has the following key properties:

• A count that indicates the number of times the semaphore can be taken. A count of zero indicates that the semaphore is unavailable.

一个计数器，指示可以使用信号量的次数。计数为零表示信号量不可用。

• A limit that indicates the maximum value the semaphore’s count can reach.

指示信号量计数器可达到的最大值的限制。

A semaphore must be initialized before it can be used. Its count must be set to a non-negative value that is less than or equal to its limit.

A semaphore may be given by a thread or an ISR. Giving the semaphore increments its count, unless the count is already equal to the limit.

A semaphore may be taken by a thread. Taking the semaphore decrements its count, unless the semaphore is unavailable (i.e. at zero). When a semaphore is unavailable a thread may choose to wait for it to be given. Any number of threads may wait on an unavailable semaphore simultaneously. When the semaphore is given, it is taken by the highest priority thread that has waited longest.

Note

You may initialize a “full” semaphore (count equal to limit) to limit the number of threads able to execute the critical section at the same time. You may also initialize an empty semaphore (count equal to 0, with a limit greater than 0) to create a gate through which no waiting thread may pass until the semaphore is incremented. All standard use cases of the common semaphore are supported.

Note

The kernel does allow an ISR to take a semaphore, however the ISR must not attempt to wait if the semaphore is unavailable.

## Implementation 实施¶

### Defining a Semaphore 定义信号量¶

A semaphore is defined using a variable of type k_sem. It must then be initialized by calling k_sem_init().

The following code defines a semaphore, then configures it as a binary semaphore by setting its count to 0 and its limit to 1.

struct k_sem my_sem;

k_sem_init(&my_sem, 0, 1);


Alternatively, a semaphore can be defined and initialized at compile time by calling K_SEM_DEFINE.

The following code has the same effect as the code segment above.

K_SEM_DEFINE(my_sem, 0, 1);


### Giving a Semaphore 发送信号¶

A semaphore is given by calling k_sem_give().

The following code builds on the example above, and gives the semaphore to indicate that a unit of data is available for processing by a consumer thread.

void input_data_interrupt_handler(void *arg)
{
/* notify thread that data is available */
k_sem_give(&my_sem);

...
}


### Taking a Semaphore 使用信号灯¶

A semaphore is taken by calling k_sem_take().

The following code builds on the example above, and waits up to 50 milliseconds for the semaphore to be given. A warning is issued if the semaphore is not obtained in time.

void consumer_thread(void)
{
...

if (k_sem_take(&my_sem, K_MSEC(50)) != 0) {
printk("Input data not available!");
} else {
/* fetch available data */
...
}
...
}


## Suggested Uses 建议的用途¶

Use a semaphore to synchronize processing between producing and consuming threads or ISRs.

## Configuration Options 配置选项¶

Related configuration options:

• None.

没有。

# Mutexes 互斥¶

A mutex is a kernel object that implements a traditional reentrant mutex. A mutex allows multiple threads to safely share an associated hardware or software resource by ensuring mutually exclusive access to the resource.

## Concepts 概念¶

Any number of mutexes can be defined (limited only by available RAM). Each mutex is referenced by its memory address.

A mutex has the following key properties:

• A lock count that indicates the number of times the mutex has been locked by the thread that has locked it. A count of zero indicates that the mutex is unlocked.

锁定计数，指示持有该互斥对象的线程已将其锁定的次数。计数为零表示互斥对象处于解锁状态。

• An owning thread that identifies the thread that has locked the mutex, when it is locked.

拥有线程，在互斥对象处于锁定状态时，标识锁定它的那个线程。

A mutex must be initialized before it can be used. This sets its lock count to zero.

A thread that needs to use a shared resource must first gain exclusive rights to access it by locking the associated mutex. If the mutex is already locked by another thread, the requesting thread may choose to wait for the mutex to be unlocked.

After locking a mutex, the thread may safely use the associated resource for as long as needed; however, it is considered good practice to hold the lock for as short a time as possible to avoid negatively impacting other threads that want to use the resource. When the thread no longer needs the resource it must unlock the mutex to allow other threads to use the resource.

Any number of threads may wait on a locked mutex simultaneously. When the mutex becomes unlocked it is then locked by the highest-priority thread that has waited the longest.

Note

Mutex objects are not designed for use by ISRs.

### Reentrant Locking 可重入锁定¶

A thread is permitted to lock a mutex it has already locked. This allows the thread to access the associated resource at a point in its execution when the mutex may or may not already be locked.

A mutex that is repeatedly locked by a thread must be unlocked an equal number of times before the mutex becomes fully unlocked so it can be claimed by another thread.
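The owner/lock-count rule above can be modeled in plain, host-testable C. This is an illustrative sketch with invented names, not Zephyr's k_mutex implementation:

```c
#include <stdbool.h>

/* Illustrative model of a reentrant mutex's bookkeeping. */
struct mutex_model {
    int owner;                /* thread id of current owner, -1 if unlocked */
    unsigned int lock_count;  /* how many times the owner has locked it */
};

/* Lock: the owner may re-lock; another thread succeeds only when free.
 * (The real k_mutex_lock() would block instead of returning false.) */
bool mutex_model_lock(struct mutex_model *m, int tid)
{
    if (m->lock_count == 0) {
        m->owner = tid;
        m->lock_count = 1;
        return true;
    }
    if (m->owner == tid) {
        m->lock_count++;      /* reentrant lock by the same owner */
        return true;
    }
    return false;
}

/* Unlock: the mutex is only fully released when the count hits zero. */
void mutex_model_unlock(struct mutex_model *m, int tid)
{
    if (m->owner == tid && m->lock_count > 0 && --m->lock_count == 0) {
        m->owner = -1;
    }
}
```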

### Priority Inheritance 优先级继承¶

The thread that has locked a mutex is eligible for priority inheritance. This means the kernel will temporarily elevate the thread’s priority if a higher priority thread begins waiting on the mutex. This allows the owning thread to complete its work and release the mutex more rapidly by executing at the same priority as the waiting thread. Once the mutex has been unlocked, the unlocking thread resets its priority to the level it had before locking that mutex.

Note

The CONFIG_PRIORITY_CEILING configuration option limits how high the kernel can raise a thread’s priority due to priority inheritance. The default value of 0 permits unlimited elevation.

When two or more threads wait on a mutex held by a lower priority thread, the kernel adjusts the owning thread’s priority each time a thread begins waiting (or gives up waiting). When the mutex is eventually unlocked, the unlocking thread’s priority correctly reverts to its original non-elevated priority.

The kernel does not fully support priority inheritance when a thread holds two or more mutexes simultaneously. This situation can result in the thread’s priority not reverting to its original non-elevated priority when all mutexes have been released. It is recommended that a thread lock only a single mutex at a time when multiple mutexes are shared between threads of different priorities.
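Using Zephyr's convention that a numerically lower value means higher priority, the elevation rule can be sketched in plain C. This is a simplified model: treating a ceiling of 0 as "no limit" follows the note above, but the exact clamping behavior shown here is an illustrative reading, not the kernel's code:

```c
/* Effective priority of a mutex owner under priority inheritance.
 * Lower numeric value = higher priority, as in Zephyr. The owner runs
 * at the higher (numerically lower) of its own priority and the best
 * waiter's priority, but is never raised above the ceiling
 * (0 modeling CONFIG_PRIORITY_CEILING's "unlimited" default). */
int effective_priority(int owner_prio, int best_waiter_prio, int ceiling)
{
    int prio = best_waiter_prio < owner_prio ? best_waiter_prio : owner_prio;

    if (ceiling != 0 && prio < ceiling) {
        prio = ceiling;  /* clamp the elevation at the ceiling */
    }
    return prio;
}
```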

## Implementation 实施¶

### Defining a Mutex 定义互斥对象¶

A mutex is defined using a variable of type k_mutex. It must then be initialized by calling k_mutex_init().

The following code defines and initializes a mutex.

struct k_mutex my_mutex;

k_mutex_init(&my_mutex);


Alternatively, a mutex can be defined and initialized at compile time by calling K_MUTEX_DEFINE.

The following code has the same effect as the code segment above.

K_MUTEX_DEFINE(my_mutex);


### Locking a Mutex 锁定互斥对象¶

A mutex is locked by calling k_mutex_lock().

The following code builds on the example above, and waits indefinitely for the mutex to become available if it is already locked by another thread.

k_mutex_lock(&my_mutex, K_FOREVER);


The following code waits up to 100 milliseconds for the mutex to become available, and gives a warning if the mutex does not become available.

if (k_mutex_lock(&my_mutex, K_MSEC(100)) == 0) {
    /* mutex successfully locked */
} else {
    printf("Cannot lock XYZ display\n");
}


### Unlocking a Mutex 解锁互斥对象¶

A mutex is unlocked by calling k_mutex_unlock().

The following code builds on the example above, and unlocks the mutex that was previously locked by the thread.

k_mutex_unlock(&my_mutex);


## Suggested Uses 建议的用途¶

Use a mutex to provide exclusive access to a resource, such as a physical device.

## Configuration Options 配置选项¶

Related configuration options:

# Condition Variables 条件变量¶

A condition variable is a synchronization primitive that enables threads to wait until a particular condition occurs.

## Concepts 概念¶

Any number of condition variables can be defined (limited only by available RAM). Each condition variable is referenced by its memory address.

To wait for a condition to become true, a thread can make use of a condition variable.

A condition variable is basically a queue of threads that threads can put themselves on when some state of execution (i.e., some condition) is not as desired (by waiting on the condition). The function k_condvar_wait() atomically performs the following steps:

1. Releases the last acquired mutex.

释放最后获取的互斥对象。

2. Puts the current thread in the condition variable queue.

将当前线程放入条件变量队列中。

Some other thread, when it changes said state, can then wake one (or more) of those waiting threads by signaling on the condition using k_condvar_signal() or k_condvar_broadcast(). The woken thread then:

1. Re-acquires the mutex previously released.

重新获取以前释放的互斥对象。

2. Returns from k_condvar_wait().

从 k_condvar_wait()返回。

A condition variable must be initialized before it can be used.

## Implementation 实施¶

### Defining a Condition Variable 定义条件变量¶

A condition variable is defined using a variable of type k_condvar. It must then be initialized by calling k_condvar_init().

The following code defines a condition variable:

struct k_condvar my_condvar;

k_condvar_init(&my_condvar);


Alternatively, a condition variable can be defined and initialized at compile time by calling K_CONDVAR_DEFINE.

The following code has the same effect as the code segment above.

K_CONDVAR_DEFINE(my_condvar);


### Waiting on a Condition Variable 等待条件变量¶

A thread can wait on a condition by calling k_condvar_wait().

The following code waits on the condition variable.

K_MUTEX_DEFINE(mutex);
K_CONDVAR_DEFINE(condvar);

void main(void)
{
    k_mutex_lock(&mutex, K_FOREVER);

    /* block this thread until the condition is signaled; while
     * blocked, the mutex is released, then re-acquired before this
     * thread is woken up and the call returns.
     */
    k_condvar_wait(&condvar, &mutex, K_FOREVER);
    ...
    k_mutex_unlock(&mutex);
}


### Signaling a Condition Variable 给条件变量发信号¶

A condition variable is signaled on by calling k_condvar_signal() for one thread or by calling k_condvar_broadcast() for multiple threads.

The following code builds on the example above.

void worker_thread(void)
{
    k_mutex_lock(&mutex, K_FOREVER);

    /*
     * Do some work and fulfill the condition
     */
    ...
    ...
    k_condvar_signal(&condvar);
    k_mutex_unlock(&mutex);
}


## Suggested Uses 建议的用途¶

Use condition variables with a mutex to signal changing states (conditions) from one thread to another thread. Condition variables are not the condition itself and they are not events. The condition is contained in the surrounding programming logic.

Mutexes alone are not designed for use as a notification/synchronization mechanism. They are meant to provide mutually exclusive access to a shared resource only.

## Configuration Options 配置选项¶

Related configuration options:

• None.

没有。

# Events 事件¶

An event object is a kernel object that implements traditional events.

## Concepts 概念¶

Any number of event objects can be defined (limited only by available RAM). Each event object is referenced by its memory address. One or more threads may wait on an event object until the desired set of events has been delivered to the event object. When new events are delivered to the event object, all threads whose wait conditions have been satisfied become ready simultaneously.

An event object has the following key properties:

• A 32-bit value that tracks which events have been delivered to it.

一个32位的值，用于跟踪已经传递给它的事件。

An event object must be initialized before it can be used.

Events may be delivered by a thread or an ISR. When delivering events, the events may either overwrite the existing set of events or add to them in a bitwise fashion. When overwriting the existing set of events, this is referred to as setting. When adding to them in a bitwise fashion, this is referred to as posting. Both posting and setting events have the potential to fulfill match conditions of multiple threads waiting on the event object. All threads whose match conditions have been met are made active at the same time.

Threads may wait on one or more events. They may either wait for all of the requested events, or for any of them. Furthermore, threads making a wait request have the option of resetting the current set of events tracked by the event object prior to waiting. Care must be taken with this option when multiple threads wait on the same event object.
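The difference between setting, posting, and the two wait modes comes down to simple operations on the 32-bit event value. The following host-testable sketch uses invented names, not the k_event API, purely to illustrate the semantics:

```c
#include <stdbool.h>
#include <stdint.h>

/* Set: overwrite the tracked events entirely. */
uint32_t events_set(uint32_t tracked, uint32_t new_events)
{
    (void)tracked;            /* the old value is discarded */
    return new_events;
}

/* Post: add events to the tracked set with a bitwise OR. */
uint32_t events_post(uint32_t tracked, uint32_t new_events)
{
    return tracked | new_events;
}

/* Wait-for-any is satisfied by a single matching bit... */
bool events_match_any(uint32_t tracked, uint32_t wanted)
{
    return (tracked & wanted) != 0;
}

/* ...while wait-for-all requires every requested bit to be present. */
bool events_match_all(uint32_t tracked, uint32_t wanted)
{
    return (tracked & wanted) == wanted;
}
```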

Note

The kernel does allow an ISR to query an event object, however the ISR must not attempt to wait for the events.

## Implementation 实施¶

### Defining an Event Object 定义事件对象¶

An event object is defined using a variable of type k_event. It must then be initialized by calling k_event_init().

The following code defines an event object.

struct k_event my_event;

k_event_init(&my_event);


Alternatively, an event object can be defined and initialized at compile time by calling K_EVENT_DEFINE.

The following code has the same effect as the code segment above.

K_EVENT_DEFINE(my_event);


### Setting Events 设置事件¶

Events in an event object are set by calling k_event_set().

The following code builds on the example above, and sets the events tracked by the event object to 0x001.

void input_available_interrupt_handler(void *arg)
{
    /* notify threads that data is available */

    k_event_set(&my_event, 0x001);

    ...
}


### Posting Events 发布事件¶

Events are posted to an event object by calling k_event_post().

The following code builds on the example above, and posts a set of events to the event object.

void input_available_interrupt_handler(void *arg)
{
    ...

    /* notify threads that more data is available */

    k_event_post(&my_event, 0x120);

    ...
}


### Waiting for Events 等待事件¶

Threads wait for events by calling k_event_wait().

The following code builds on the example above, and waits up to 50 milliseconds for any of the specified events to be posted. A warning is issued if none of the events are posted in time.

void consumer_thread(void)
{
    uint32_t events;

    events = k_event_wait(&my_event, 0xFFF, false, K_MSEC(50));
    if (events == 0) {
        printk("No input devices are available!");
    } else {
        /* Access the desired input device(s) */
        ...
    }
    ...
}


Alternatively, the consumer thread may desire to wait for all the events before continuing.

void consumer_thread(void)
{
    uint32_t events;

    events = k_event_wait_all(&my_event, 0x121, false, K_MSEC(50));
    if (events == 0) {
        printk("At least one input device is not available!");
    } else {
        /* Access the desired input devices */
        ...
    }
    ...
}


## Suggested Uses 建议的用途¶

Use events to indicate that a set of conditions have occurred.

Use events to pass small amounts of data to multiple threads at once.

## Configuration Options 配置选项¶

Related configuration options:

# Symmetric Multiprocessing 对称多处理机¶

On multiprocessor architectures, Zephyr supports the use of multiple physical CPUs running Zephyr application code. This support is “symmetric” in the sense that no specific CPU is treated specially by default. Any processor is capable of running any Zephyr thread, with access to all standard Zephyr APIs supported.

No special application code needs to be written to take advantage of this feature. If there are two Zephyr application threads runnable on a supported dual processor device, they will both run simultaneously.

SMP configuration is controlled under the CONFIG_SMP kconfig variable. This must be set to “y” to enable SMP features, otherwise a uniprocessor kernel will be built. In general the platform default will have enabled this anywhere it’s supported. When enabled, the number of physical CPUs available is visible at build time as CONFIG_MP_NUM_CPUS. Likewise, the default for this will be the number of available CPUs on the platform and it is not expected that typical apps will change it. But it is legal and supported to set this to a smaller (but obviously not larger) number for special purposes (e.g. for testing, or to reserve a physical CPU for running non-Zephyr code).

SMP 配置由 CONFIG_SMP kconfig 变量控制。必须将其设置为“y”以启用 SMP 特性，否则将构建单处理器内核。一般来说，在任何支持 SMP 的平台上，平台默认配置都会启用它。启用后，可用的物理 CPU 数量在构建时以 CONFIG_MP_NUM_CPUS 的形式可见。同样，其默认值是平台上可用的 CPU 数量，通常应用程序不需要改动它。但是，出于特殊目的（例如测试，或为运行非 Zephyr 代码保留一个物理 CPU），将其设置为较小（但显然不能更大）的数字是合法且受支持的。

## Synchronization 同步¶

At the application level, core Zephyr IPC and synchronization primitives all behave identically under an SMP kernel. For example, semaphores used to implement blocking mutual exclusion continue to be a proper application choice.

At the lowest level, however, Zephyr code has often used the irq_lock()/irq_unlock() primitives to implement fine grained critical sections using interrupt masking. These APIs continue to work via an emulation layer (see below), but the masking technique does not: the fact that your CPU will not be interrupted while you are in your critical section says nothing about whether a different CPU will be running simultaneously and be inspecting or modifying the same data!

### Spinlocks 自旋锁¶

SMP systems provide a more constrained k_spin_lock() primitive that not only masks interrupts locally, as done by irq_lock(), but also atomically validates that a shared lock variable has been modified before returning to the caller, “spinning” on the check if needed to wait for the other CPU to exit the lock. The default Zephyr implementation of k_spin_lock() and k_spin_unlock() is built on top of the pre-existing atomic_ layer (itself usually implemented using compiler intrinsics), though facilities exist for architectures to define their own for performance reasons.

SMP 系统提供了一个约束更强的 k_spin_lock() 原语，它不仅像 irq_lock() 那样在本地屏蔽中断，还会在返回调用者之前原子地确认一个共享锁变量已被修改，必要时在该检查上“自旋”，等待另一个 CPU 退出锁。k_spin_lock() 和 k_spin_unlock() 的默认 Zephyr 实现建立在已有的 atomic_ 层之上（该层本身通常用编译器内建函数实现），不过也提供了让各体系结构出于性能原因自行定义实现的机制。

One important difference between IRQ locks and spinlocks is that the earlier API was naturally recursive: the lock was global, so it was legal to acquire a nested lock inside of a critical section. Spinlocks are separable: you can have many locks for separate subsystems or data structures, preventing CPUs from contending on a single global resource. But that means that spinlocks must not be used recursively. Code that holds a specific lock must not try to re-acquire it or it will deadlock (it is perfectly legal to nest distinct spinlocks, however). A validation layer is available to detect and report bugs like this.

IRQ 锁和自旋锁之间的一个重要区别是，早期的 API 天然是可递归的：锁是全局的，因此在临界区内再次获取嵌套锁是合法的。自旋锁是可分离的：可以为不同的子系统或数据结构设置许多把锁，避免多个 CPU 争用单个全局资源。但这意味着自旋锁不能递归使用：持有某把锁的代码不能尝试重新获取它，否则会死锁（不过，嵌套使用不同的自旋锁是完全合法的）。有一个验证层可用于检测并报告这类错误。
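A test-and-set spinlock of the kind described can be sketched with C11 atomics. This is a simplified host model; Zephyr's actual k_spin_lock() also masks interrupts and returns a key, and the names here are illustrative:

```c
#include <stdatomic.h>
#include <stdbool.h>

struct spinlock_model {
    atomic_flag locked;
};

/* Spin until the flag is acquired; atomic_flag_test_and_set() returns
 * the previous value, so "false" means we just took the lock. */
void spin_lock(struct spinlock_model *l)
{
    while (atomic_flag_test_and_set(&l->locked)) {
        /* busy-wait: another CPU holds the lock */
    }
}

/* Non-blocking attempt. This makes the non-recursive rule visible:
 * a second acquire by the current holder would spin forever. */
bool spin_trylock(struct spinlock_model *l)
{
    return !atomic_flag_test_and_set(&l->locked);
}

void spin_unlock(struct spinlock_model *l)
{
    atomic_flag_clear(&l->locked);
}
```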

When used on a uniprocessor system, the data component of the spinlock (the atomic lock variable) is unnecessary and elided. Except for the recursive semantics above, spinlocks in single-CPU contexts produce identical code to legacy IRQ locks. In fact the entirety of the Zephyr core kernel has now been ported to use spinlocks exclusively.

### Legacy irq_lock() emulation 遗留 irq_lock ()模拟¶

For the benefit of applications written to the uniprocessor locking API, irq_lock() and irq_unlock() continue to work compatibly on SMP systems with identical semantics to their legacy versions. They are implemented as a single global spinlock, with a nesting count and the ability to be atomically reacquired on context switch into locked threads. The kernel will ensure that only one thread across all CPUs can hold the lock at any time, that it is released on context switch, and that it is re-acquired when necessary to restore the lock state when a thread is switched in. Other CPUs will spin waiting for the release to happen.

The overhead involved in this process has measurable performance impact, however. Unlike uniprocessor apps, SMP apps using irq_lock() are not simply invoking a very short (often ~1 instruction) interrupt masking operation. That, and the fact that the IRQ lock is global, means that code expecting to be run in an SMP context should be using the spinlock API wherever possible.

### CPU Mask CPU 掩码¶

It is often desirable for real time applications to deliberately partition work across physical CPUs instead of relying solely on the kernel scheduler to decide on which threads to execute. Zephyr provides an API, controlled by the CONFIG_SCHED_CPU_MASK kconfig variable, which can associate a specific set of CPUs with each thread, indicating on which CPUs it can run.

By default, new threads can run on any CPU. Calling k_thread_cpu_mask_disable() with a particular CPU ID will prevent that thread from running on that CPU in the future. Likewise k_thread_cpu_mask_enable() will re-enable execution. There are also k_thread_cpu_mask_clear() and k_thread_cpu_mask_enable_all() APIs available for convenience. For obvious reasons, these APIs are illegal if called on a runnable thread. The thread must be blocked or suspended, otherwise an -EINVAL will be returned.
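The mask manipulation behind these calls amounts to plain bit arithmetic. The following host-testable model uses invented names and is not the kernel's internal representation:

```c
#include <stdbool.h>
#include <stdint.h>

/* One bit per physical CPU; a set bit means the thread may run there. */
uint32_t cpu_mask_enable(uint32_t mask, unsigned int cpu)
{
    return mask | (UINT32_C(1) << cpu);
}

uint32_t cpu_mask_disable(uint32_t mask, unsigned int cpu)
{
    return mask & ~(UINT32_C(1) << cpu);
}

/* The scheduler's per-CPU test: may this thread run on this CPU? */
bool cpu_mask_allows(uint32_t mask, unsigned int cpu)
{
    return ((mask >> cpu) & 1U) != 0;
}
```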

Note that when this feature is enabled, the scheduler algorithm involved in doing the per-CPU mask test requires that the list be traversed in full. The kernel does not keep a per-CPU run queue. That means that the performance benefits from the CONFIG_SCHED_SCALABLE and CONFIG_SCHED_MULTIQ scheduler backends cannot be realized. CPU mask processing is available only when CONFIG_SCHED_DUMB is the selected backend. This requirement is enforced in the configuration layer.

## SMP Boot Process SMP 启动过程¶

A Zephyr SMP kernel begins boot identically to a uniprocessor kernel. Auxiliary CPUs begin in a disabled state in the architecture layer. All standard kernel initialization, including device initialization, happens on a single CPU before other CPUs are brought online.

Zephyr SMP 内核的启动方式与单处理器内核完全相同。辅助 CPU 在体系结构层中以禁用状态开始。所有标准的内核初始化（包括设备初始化）都在单个 CPU 上完成，然后其他 CPU 才会上线。

Just before entering the application main() function, the kernel calls z_smp_init() to launch the SMP initialization process. This enumerates over the configured CPUs, calling into the architecture layer using arch_start_cpu() for each one. This function is passed a memory region to use as a stack on the foreign CPU (in practice it uses the area that will become that CPU’s interrupt stack), the address of a local smp_init_top() callback function to run on that CPU, and a pointer to a “start flag” address which will be used as an atomic signal.

The local SMP initialization (smp_init_top()) on each CPU is then invoked by the architecture layer. Note that interrupts are still masked at this point. This routine is responsible for calling smp_timer_init() to set up any needed state in the timer driver. On many architectures the timer is a per-CPU device and needs to be configured specially on auxiliary CPUs. Then it waits (spinning) for the atomic “start flag” to be released in the main thread, to guarantee that all SMP initialization is complete before any Zephyr application code runs, and finally calls z_swap() to transfer control to the appropriate runnable thread via the standard scheduler API.

Fig. 4 Example SMP initialization process, showing a configuration with two CPUs and two app threads which begin operating simultaneously.

## Interprocessor Interrupts 处理器间中断¶

When running in multiprocessor environments, it is occasionally the case that state modified on the local CPU needs to be synchronously handled on a different processor.

One example is the Zephyr k_thread_abort() API, which cannot return until the thread that had been aborted is no longer runnable. If it is currently running on another CPU, that becomes difficult to implement.

Another is low power idle. It is a firm requirement on many devices that system idle be implemented using a low-power mode with as many interrupts (including periodic timer interrupts) disabled or deferred as is possible. If a CPU is in such a state, and on another CPU a thread becomes runnable, the idle CPU has no way to “wake up” to handle the newly-runnable load.

So where possible, Zephyr SMP architectures should implement an interprocessor interrupt. The current framework is very simple: the architecture provides an arch_sched_ipi() call, which when invoked will flag an interrupt on all CPUs (except the current one, though that is allowed behavior) which will then invoke the z_sched_ipi() function implemented in the scheduler. The expectation is that these APIs will evolve over time to encompass more functionality (e.g. cross-CPU calls), and that the scheduler-specific calls here will be implemented in terms of a more general framework.

Note that not all SMP architectures will have a usable IPI mechanism (either missing, or just undocumented/unimplemented). In those cases Zephyr provides fallback behavior that is correct, but perhaps suboptimal.

Using this, k_thread_abort() becomes only slightly more complicated in SMP: for the case where a thread is actually running on another CPU (we can detect this atomically inside the scheduler), we broadcast an IPI and spin, waiting for the thread to either become “DEAD” or for it to re-enter the queue (in which case we terminate it the same way we would have in uniprocessor mode). Note that the “aborted” check happens on any interrupt exit, so there is no special handling needed in the IPI per se. This allows us to implement a reasonable fallback when IPI is not available: we can simply spin, waiting until the foreign CPU receives any interrupt, though this may be a much longer time!

Likewise idle wakeups are trivially implementable with an empty IPI handler. If a thread is added to an empty run queue (i.e. there may have been idle CPUs), we broadcast an IPI. A foreign CPU will then be able to see the new thread when exiting from the interrupt and will switch to it if available.

Without an IPI, however, a low power idle that requires an interrupt will not work to synchronously run new threads. The workaround in that case is more invasive: Zephyr will not enter the system idle handler and will instead spin in its idle loop, testing the scheduler state at high frequency (not spinning on it though, as that would involve severe lock contention) for new threads. The expectation is that power constrained SMP applications are always going to provide an IPI, and this code will only be used for testing purposes or on systems without power consumption requirements.

## SMP Kernel Internals SMP 内核内部构件¶

In general, Zephyr kernel code is SMP-agnostic and, like application code, will work correctly regardless of the number of CPUs available. But in a few areas there are notable changes in structure or behavior.

### Per-CPU data 每个 cpu 的数据¶

Many elements of the core kernel data need to be implemented for each CPU in SMP mode. For example, the _current thread pointer obviously needs to reflect what is running locally, since there are now multiple threads running concurrently. Likewise a kernel-provided interrupt stack needs to be created and assigned for each physical CPU, as does the interrupt nesting count used to detect ISR state.

These fields are now moved into a separate struct _cpu instance within the _kernel struct, which has a cpus[] array indexed by ID. Compatibility fields are provided for legacy uniprocessor code trying to access the fields of cpus[0] using the older syntax and assembly offsets.

Note that an important requirement on the architecture layer is that the pointer to this CPU struct be available rapidly when in kernel context. The expectation is that arch_curr_cpu() will be implemented using a CPU-provided register or addressing mode that can store this value across arbitrary context switches or interrupts and make it available to any kernel-mode code.

Similarly, where on a uniprocessor system Zephyr could simply create a global “idle thread” at the lowest priority, in SMP we may need one for each CPU. This makes the internal predicate test for “_is_idle()” in the scheduler, which is a hot path performance environment, more complicated than simply testing the thread pointer for equality with a known static variable. In SMP mode, idle threads are distinguished by a separate field in the thread struct.

### Switch-based context switching 基于切换的上下文切换¶

The traditional Zephyr context switch primitive has been z_swap(). Unfortunately, this function takes no argument specifying a thread to switch to. The expectation has always been that the scheduler has already made its preemption decision when its state was last modified and cached the resulting “next thread” pointer in a location where architecture context switch primitives can find it via a simple struct offset. That technique will not work in SMP, because the other CPU may have modified scheduler state since the current CPU last exited the scheduler (for example: it might already be running that cached thread!).

Instead, the SMP “switch to” decision needs to be made synchronously with the swap call, and as we don’t want per-architecture assembly code to be handling scheduler internal state, Zephyr requires a somewhat lower-level context switch primitives for SMP systems: arch_switch() is always called with interrupts masked, and takes exactly two arguments. The first is an opaque (architecture defined) handle to the context to which it should switch, and the second is a pointer to such a handle into which it should store the handle resulting from the thread that is being switched out. The kernel then implements a portable z_swap() implementation on top of this primitive which includes the relevant scheduler logic in a location where the architecture doesn’t need to understand it.
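The two-argument contract of arch_switch() can be modeled minimally in host-testable C. This is an illustrative sketch only: real handles are architecture-defined, and the actual primitive saves and restores register state rather than swapping a pointer:

```c
#include <stddef.h>

/* Illustrative stand-in for an architecture-defined context handle. */
struct context { int id; };

static struct context *current;   /* the context currently "running" */

/* Model of the arch_switch() contract: switch to the given handle,
 * storing the outgoing context's handle through the second argument
 * so the kernel can later resume the switched-out thread. */
void model_arch_switch(struct context *switch_to,
                       struct context **switched_from)
{
    *switched_from = current;  /* hand the outgoing handle back */
    current = switch_to;       /* "resume" the incoming context */
}
```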

Similarly, on interrupt exit, switch-based architectures are expected to call z_get_next_switch_handle() to retrieve the next thread to run from the scheduler. The argument to z_get_next_switch_handle() is either the interrupted thread’s “handle” reflecting the same opaque type used by arch_switch(), or NULL if that thread cannot be released to the scheduler just yet. The choice between a handle value or NULL depends on the way CPU interrupt mode is implemented.

Architectures with a large CPU register file would typically preserve only the caller-saved registers on the current thread’s stack when interrupted in order to minimize interrupt latency, and preserve the callee-saved registers only when arch_switch() is called to minimize context switching latency. Such architectures must use NULL as the argument to z_get_next_switch_handle() to determine if there is a new thread to schedule, and follow through with their own arch_switch() or derivative if so, or directly leave interrupt mode otherwise. In the former case it is up to that switch code to store the handle resulting from the thread that is being switched out in that thread’s “switch_handle” field after its context has fully been saved.

Architectures whose entry in interrupt mode already preserves the entire thread state may pass that thread’s handle directly to z_get_next_switch_handle() and be done in one step.

Note that while SMP requires CONFIG_USE_SWITCH, the reverse is not true. A uniprocessor architecture built with CONFIG_SMP set to No might still decide to implement its context switching using arch_switch().

## Data Passing 数据传递¶

These pages cover kernel objects which can be used to pass data between threads and ISRs.

The following table summarizes their high-level features.

| Object 对象 | Bidirectional? | Data structure | Data item size | Data alignment | ISRs can receive? | ISRs can send? | Overrun handling |
|---|---|---|---|---|---|---|---|
| FIFO | No | Queue | Arbitrary [1] | 4 B [2] | Yes [3] | Yes | N/A |
| LIFO | No | Queue | Arbitrary [1] | 4 B [2] | Yes [3] | Yes | N/A |
| Stack | No | Array | Word | Word | Yes [3] | Yes | Undefined behavior |
| Message queue | No | Ring buffer | Power of two | Power of two | Yes [3] | Yes | Pend thread or return -errno |
| Mailbox | Yes | Queue | Arbitrary [1] | Arbitrary | No | No | N/A |
| Pipe | No | Ring buffer [4] | Arbitrary | Arbitrary | No | No | Pend thread or return -errno |

[1] Callers allocate space for queue overhead in the data elements themselves.

[1] 调用方为数据元素本身的队列开销分配空间。

[2] Objects added with k_fifo_alloc_put() and k_lifo_alloc_put() do not have alignment constraints, but use temporary memory from the calling thread’s resource pool.

[2] 使用 k_fifo_alloc_put() 和 k_lifo_alloc_put() 添加的对象没有对齐约束，但会使用调用线程资源池中的临时内存。

[3] ISRs can receive only when passing K_NO_WAIT as the timeout argument.

[3] 只有在将 K_NO_WAIT 作为超时参数传递时，ISR 才能接收。

[4] Optional.

[4] 可选。