Zephyr's Build and Configuration System

Build System (CMake)

CMake is used to build your application together with the Zephyr kernel. A CMake build is done in two stages. The first stage is called configuration. During configuration, the CMakeLists.txt build scripts are executed. After configuration is finished, CMake has an internal model of the Zephyr build, and can generate build scripts that are native to the host platform.

CMake supports generating scripts for several build systems, but only Ninja and Make are tested and supported by Zephyr. After configuration, you begin the build stage by executing the generated build scripts. These build scripts can recompile the application without involving CMake following most code changes. However, after certain changes, the configuration step must be executed again before building. The build scripts can detect some of these situations and reconfigure automatically, but there are cases when this must be done manually.

Zephyr uses CMake’s concept of a ‘target’ to organize the build. A target can be an executable, a library, or a generated file. For application developers, the library target is the most important to understand. All source code that goes into a Zephyr build does so by being included in a library target, even application code.

Library targets have source code that is added through CMakeLists.txt build scripts like this:

target_sources(app PRIVATE src/main.c)

In the above CMakeLists.txt, an existing library target named app is configured to include the source file src/main.c. The PRIVATE keyword indicates that we are modifying the internals of how the library is being built. Using the keyword PUBLIC would modify how other libraries that link with app are built. In this case, using PUBLIC would cause libraries that link with app to also include the source file src/main.c, behavior that we surely do not want. The PUBLIC keyword could however be useful when modifying the include paths of a target library.

Build and Configuration Phases

The Zephyr build process can be divided into two main phases: a configuration phase (driven by CMake) and a build phase (driven by Make or Ninja).

Configuration Phase

The configuration phase begins when the user invokes CMake to generate a build system, specifying a source application directory and a board target.

[Figure: Zephyr's build configuration phase]

CMake begins by processing the CMakeLists.txt file in the application directory, which refers to the CMakeLists.txt file in the Zephyr top-level directory, which in turn refers to CMakeLists.txt files throughout the build tree (directly and indirectly). Its primary output is a set of Makefiles or Ninja files to drive the build process, but the CMake scripts also do some processing of their own, which is explained here.

Note that paths beginning with build/ below refer to the build directory you create when running CMake.

  • Devicetree

    *.dts (devicetree source) and *.dtsi (devicetree source include) files are collected from the target’s architecture, SoC, board, and application directories.

    *.dtsi files are included by *.dts files via the C preprocessor (often abbreviated cpp, which should not be confused with C++). The C preprocessor is also used to merge in any devicetree *.overlay files, and to expand macros in *.dts, *.dtsi, and *.overlay files. The preprocessor output is placed in build/zephyr/zephyr.dts.pre

    The preprocessed devicetree sources are parsed by gen_defines.py to generate a build/zephyr/include/generated/devicetree_unfixed.h header with preprocessor macros.

    Source code should access preprocessor macros generated from devicetree by including the devicetree.h header, which includes devicetree_unfixed.h (a brief usage sketch follows this list).

    gen_defines.py also writes the final devicetree to build/zephyr/zephyr.dts in the build directory. This file’s contents may be useful for debugging.

    If the devicetree compiler dtc is installed, it is run on build/zephyr/zephyr.dts to catch any extra warnings and errors generated by this tool. The output from dtc is unused otherwise, and this step is skipped if dtc is not installed.

    The above is just a brief overview. For more information on devicetree, see Devicetree.

  • Devicetree fixups

    Files named dts_fixup.h from the target’s architecture, SoC, board, and application directories are concatenated into a single devicetree_fixups.h file. dts_fixup.h files are a legacy feature which should not be used in new code.

  • Kconfig

    Kconfig files define available configuration options for the target architecture, SoC, board, and application, as well as dependencies between options.

    Kconfig configurations are stored in configuration files. The initial configuration is generated by merging configuration fragments from the board and application (e.g. prj.conf).

    The output from Kconfig is an autoconf.h header with preprocessor assignments, and a .config file that acts both as a saved configuration and as configuration output (used by CMake). The definitions in autoconf.h are automatically exposed at compile time, so there is no need to include this header.

    Information from devicetree is available to Kconfig, through the functions defined in kconfigfunctions.py.

    See the Kconfig section of the manual for more information.

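For illustration, the sketch below shows how C source typically consumes these two outputs: devicetree data through the devicetree.h macros, and Kconfig settings through CONFIG_* symbols (exposed automatically, per the note above). The uart0 node label and the CONFIG_MY_FEATURE option are assumptions made for this example, not something every board or application provides.

#include <devicetree.h>

/* Devicetree access: look up the node labelled "uart0" (an assumed label)
 * and read its register address and current-speed property at build time. */
#define MY_UART_NODE   DT_NODELABEL(uart0)
#define MY_UART_BASE   DT_REG_ADDR(MY_UART_NODE)
#define MY_UART_SPEED  DT_PROP(MY_UART_NODE, current_speed)

/* Kconfig access: CONFIG_* symbols come from the generated autoconf.h and
 * need no extra include; CONFIG_MY_FEATURE is a hypothetical option. */
#ifdef CONFIG_MY_FEATURE
static const unsigned long console_base  = MY_UART_BASE;
static const int           console_speed = MY_UART_SPEED;
#endif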

Build Phase

The build phase begins when the user invokes make or ninja. Its ultimate output is a complete Zephyr application in a format suitable for loading/flashing on the desired target board (zephyr.elf, zephyr.hex, etc.). The build phase can be broken down, conceptually, into four stages: the pre-build, first-pass binary, final binary, and post-processing.

Pre-build

Pre-build occurs before any source files are compiled, because during this phase header files used by the source files are generated.

  • Offset generation

    Access to high-level data structures and members is sometimes required when the definitions of those structures are not immediately accessible (e.g., assembly language). The generation of offsets.h (by gen_offset_header.py) facilitates this.

  • System call boilerplate

    The gen_syscalls.py and parse_syscalls.py scripts work together to bind potential system call functions with their implementations (a minimal sketch follows this list).

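As a rough, hypothetical sketch of the two halves these scripts bind together (names invented for the example; the user-mode verification handler is omitted): a system call is declared in a header with the __syscall marker, and kernel-side code provides the implementation under the z_impl_ prefix. The generated code supplies the invocation path between them.

/* my_api.h -- a hypothetical header scanned by parse_syscalls.py */
#include <device.h>

__syscall int my_api_read(const struct device *dev, int value);

#include <syscalls/my_api.h>   /* generated inline invocation functions */

/* my_api.c -- the implementation carries the z_impl_ prefix; the generated
 * boilerplate routes calls (including user-mode calls) to this function. */
int z_impl_my_api_read(const struct device *dev, int value)
{
    ARG_UNUSED(dev);
    return value;
}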

[Figure: Zephyr's build stage I]

Intermediate binaries

Compilation proper begins with the first intermediate binary. Source files (C and assembly) are collected from various subsystems (which ones are included is decided during the configuration phase), and compiled into archives (with reference to header files in the tree, as well as those generated during the configuration phase and the pre-build stage(s)).

[Figure: Zephyr's build stage II]

The exact number of intermediate binaries is decided during the configuration phase.

If memory protection is enabled, then:

  • Partition grouping

    The gen_app_partitions.py script scans all the generated archives and outputs linker scripts to ensure that application partitions are properly grouped and aligned for the target’s memory protection hardware.

Then cpp is used to combine linker script fragments from the target’s architecture/SoC, the kernel tree, optionally the partition output if memory protection is enabled, and any other fragments selected during the configuration process, into a linker.cmd file. The compiled archives are then linked with ld as specified in the linker.cmd.

  • Unfixed size binary

    The unfixed size intermediate binary is produced when User Mode is enabled or Devicetree is in use. It produces a binary where sizes are not fixed and thus it may be used by post-process steps that will impact the size of the final binary.

[Figure: Zephyr's build stage III]

  • Fixed size binary

    The fixed size intermediate binary is produced when User Mode is enabled or when generated IRQ tables (CONFIG_GEN_ISR_TABLES) are used. It produces a binary where sizes are fixed, and thus the size must not change between the intermediate binary and the final binary.

[Figure: Zephyr's build stage IV]

Intermediate binaries post-processing

The binaries from the previous stage are incomplete, with empty and/or placeholder sections that must be filled in by, essentially, reflection.

To complete the build procedure the following scripts are executed on the intermediate binaries to produce the missing pieces needed for the final binary.

When User Mode is enabled:

  • Partition alignment

    The gen_app_partitions.py script scans the unfixed size binary and generates an app shared memory aligned linker script snippet where the partitions are sorted in descending order.

[Figure: Zephyr's intermediate binary post-process I]

When Devicetree is used:

  • Device dependencies

    The gen_handles.py script scans the unfixed size binary to determine relationships between devices that were recorded from devicetree data, and replaces the encoded relationships with values that are optimized to locate the devices actually present in the application.

[Figure: Zephyr's intermediate binary post-process II]

  • When CONFIG_GEN_ISR_TABLES is enabled:

    The gen_isr_tables.py script scans the fixed size binary and creates an isr_tables.c source file with a hardware vector table and/or software IRQ table.

[Figure: Zephyr's intermediate binary post-process III]

When User Mode is enabled:

  • Kernel object hashing

    The gen_kobject_list.py script scans the ELF DWARF debug data to find the addresses of all kernel objects. This list is passed to gperf, which generates a perfect hash function and table of those addresses; that output is then optimized by process_gperf.py, using known properties of our special case.

[Figure: Zephyr's intermediate binary post-process IV]

When no intermediate binary post-processing is required then the first intermediate binary will be directly used as the final binary.

Final binary

The binary from the previous stage is incomplete, with empty and/or placeholder sections that must be filled in by, essentially, reflection.

The link from the previous stage is repeated, this time with the missing pieces populated.

[Figure: Zephyr's build final stage]

Post processing

Finally, if necessary, the completed kernel is converted from ELF to the format expected by the loader and/or flash tool required by the target. This is accomplished in a straightforward manner with objcopy.

[Figure: Zephyr's build final stage post-process]

Supporting Scripts and Tools

The following is a detailed description of the scripts used during the build process.

scripts/gen_syscalls.py

Script to generate system call invocation macros

This script parses the system call metadata JSON file emitted by parse_syscalls.py to create several files:

  • A file containing weak aliases of any potentially unimplemented system calls, as well as the system call dispatch table, which maps system call type IDs to their handler functions.

  • A header file defining the system call type IDs, as well as function prototypes for all system call handler functions.

  • A directory containing header files. Each header corresponds to a header that was identified as containing system call declarations. These generated headers contain the inline invocation functions for each system call in that header.

scripts/gen_handles.py

Translate generic handles into ones optimized for the application.

Immutable device data includes information about dependencies, e.g. that a particular sensor is controlled through a specific I2C bus and that it signals event on a pin on a specific GPIO controller. This information is encoded in the first-pass binary using identifiers derived from the devicetree. This script extracts those identifiers and replaces them with ones optimized for use with the devices actually present.

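For context, these dependencies ultimately describe devices that code looks up from the devicetree at run time, roughly as in the sketch below; the i2c0 node label is an assumption about the board, and DEVICE_DT_GET/device_is_ready are the usual lookup helpers in recent Zephyr releases.

/* Sketch: obtaining a devicetree-declared device (the sensor's bus). */
#include <device.h>
#include <devicetree.h>

static const struct device *sensor_bus;

void bind_sensor_bus(void)
{
    sensor_bus = DEVICE_DT_GET(DT_NODELABEL(i2c0));

    if (!device_is_ready(sensor_bus)) {
        /* The dependency recorded from the devicetree is not usable. */
        sensor_bus = NULL;
    }
}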

For example the sensor might have a first-pass handle defined by its devicetree ordinal 52, with the I2C driver having ordinal 24 and the GPIO controller ordinal 14. The runtime ordinal is the index of the corresponding device in the static devicetree array, which might be 6, 5, and 3, respectively.

The output is a C source file that provides alternative definitions for the array contents referenced from the immutable device objects. In the final link these definitions supersede the ones in the driver-specific object file.

scripts/gen_kobject_list.py

Script to generate gperf tables of kernel object metadata

User mode threads making system calls reference kernel objects by memory address, as the kernel/driver APIs in Zephyr are the same for both user and supervisor contexts. It is necessary for the kernel to be able to validate accesses to kernel objects to make the following assertions:

  • That the memory address points to a kernel object

  • The kernel object is of the expected type for the API being invoked

  • The kernel object is of the expected initialization state

  • The calling thread has sufficient permissions on the object

For more details see the Kernel Objects section in the documentation.

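A minimal sketch of the kind of object the generated table tracks: a statically defined semaphore whose address user-mode code later passes to a system call, at which point the kernel validates that address against the gperf-generated table (the thread is assumed to have been granted access to the object).

#include <zephyr.h>

/* A statically defined kernel object; gen_kobject_list.py finds it in the
 * DWARF data and records its address, type, and initialization state. */
K_SEM_DEFINE(app_sem, 0, 1);

/* Entry point of a user-mode thread. The address &app_sem is what the
 * kernel validates before performing the operation. */
void user_thread_entry(void *p1, void *p2, void *p3)
{
    k_sem_take(&app_sem, K_FOREVER);
}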

The Zephyr build generates an intermediate ELF binary, zephyr_prebuilt.elf, which this script scans for kernel objects by examining the DWARF debug information for instances of data structures that are considered kernel objects. For device drivers, the API struct pointer populated at build time is also examined to disambiguate between various device driver instances, since they are all ‘struct device’.

This script can generate five different output files:

  • A gperf script to generate the hash table mapping kernel object memory addresses to kernel object metadata, used to track permissions, object type, initialization state, and any object-specific data.

  • A header file containing generated macros for validating driver instances inside the system call handlers for the driver subsystem APIs.

  • A code fragment included by kernel.h with one enum constant for each kernel object type and each driver instance.

  • The inner cases of a switch/case C statement, included by kernel/userspace.c, mapping the kernel object types and driver instances to their human-readable representation in the otype_to_str() function.

  • The inner cases of a switch/case C statement, included by kernel/userspace.c, mapping kernel object types to their sizes. This is used for allocating instances of them at runtime (CONFIG_DYNAMIC_OBJECTS) in the obj_size_get() function.

scripts/gen_offset_header.py

This script scans a specified object file and generates a header file that defines macros for the offsets of various found structure members (particularly symbols ending with _OFFSET or _SIZEOF), primarily intended for use in assembly code.

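The macro name below is a hypothetical stand-in for what such a generated header provides; the sketch only shows why an offset macro is useful when the structure definition itself is not visible (the real consumers are mostly assembly files).

/* A generated offsets.h contains definitions of this shape: */
#define _thread_offset_to_prio 8   /* hypothetical member offset */

/* Code that cannot see the structure definition can still reach the member
 * by adding the generated offset to a base pointer, exactly as an assembly
 * routine would. */
static inline int thread_prio_from_base(const char *thread_base)
{
    return *(const int *)(thread_base + _thread_offset_to_prio);
}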

scripts/parse_syscalls.py

Script to scan Zephyr include directories and emit system call and subsystem metadata

System calls require a great deal of boilerplate code in order to implement completely. This script is the first step in the build system’s process of auto-generating this code by doing a text scan of directories containing C or header files, and building up a database of system calls and their function call prototypes. This information is emitted to a generated JSON file for further processing.

This script also scans for struct definitions tagged with __subsystem or __net_socket, emitting a JSON dictionary mapping those tags to all the struct declarations found to be tagged with them.

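For example, a struct definition picked up by this scan carries the tag directly on the declaration; the API struct below is hypothetical.

#include <device.h>   /* assumed to make the __subsystem tag available */

/* The __subsystem tag marks this as a driver subsystem API struct; the scan
 * records it in the JSON output under that tag. */
__subsystem struct my_sensor_driver_api {
    int (*sample_fetch)(const struct device *dev);
};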

If the output JSON file already exists, its contents are checked against what information this script would have outputted; if the result is that the file would be unchanged, it is not modified to prevent unnecessary incremental builds.

arch/x86/gen_idt.py

Generate Interrupt Descriptor Table for x86 CPUs.

This script generates the interrupt descriptor table (IDT) for x86. Please consult the IA Architecture SW Developer Manual, volume 3, for more details on this data structure.

This script accepts as input the zephyr_prebuilt.elf binary, which is a link of the Zephyr kernel without various build-time generated data structures (such as the IDT) inserted into it. This kernel image has been properly padded such that inserting these data structures will not disturb the memory addresses of other symbols. From the kernel binary we read a special section “intList” which contains the desired interrupt routing configuration for the kernel, populated by instances of the IRQ_CONNECT() macro.

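Those IRQ_CONNECT() instances come from driver or application code along these lines; the IRQ line, priority, and handler below are hypothetical.

#include <zephyr.h>
#include <irq.h>

#define MY_DEV_IRQ   5    /* hypothetical IRQ line */
#define MY_DEV_PRIO  1    /* hypothetical priority */

static void my_dev_isr(const void *arg)
{
    ARG_UNUSED(arg);
}

void my_dev_irq_init(void)
{
    /* Recorded in the "intList" section at build time; gen_idt.py turns the
     * collected entries into the IDT and the IRQ-to-vector map. */
    IRQ_CONNECT(MY_DEV_IRQ, MY_DEV_PRIO, my_dev_isr, NULL, 0);
    irq_enable(MY_DEV_IRQ);
}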

This script outputs three binary tables:

  1. The interrupt descriptor table itself.

  2. A bitfield indicating which vectors in the IDT are free for installation of dynamic interrupts at runtime.

  3. An array which maps configured IRQ lines to their associated vector entries in the IDT, used to program the APIC at runtime.

arch/x86/gen_gdt.py

Generate a Global Descriptor Table (GDT) for x86 CPUs.

For additional detail on GDT and x86 memory management, please consult the IA Architecture SW Developer Manual, vol. 3.

This script accepts as input the zephyr_prebuilt.elf binary, which is a link of the Zephyr kernel without various build-time generated data structures (such as the GDT) inserted into it. This kernel image has been properly padded such that inserting these data structures will not disturb the memory addresses of other symbols.

The input kernel ELF binary is used to obtain the following information:

  • Memory addresses of the Main and Double Fault TSS structures so GDT descriptors can be created for them

  • Memory addresses of where the GDT lives in memory, so that this address can be populated in the GDT pseudo descriptor

  • Whether userspace or HW stack protection are enabled in Kconfig

The output is a GDT whose contents depend on the kernel configuration. With no memory protection features enabled, we generate flat 32-bit code and data segments. If hardware-based stack overflow protection or userspace is enabled, we additionally create descriptors for the main and double-fault IA tasks, needed for userspace privilege elevation and double-fault handling. If userspace is enabled, we also create flat code/data segments for ring 3 execution.

scripts/gen_relocate_app.py

This script will relocate the .text, .rodata, .data and .bss sections from the required files and place them in the required memory region. This memory region and file are given to this python script in the form of a string.

Example of such a string would be:

SRAM2:COPY:/home/xyz/zephyr/samples/hello_world/src/main.c,\
SRAM1:COPY:/home/xyz/zephyr/samples/hello_world/src/main2.c, \
FLASH2:NOCOPY:/home/xyz/zephyr/samples/hello_world/src/main3.c

To invoke this script:

python3 gen_relocate_app.py -i input_string -o generated_linker -c generated_code

Configuration that needs to be sent to the python script:

  • If the memory is like SRAM1/SRAM2/CCD/AON then place full object in the sections

  • If the memory type is appended with _DATA/_TEXT/_RODATA/_BSS, only the selected section type is placed in the required memory region. Others are ignored.

  • COPY/NOCOPY defines whether the script should generate the relocation code in code_relocation.c or not

Multiple regions can be appended together, like SRAM2_DATA_BSS; this will place data and bss inside SRAM2.

scripts/process_gperf.py

gperf C file post-processor

We use gperf to build up a perfect hashtable of pointer values. The way gperf does this is to create a table ‘wordlist’ indexed by a string representation of a pointer address, and then do memcmp() on a string passed in for comparison.

We are exclusively working with 4-byte pointer values. This script adjusts the generated code so that we work with pointers directly and not strings. This saves a considerable amount of space.

scripts/gen_app_partitions.py

Script to generate a linker script organizing application memory partitions

Applications may declare build-time memory domain partitions with K_APPMEM_PARTITION_DEFINE, and assign globals to them using K_APP_DMEM or K_APP_BMEM macros. For each of these partitions, we need to route all their data into appropriately-sized memory areas which meet the size/alignment constraints of the memory protection hardware.

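In application code those partitions and assignments look roughly like this (the partition and variable names are invented for the example):

#include <app_memory/app_memdomain.h>

/* Declare a build-time memory partition... */
K_APPMEM_PARTITION_DEFINE(sensor_partition);

/* ...and route globals into it: K_APP_DMEM for initialized data,
 * K_APP_BMEM for zero-initialized (bss) data. gen_app_partitions.py
 * gathers the resulting input sections into an aligned output region. */
K_APP_DMEM(sensor_partition) int sample_rate_hz = 50;
K_APP_BMEM(sensor_partition) unsigned char sample_buf[512];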

This linker script is created very early in the build process, before the build attempts to link the kernel binary, as the linker script this tool generates is a necessary pre-condition for kernel linking. We extract the set of memory partitions to generate by looking for variables which have been assigned to input sections that follow a defined naming convention. We also allow entire libraries to be pulled in to assign their globals to a particular memory partition via command line directives.

This script takes as inputs:

  • The base directory to look for compiled objects

  • Key/value pairs mapping static library files to what partitions their globals should end up in.

The output is a linker script fragment containing the definition of the app shared memory section, which is further divided, for each partition found, into data and BSS.
