Configuring Hardware: A Proper Start
Keith Gangarahwe
@keith-gang
So, it’s been almost two months since the last post! Let me do a quick recap of what I’ve found out and how I’ll be approaching the project going forward.
The Misstep
I underestimated the complexity of the project structure—mostly because I didn’t have the physical hardware at the time. I thought I’d start by making a Dynamic Binary Translation Just-In-Time (JIT) interpreter. While we could load a real ARM7TDMI binary and see simple commands being executed, it just wasn’t practical.
Why? Because running it on a proper system with loads of RAM gives a false sense of performance. It wasn’t being built with the target hardware constraints in mind, so I had to stop.
The Way Forward
Now, instead, I’m going to try and get Zig running on the ESP32-P4. Yeah, just that. With the seamless C-interop Zig is famous for, how hard could it be? (Famous last words!)
The Environment Requirements
Let’s break down everything we need to configure so we can get this running:
- A Linux instance: any distro will do (WSL included); native Windows works too, but this post assumes Linux. (I use openSUSE Tumbleweed BTW 🦎🦎).
- The ESP-IDF tool: This post assumes you’re using the CLI version. Follow the official guide here.
- Zig installed: But there’s a massive catch. I recommend the Espressif fork of Zig for proper target support.
- Patience: A lot of it.
For the ESP-IDF tools, make sure you have the prerequisites installed first, or you’ll encounter a world of pain. Install tools for all boards, or specifically for the ESP32-P4.
For the Zig fork, I used the latest version available at the time: 0.16.0-xtensa-dev.3189+b1880ae28.
To make life easier, add a shell alias that activates the IDF environment and puts the Zig fork first on your PATH. For example: `alias esp_init='source "$HOME/.espressif/tools/activate_idf_v6.0.sh" && export PATH="$HOME/esp-zig:$PATH"'` (where esp-zig is the folder containing the Zig fork). The environment is then set up whenever I run `esp_init`.
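The mechanic the alias relies on is simple: prepending a directory to PATH makes its binaries shadow any system-wide ones. A minimal sketch, assuming the hypothetical `$HOME/esp-zig` location from above:

```shell
# Prepend the Zig fork's directory so its binaries shadow any system-wide zig
ZIG_DIR="$HOME/esp-zig"   # adjust to wherever you unpacked the fork
export PATH="$ZIG_DIR:$PATH"

# The shell resolves commands from the first matching PATH entry,
# so the fork's zig now wins over any system install
echo "$PATH" | cut -d: -f1
```

If you ever need the system Zig back, just open a fresh shell without running the alias.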
Setting Up The Project Directory
After activating the environment (running esp_init), here is our folder structure:
```
.
├── build/              --> CMake build directory
├── main/               --> Project source code folder
│   ├── app.zig         --> The Zig code that will run on the P4
│   ├── CMakeLists.txt  --> The CMake file that coordinates the build with IDF
│   └── paths.zig       --> A generated file telling Zig where the ESP C headers are
├── zig-out/            --> Zig build output folder (auto-generated)
├── build.zig           --> The Zig build file
├── build.zig.zon
├── CMakeLists.txt      --> Root CMake file
├── generate_paths.py   --> Python script to generate paths.zig (a lifesaver!)
└── sdkconfig           --> Generated when we set the target via `idf.py set-target esp32p4`
```

The full template is ready for use and can be found here.
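If you'd rather scaffold this layout by hand than clone the template, only the hand-written files need creating (the directory name `zig-esp-demo` is just a placeholder):

```shell
# Create the hand-written part of the tree; build/, zig-out/, sdkconfig
# and main/paths.zig are all generated later by idf.py and zig build.
mkdir -p zig-esp-demo/main
touch zig-esp-demo/CMakeLists.txt \
      zig-esp-demo/build.zig \
      zig-esp-demo/build.zig.zon \
      zig-esp-demo/generate_paths.py
touch zig-esp-demo/main/app.zig zig-esp-demo/main/CMakeLists.txt
```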
Now, let’s look at the code and understand what each file does.
CMakeLists.txt
This is the root CMake file.
```cmake
cmake_minimum_required(VERSION 3.16)
include($ENV{IDF_PATH}/tools/cmake/project.cmake)
project(zig_esp32c3)
```

main/CMakeLists.txt
This is where the actual configuration happens. It handles everything from calling the generate_paths.py script to running zig build:
```cmake
# 1. Register the component
idf_component_register(SRCS "" INCLUDE_DIRS ".")

if(NOT IDF_BUILD_PREPARING)
    set(ZIG_LIB "${CMAKE_CURRENT_SOURCE_DIR}/../zig-out/lib/libzig_app.a")
    set(PATHS_SCRIPT "${CMAKE_CURRENT_SOURCE_DIR}/../generate_paths.py")

    # 2. Chain the Python script and Zig build together!
    # By putting them in the same custom_command, they run during the BUILD phase
    # when compile_commands.json is guaranteed to exist.
    add_custom_command(
        OUTPUT "${ZIG_LIB}"
        COMMAND ${PYTHON} "${PATHS_SCRIPT}"
        COMMAND zig build
        WORKING_DIRECTORY "${CMAKE_CURRENT_SOURCE_DIR}/.."
        VERBATIM
    )

    add_custom_target(zig_app_target DEPENDS "${ZIG_LIB}")

    target_link_libraries(${COMPONENT_LIB} INTERFACE "${ZIG_LIB}")
    add_dependencies(${COMPONENT_LIB} zig_app_target)

    # Force the linker to look for app_main in the Zig static lib
    target_link_options(${COMPONENT_LIB} INTERFACE "-u" "app_main")
endif()
```

generate_paths.py
The reason this file exists is that the Espressif fork of Zig currently struggles to automatically resolve the massive amount of ESP-IDF include paths across different local environments. This Python script solves that by dynamically parsing CMake’s compile_commands.json (which contains the exact include paths generated for your specific machine) and hunting down the newlib platform headers. It then compiles these into a paths.zig file. This means whether you installed ESP-IDF in ~/.espressif, /opt/esp-idf, or anywhere else on your Linux machine, the script will find the right paths and pass them to Zig automatically!
```python
import json
import os
import subprocess


def get_newlib_include():
    """Find the riscv32-esp-elf newlib include directory."""
    # Primary: ask gcc directly
    try:
        result = subprocess.run(
            ["riscv32-esp-elf-gcc", "-print-sysroot"],
            capture_output=True, text=True
        )
        sysroot = result.stdout.strip()
        candidate = os.path.join(sysroot, "include")
        if os.path.isdir(candidate):
            return candidate
    except Exception:
        pass

    # Fallback: scan ~/.espressif/tools/riscv32-esp-elf/ for newest version
    tools_dir = os.path.join(
        os.environ.get("IDF_TOOLS_PATH", os.path.expanduser("~/.espressif")),
        "tools", "riscv32-esp-elf"
    )
    if os.path.isdir(tools_dir):
        for version in sorted(os.listdir(tools_dir), reverse=True):
            candidate = os.path.join(
                tools_dir, version,
                "riscv32-esp-elf", "riscv32-esp-elf", "include"
            )
            if os.path.isdir(candidate):
                return candidate
    return ""


def get_newlib_platform():
    idf_path = os.environ.get("IDF_PATH", "")
    if not idf_path:
        # idf.py sometimes sets this instead
        idf_path = os.environ.get("IDF_PATH_OVERRIDE", "")
    if not idf_path:
        # Last resort: derive from the compile_commands.json paths themselves.
        # Any path containing esp-idf/components tells us where IDF lives.
        try:
            with open('build/compile_commands.json', 'r') as f:
                sample = f.read(4096)
            import re
            m = re.search(r'(/[^\s"]+/esp-idf)/components', sample)
            if m:
                idf_path = m.group(1)
        except Exception:
            pass
    if idf_path:
        candidate = os.path.join(idf_path, "components", "newlib", "platform_include")
        if os.path.isdir(candidate):
            return candidate
    return ""


def generate():
    try:
        with open('build/compile_commands.json', 'r') as f:
            commands = json.load(f)

        include_paths = set()
        for cmd in commands:
            parts = cmd['command'].split()
            for i, part in enumerate(parts):
                if part.startswith('-I'):
                    include_paths.add(part[2:].strip())
                elif part in ('-isystem', '-iwithprefix', '-iwithprefixbefore') and i + 1 < len(parts):
                    include_paths.add(parts[i + 1].strip())

        newlib_include = get_newlib_include()
        newlib_platform = get_newlib_platform()
        if not newlib_include:
            print("WARNING: could not find riscv32-esp-elf newlib include dir")
        if not newlib_platform:
            print("WARNING: could not find newlib/platform_include — is IDF_PATH set?")

        def clean(p):
            return p.replace("\\", "/")

        with open('main/paths.zig', 'w') as f:
            f.write('// Auto-generated by generate_paths.py — do not edit, do not commit\n')
            f.write(f'pub const newlib_include = "{clean(newlib_include)}";\n')
            f.write(f'pub const newlib_platform = "{clean(newlib_platform)}";\n')
            f.write('\n')
            f.write('pub const include_paths = &[_][]const u8{\n')
            for path in sorted(include_paths):
                f.write(f'    "{clean(path)}",\n')
            f.write('};\n')
        print("Successfully generated main/paths.zig")
    except Exception as e:
        print(f"Error generating paths: {e}")
        raise


if __name__ == "__main__":
    generate()
```

build.zig
Our Zig build script. This configures everything so that IDF can pick up the compiled static library and flash it to the P4. Crucially, it imports the paths.zig file generated by our Python script and feeds those dynamically discovered include paths directly into the Zig compiler via addIncludePath. This bridges the gap between ESP-IDF’s complex C environment and Zig’s build system, ensuring seamless C-interop regardless of where your toolchain is installed.
```zig
const std = @import("std");
const idf_data = @import("main/paths.zig");

pub fn build(b: *std.Build) void {
    const target = b.resolveTargetQuery(.{
        .cpu_arch = .riscv32,
        .os_tag = .linux,
        .abi = .musl,
        .cpu_model = .{ .explicit = &std.Target.riscv.cpu.esp32p4 },
        // ESP32-P4 has an FPU — tell Zig to use single-precision hard float
        // so the output matches ESP-IDF's -mabi=ilp32f objects
        .cpu_features_add = std.Target.riscv.featureSet(&.{
            .f, // single-precision float extension (F)
        }),
    });

    const lib = b.addLibrary(.{
        .linkage = .static,
        .name = "zig_app",
        .root_module = b.createModule(.{
            .root_source_file = b.path("main/app.zig"),
            .target = target,
            .optimize = .ReleaseSmall,
            .link_libc = true,
        }),
    });

    lib.root_module.addCMacro("ESP_PLATFORM", "1");
    lib.root_module.addCMacro("__IEEE_LITTLE_ENDIAN", "1");
    lib.root_module.addCMacro("FORCE_INLINE_ATTR", "static inline");
    lib.root_module.addCMacro("__WINT_TYPE__", "unsigned int");
    lib.root_module.addCMacro("_WINT_T_DECLARED", "1");
    lib.root_module.addCMacro("wint_t", "unsigned int");
    lib.root_module.addCMacro("ESP_IDF_RISCV_COMPAT", "1");

    // These paths come from main/paths.zig, which is generated at build time
    // by generate_paths.py. Never hardcoded, never committed, always correct.
    lib.root_module.addIncludePath(.{ .cwd_relative = idf_data.newlib_include });
    lib.root_module.addIncludePath(.{ .cwd_relative = idf_data.newlib_platform });
    for (idf_data.include_paths) |path| {
        lib.root_module.addIncludePath(.{ .cwd_relative = path });
    }

    b.installArtifact(lib);
}
```

main/app.zig
This is where the magic happens! We write our Zig code for the ESP32-P4 by directly importing the ESP headers.
```zig
const std = @import("std");

const idf = @cImport({
    @cDefine("wint_t", "unsigned int");
    @cInclude("riscv/rv_utils.h");
    @cInclude("riscv/interrupt.h");
    @cInclude("esp_private/interrupt_intc.h");
    // FreeRTOS standard includes
    @cInclude("freertos/FreeRTOS.h");
    @cInclude("freertos/task.h");
    @cInclude("esp_log.h");
});

const TAG = "ZIG_ESP32_P4_DUAL_CORE";

// Pure Zig RISC-V assembly to read the hardware core ID
inline fn getCoreID() u32 {
    return asm volatile ("csrr %[ret], mhartid"
        : [ret] "=r" (-> u32),
    );
}

// ============================================================================
// TASK A: Pinned to Core 0
// ============================================================================
fn task_core_0(arg: ?*anyopaque) callconv(.c) void {
    _ = arg;
    while (true) {
        // Ask the hardware which core is physically executing this line of code
        const core_id = getCoreID();
        idf.esp_log_write(idf.ESP_LOG_INFO, "CORE_0_TASK", "<<< Hello from Task A! I am physically running on Core: %d\n", core_id);

        // Do some dummy work
        var counter: u32 = 0;
        while (counter < 1000000) : (counter += 1) {
            asm volatile ("nop"); // Prevent Zig from optimizing this loop away
        }

        // Rest for 1 second
        idf.vTaskDelay(1000 / idf.portTICK_PERIOD_MS);
    }
}

// ============================================================================
// TASK B: Pinned to Core 1
// ============================================================================
fn task_core_1(arg: ?*anyopaque) callconv(.c) void {
    _ = arg;
    while (true) {
        // Ask the hardware which core is physically executing this line of code
        const core_id = getCoreID();
        idf.esp_log_write(idf.ESP_LOG_WARN, "CORE_1_TASK", ">>> Hello from Task B! I am physically running on Core: %d\n", core_id);

        // Do some dummy work
        var counter: u32 = 0;
        while (counter < 1000000) : (counter += 1) {
            asm volatile ("nop");
        }

        // Rest for 1 second (slightly offset so logs don't collide instantly)
        idf.vTaskDelay(950 / idf.portTICK_PERIOD_MS);
    }
}

// ============================================================================
// MAIN ENTRY POINT
// ============================================================================
export fn app_main() void {
    idf.esp_log_write(idf.ESP_LOG_INFO, TAG, "Waking up the beast. Starting Dual-Core Mode...\n");

    // Arguments:
    // Function, Name, Stack Size, Args, Priority, Handle, CORE ID!
    // Pin Task A exclusively to Core 0
    _ = idf.xTaskCreatePinnedToCore(task_core_0, "Task0", 4096, null, 2, null, 0);
    // Pin Task B exclusively to Core 1
    _ = idf.xTaskCreatePinnedToCore(task_core_1, "Task1", 4096, null, 2, null, 1);

    // The bootloader runs app_main() on Core 0 by default.
    // We let it exit, and FreeRTOS takes over routing our tasks to the physical cores.
}
```

To keep this post from getting too long, I'll make a separate post explaining the app.zig code in depth and why build.zig and generate_paths.py are structured the way they are.
The Hardware Itself
I bought this beauty off AliExpress (link here). It’s essentially an ESP32-P4 paired with an ESP32-C6 (for Wi-Fi and Bluetooth support) and a 7-inch LCD screen—perfect for when we start building tools with a UI!
A Quick Configuration
The latest version of ESP-IDF is configured for newer ESP32-P4 silicon by default. If you have an older revision (like I do), you’ll need to run:
```shell
idf.py menuconfig
```

Navigate to Component Config -> Hardware Settings -> Chip revision -> Select ESP32-P4 revisions <3.0 (No >=3.x Support). Press Space to toggle it (you should see an [*]), then press S to save and Q to quit.
Also, to ensure ESP-IDF can access the board on Linux, run:
```shell
sudo usermod -a -G dialout $USER
```

Then log out and back in (or just restart your PC) for the change to take effect.
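Once you've logged back in, you can verify the group change and look for the board's serial device. The device paths below are assumptions; the exact node depends on your board's USB bridge:

```shell
# "dialout" should appear in this list after re-login
id -nG

# The device node varies: /dev/ttyUSB0, /dev/ttyACM0, ...
# List whichever is present (prints a note if none is connected)
ls /dev/ttyUSB* /dev/ttyACM* 2>/dev/null || echo "no serial device found"
```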
Running The Project
All that’s left is to build and flash! In your terminal (with the environment active), run:
```shell
idf.py build flash monitor
```

And that's it! You should see something like this in your terminal:
```
Waking up the beast. Starting Dual-Core Mode...
<<< Hello from Task A! I am physically running on Core: 0
>>> Hello from Task B! I am physically running on Core: 1
I (301) main_task: Returned from app_main()
>>> Hello from Task B! I am physically running on Core: 1
<<< Hello from Task A! I am physically running on Core: 0
>>> Hello from Task B! I am physically running on Core: 1
<<< Hello from Task A! I am physically running on Core: 0
>>> Hello from Task B! I am physically running on Core: 1
```

And there we have it! Zig running on an ESP32-P4 with zero C source files in our main folder, and a properly structured pipeline that makes development straightforward.
And with this, we can now build the environment that our DBT-JIT will run on.