11 Aug 2025
Zig’s new design for IO and concurrency is beautiful.
Consider three different kinds of code that do IO or other blocking tasks.

First, code that doesn't care about concurrency:
fn saveData(io: Io, data: []const u8) !void {
    try saveFile(io, data, "saveA.txt");
    try saveFile(io, data, "saveB.txt");
}
Second, code that takes advantage of concurrency when the Io implementation offers it:

fn saveDataEfficiently(io: Io, data: []const u8) !void {
    var a_future = io.async(saveFile, .{ io, data, "saveA.txt" });
    defer a_future.cancel(io) catch {};
    var b_future = io.async(saveFile, .{ io, data, "saveB.txt" });
    defer b_future.cancel(io) catch {};
    try a_future.await(io);
    try b_future.await(io);
}
Third, code that requires the tasks to actually run concurrently (the body below is completed by mirroring the second snippet):

fn saveDataInParallel(io: Io, data: []const u8) !void {
    // not a great example, sue me
    var a_future = io.concurrent(saveFile, .{ io, data, "saveA.txt" });
    defer a_future.cancel(io) catch {};
    var b_future = io.async(saveFile, .{ io, data, "saveB.txt" });
    defer b_future.cancel(io) catch {};
    try a_future.await(io);
    try b_future.await(io);
}
The beautiful thing is that all of this code works fine with any concurrency technology the application cares to configure: a thread pool, green threads, or stackless coroutines. The first and second snippets even work fine with no concurrency at all: the same code that can take advantage of concurrency can also run in single-threaded, blocking mode. And the first (concurrency-oblivious) snippet can still be run in parallel with other code.
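The application picks the implementation once, at the top, and threads the resulting `io` through everything. A sketch of what that wiring might look like — the concrete Io implementations and their names are still in flux, so `std.Io.ThreadPool`, its `init`, and `pool.io()` are placeholders here, not a confirmed API:

```zig
const std = @import("std");

pub fn main() !void {
    // Hypothetical: some Io implementation backed by a thread pool.
    var pool: std.Io.ThreadPool = try .init(.{});
    defer pool.deinit();

    // The interface value that all the snippets above accept unchanged.
    const io = pool.io();
    try saveData(io, "hello");
}
```

Swapping in a green-thread or single-threaded blocking implementation would change only these few lines, not any of the three functions.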
It’s similar to the design of allocators: just as the vast majority of code is oblivious to the choice of allocator, the vast majority of code is oblivious to the choice of concurrency model. And there are no barriers to reuse!
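The allocator analogy, in code: this helper (a made-up example, not from the original post) only sees the `std.mem.Allocator` interface, so it works identically with a general-purpose allocator, an arena, or a fixed buffer — exactly the role `Io` plays above.

```zig
const std = @import("std");

// Oblivious to the allocator implementation, just as saveData is
// oblivious to the Io implementation.
fn repeat(gpa: std.mem.Allocator, s: []const u8, n: usize) ![]u8 {
    const out = try gpa.alloc(u8, s.len * n);
    for (0..n) |i| @memcpy(out[i * s.len ..][0..s.len], s);
    return out;
}
```

The caller decides the allocation strategy; `repeat` is reusable under all of them.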
Motivated by the confusion on Hacker News.