I have been working on go-fw, a small, code-driven guide to learning HTTP frameworks in Go.
Each chapter focuses on one HTTP concept and shows the same runnable example across net/http, Chi, Gin, Echo, Fiber, and Mizu. The key idea is that the README files are the single source of truth: all example code is generated from them, and the goal is that every example compiles and runs (a few may still contain errors until I set up proper CI/CD).
It is meant as a practical reference and a way to understand where frameworks differ in responsibility and wiring, not just syntax.
I'm working on Mizu, a small Go web framework built around a simple idea: net/http is already good, frameworks should not fight it.
I've kept running into the same problems in popular Go frameworks: hidden context mutation, magic middleware ordering, reflection-heavy binding, and APIs that slowly drift away from the standard library. The Gin ecosystem in particular has accumulated a lot of technical debt and footguns, which this post summarizes well: https://eblog.fly.dev/ginbad.html
Mizu is boring by design:
- Built directly on Go 1.22's http.ServeMux
- Explicit middleware chains with clear scoping
- No reflection, no codegen, no global state
- A real request context type that still interoperates with net/http
- First-class graceful shutdown and error handling
If you're happy with net/http but want slightly better ergonomics and structure without losing control, that's the gap Mizu tries to fill.
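To make "boring" concrete, here is a minimal stdlib-only sketch of the style Mizu builds on (illustrative, not Mizu's actual API): Go 1.22 ServeMux patterns plus an explicit middleware chain where the ordering is visible at the call site.

```go
package main

import (
	"log"
	"net/http"
)

// middleware is the standard func(http.Handler) http.Handler shape,
// so chains compose with plain function application.
type middleware func(http.Handler) http.Handler

// chain applies middlewares in the order listed: the first one wraps
// outermost, so ordering is explicit at the call site.
func chain(h http.Handler, mws ...middleware) http.Handler {
	for i := len(mws) - 1; i >= 0; i-- {
		h = mws[i](h)
	}
	return h
}

func logging(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		log.Printf("%s %s", r.Method, r.URL.Path)
		next.ServeHTTP(w, r)
	})
}

func main() {
	mux := http.NewServeMux()
	// Go 1.22 ServeMux: method and path wildcards in the pattern.
	mux.HandleFunc("GET /users/{id}", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("user " + r.PathValue("id")))
	})
	log.Fatal(http.ListenAndServe(":8080", chain(mux, logging)))
}
```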
"Emacs is the ground. We run around and act silly on top of it, and when we die, may our remnants grace its ongoing incrementation." - Thien-Thi Nguyen
I don't think the design in the article works in practice.
A single `events` table falls apart as the system grows, and untyped JSONB in the `event_data` column just moves the mess into code. Event payloads drift, handlers fill with branching logic, and replaying or migrating old events becomes slow and risky. The pattern promises clarity but eventually collapses into a pile of conditionals trying to decode years of inconsistent data.
A simpler and more resilient approach is to use the database features already built for this. Stored procedures can record business data and audit records together in a controlled way. CDC provides a clean stream for the tables that actually need downstream consumers. And even carefully designed triggers give you consistent invariants and auditability without maintaining a separate projection system that can lag or break.
Event sourcing works when the domain truly centers on events, but for most systems these database-driven tools stay cleaner, cheaper, and far more predictable over time.
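To sketch what I mean by the trigger route (Postgres-flavored, with hypothetical `orders`/`orders_audit` tables): the audit row is written in the same transaction as the business row, so it can never lag or diverge, and application code just writes typed rows.

```go
package main

import (
	"database/sql"
	"log"

	_ "github.com/lib/pq" // Postgres driver
)

// Hypothetical schema: a typed business table plus an append-only audit
// table, kept in sync by a trigger instead of a separate projection.
const auditDDL = `
CREATE TABLE IF NOT EXISTS orders (
    id          bigserial PRIMARY KEY,
    customer_id bigint  NOT NULL,
    total       numeric NOT NULL
);

CREATE TABLE IF NOT EXISTS orders_audit (
    id         bigserial   PRIMARY KEY,
    order_id   bigint      NOT NULL,
    op         text        NOT NULL,
    changed_at timestamptz NOT NULL DEFAULT now(),
    row_data   jsonb       NOT NULL
);

CREATE OR REPLACE FUNCTION audit_orders() RETURNS trigger AS $$
BEGIN
    -- Runs inside the same transaction as the write, so the audit row
    -- can never lag behind or diverge from the business row.
    INSERT INTO orders_audit (order_id, op, row_data)
    VALUES (NEW.id, TG_OP, to_jsonb(NEW));
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

DROP TRIGGER IF EXISTS orders_audit_trg ON orders;
CREATE TRIGGER orders_audit_trg
AFTER INSERT OR UPDATE ON orders
FOR EACH ROW EXECUTE FUNCTION audit_orders();
`

func main() {
	db, err := sql.Open("postgres", "postgres://localhost/app?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	if _, err := db.Exec(auditDDL); err != nil {
		log.Fatal(err)
	}
	// Application code just inserts the typed row; auditing is now the
	// database's job, not a handler's.
	if _, err := db.Exec(
		`INSERT INTO orders (customer_id, total) VALUES ($1, $2)`, 42, 99.95,
	); err != nil {
		log.Fatal(err)
	}
}
```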
Yeah, you should. Zig is a trending language right now, and in the coming years many projects are likely to be rewritten in Zig instead of Rust (often abbreviated "RIIZ").
I was half joking. Folks keep saying everything will get rewritten in Zig, so I played along with that. Nothing serious behind it.
Still only half serious: I think only the real wizard types, like Jarred Sumner (Bun) and Mitchell Hashimoto (Ghostty), who understand both low-level systems and higher-level languages, should be writing big tools in Zig. The tough part in the next few years will not be building things; it will be keeping them alive if the authors step away or the ecosystem moves in a different direction.
If you're doing it for real-world value, keep doing that. But if you want traction, writing in a "fancy" language is almost a requirement. "A database engine written in Zig" or "A search engine written in Zig" sounds much flashier and practically guarantees attention. Look at this book: it is definitely AI slop, but it sits at the top spot, and there's barely any discussion about the language itself.
Enough ranting; back to some reasons to choose Zig:
- Cross-platform tools with tiny binaries (Zig's built-in cross-compilation avoids the complex setup C needs)
- System utilities or daemons (explicit error handling instead of the silent failure patterns common in C)
- Embedded or bare metal work (predictable rules and fewer footguns than raw C)
- Interfacing with existing C libraries (direct header import without manual binding code)
- Build and deployment tooling (single build system that replaces Make and extra scripts)
For my personal usage, I'm working on replacing Docker builds for some Go projects that rely heavily on CGO by using `zig cc`. I'm not using the Zig language itself, but this could be considered one of its use cases.
> For my personal usage, I'm working on replacing Docker builds for some Go projects that rely heavily on CGO by using `zig cc`. I'm not using the Zig language itself, but this could be considered one of its use cases.
Hm, I can see a good use case when we want reproducible builds of Go packages, including their C extensions. Is that your use case, or are you aiming for multi-environment support for your compiled "CGO extensions"?
I need to bundle a lot of C libraries, some dynamically linked and some statically linked, and I need to deploy them on different operating systems, including some that are difficult to work with, like RHEL. Right now the builds are slow because I use a separate Dockerfile for each platform and then copy the binary back to the host. With `zig cc` I could build binaries for different platforms and architectures without Docker.
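Concretely, the invocation I have in mind looks something like this (target triple and paths are illustrative):

```sh
# Cross-compile a CGO-heavy Go binary for linux/arm64 straight from the
# host: zig ships its own libc headers and libs, so no Docker sysroot.
CGO_ENABLED=1 GOOS=linux GOARCH=arm64 \
  CC="zig cc -target aarch64-linux-gnu" \
  CXX="zig c++ -target aarch64-linux-gnu" \
  go build -o bin/myapp-linux-arm64 ./cmd/myapp
```

Pointing the triple at `aarch64-linux-musl` instead is one way to get a static binary for the awkward distros.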
I second SigNoz. I was paying a fortune for a cloud observability platform that cost more and more every month. Then I switched to self-hosted SigNoz on a cheap Hetzner box and now my observability stack costs $10 a month.