01 The Problem
Production APIs lose hours to invisible bottlenecks.
Every production Node.js application eventually hits the same wall: something is slow, something is failing intermittently, and you have no idea where to look. At Pranshtech I rebuilt our entire observability stack from scratch: custom request logging middleware, manual database query timing, ad-hoc trace IDs scattered across microservices. It worked, but it cost two full days every time we brought up a new service.
Commercial APMs like Datadog and New Relic solve the problem, but at $25–$50 per host per month they price out the indie developer and early-stage startup entirely. There was nothing in between: either you pay the enterprise tax or you instrument everything by hand. APILens was built to close that gap.
This project taught me that building the product is 20% of the work. Marketing is the other 80%.
02 The Solution
One line of code. Zero dependencies.
The package monkey-patches the Node.js HTTP layer and each supported database client at require-time, so every inbound request, outbound query, and downstream HTTP call is captured automatically. A distributed trace ID is generated on the first incoming request and threaded through the entire call chain using Node's AsyncLocalStorage: no manual propagation, no context passing.
Structured JSON logs are buffered in memory and flushed to the apilens.rest cloud dashboard over a persistent WebSocket connection. The dashboard aggregates p50/p95/p99 latency, error rates, and slow query analysis in real time. The entire package ships as a single compiled file with zero production dependencies.
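The p50/p95/p99 roll-up the dashboard performs can be sketched with the nearest-rank percentile method. This is illustrative only: the dashboard's actual aggregation strategy isn't specified here, and the function name and sample data are invented for the example.

```javascript
// Nearest-rank percentile over a batch of latency samples.
// Illustrative sketch, not the dashboard's real implementation.
function percentile(samples, p) {
  const sorted = [...samples].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(0, rank - 1)];
}

const latenciesMs = [12, 15, 11, 210, 14, 13, 16, 12, 95, 14];
console.log(percentile(latenciesMs, 50)); // → 14
console.log(percentile(latenciesMs, 95)); // → 210
```

Nearest-rank is a common choice for streaming dashboards because it needs no interpolation and always returns an actually observed latency value.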
03 Architecture
The interesting engineering.
AsyncLocalStorage for zero-overhead trace propagation
Node.js AsyncLocalStorage (stable since v16) lets you attach arbitrary data to an async execution context: every setTimeout, Promise chain, and I/O callback that descends from the original request inherits the same store. APILens creates a new store on each inbound request, writes the trace ID and request metadata once, and reads it back anywhere in the call tree without touching function signatures or passing context objects.
Monkey-patching without breaking the world
Patching pg, mysql2, mongoose, redis, ioredis, @prisma/client, axios, and node-fetch at the module level is straightforward but fragile: the patch must run before any application code imports those modules, must not break existing error handling, and must restore the original function on teardown. APILens wraps each client method in a try/finally block so an error in the instrumentation layer never propagates to application code. The original function is always called.
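The wrapping idea can be sketched like this. `instrumentMethod` and the fake client are hypothetical stand-ins; the real patch targets the module methods of the clients listed above. The key properties from the text hold: the original function is always called, observer errors never escape, and teardown restores the unpatched method.

```javascript
// Sketch of safe monkey-patching (illustrative names, not APILens internals).
function instrumentMethod(target, name, observe) {
  const original = target[name];
  function patched(...args) {
    const start = process.hrtime.bigint();
    try {
      return original.apply(this, args); // the real call, always made
    } finally {
      try {
        observe({ method: name, durationNs: process.hrtime.bigint() - start });
      } catch {
        // Errors in the instrumentation layer never reach application code.
      }
    }
  }
  patched.restore = () => { target[name] = original; }; // teardown hook
  target[name] = patched;
  return patched;
}

// Demo against a fake client (a stand-in for e.g. pg's query method).
const fakeClient = { query: (sql) => 'rows for ' + sql };
const observations = [];
instrumentMethod(fakeClient, 'query', (o) => observations.push(o));
console.log(fakeClient.query('SELECT 1')); // 'rows for SELECT 1'
```

The try/finally shape also covers the throwing path: if the patched method throws, the observation is still recorded and the original error propagates to the caller unchanged.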
Buffered async logger with backpressure
Synchronous logging kills throughput. APILens accumulates log entries in an in-process ring buffer (configurable size, default 1 000 entries) and flushes to the cloud on a 250 ms interval or when the buffer reaches 80% capacity, whichever comes first. If the WebSocket connection drops, entries are held in the buffer and replayed on reconnect. If the buffer overflows, the oldest entries are dropped silently, so application performance is never impacted.
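A stripped-down sketch of the flush-threshold, replay, and overflow behaviour described above. `RingLogger` and the `send` callback are illustrative (the real transport is the WebSocket flush, and the 250 ms timer is omitted here); the array-based buffer is a simplification of a true ring buffer.

```javascript
// Buffered logger sketch: flush at a capacity threshold, requeue the
// batch if the transport fails, drop the oldest entry on overflow.
class RingLogger {
  constructor({ capacity = 1000, flushAt = 0.8, send }) {
    this.capacity = capacity;
    this.flushThreshold = Math.floor(capacity * flushAt);
    this.send = send;
    this.entries = [];
    this.dropped = 0;
  }
  log(entry) {
    if (this.entries.length >= this.capacity) {
      this.entries.shift(); // overflow: drop the oldest entry, silently
      this.dropped++;
    }
    this.entries.push(entry);
    if (this.entries.length >= this.flushThreshold) this.flush();
  }
  flush() {
    if (this.entries.length === 0) return;
    const batch = this.entries;
    this.entries = [];
    try {
      this.send(batch);
    } catch {
      // Transport down: hold the batch for replay on the next flush.
      this.entries = batch.concat(this.entries);
    }
  }
}
```

The application-facing `log()` call never blocks on the network and never throws: in the worst case (transport down, buffer full) it degrades to silently discarding the oldest telemetry, which matches the stated priority of never impacting application performance.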
Zero runtime dependencies intentionally
Every npm package you add to dependencies is a liability: version conflicts, supply-chain risk, bundle size. APILens patches native Node.js APIs and uses only built-ins (http, https, async_hooks, crypto, zlib). The compiled output is a single 18 KB file. This was the hardest constraint to maintain: the WebSocket client, the ring buffer, the JSON serialiser, and the HTTP interceptor are all written from scratch.
04 Tech Stack
Built with
05 Outcome
What shipped.
- Published to npm as auto-api-observe: installable in 30 seconds, zero configuration required for default use.
- Auto-instruments 8 clients: PostgreSQL (pg), MySQL (mysql2), MongoDB (mongoose), Redis (redis, ioredis), Prisma (@prisma/client), and outbound HTTP via axios and node-fetch.
- Cloud dashboard at apilens.rest provides real-time request timelines, slow query highlighting, error rate tracking, and p50/p95/p99 latency breakdowns; free tier, no credit card.
- Handles 600 000+ requests per minute in load tests without measurable throughput degradation (< 0.3 ms overhead per request on M2 hardware).
- Zero external runtime dependencies: the entire instrumentation layer is built on Node.js built-ins, keeping supply-chain risk and bundle size to a minimum.