Do you know how much software it takes to display “Hello, World” in a browser in 2026?

In 1991, Tim Berners-Lee built the first web server on a NeXT workstation. The HTTP server, the HTML renderer, and the entire network stack fit in 8MB of RAM. The machine could serve pages, render them, and still have room for his email.

A “Hello, World” web app generated by a modern framework starter template now requires:

  • A JavaScript runtime: ~30MB
  • A node_modules directory: ~150MB (47,000 files, but who’s counting)
  • A Webpack/Vite build artifact: ~50MB
  • A Docker base image: ~200MB
  • A .env.example file that’s somehow 40 lines long: priceless

~500MB to render two words on a screen.

We made the software more than sixty times larger to do the same thing. If this were any other engineering discipline, there would be a congressional hearing.


The Dependency Black Hole

Open any project generated by a popular JavaScript framework. Run npm list. Count the direct dependencies. Maybe 15, 20. Reasonable. Now run npm list --all. Count again.

The number is not 20 plus some extras. It’s 20 multiplied by 50. You have a thousand packages. You asked for a router, a logger, and a testing framework. You got a thousand packages.

Here’s how that happens: the router depends on a path-matching library. The path-matching library depends on a string escaping utility. The string escaping utility depends on a regular expression helper. The regular expression helper depends on a function that checks if a number is an integer. Yes. There is a package on npm, downloaded 40 million times a week, whose entire purpose is to check if a number is an integer. It was last updated in 2019. It has three open security advisories. It exists because someone, somewhere, wanted isInteger and chose to import it instead of writing Number.isInteger(n).
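For the record, the built-in has shipped in every JavaScript runtime since ES2015. A sketch of what replacing that entire package looks like (no imports, because that's the point):

```javascript
// No dependency required: Number.isInteger is part of the language.
// It returns true only for actual integer numbers, with no type coercion.
console.log(Number.isInteger(42));    // true
console.log(Number.isInteger(4.2));   // false
console.log(Number.isInteger("42")); // false: a string is not a number
```

That's the whole replacement. One built-in call, zero entries in node_modules, zero security advisories.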

Every one of those thousand packages is:

  • A potential security vulnerability you didn’t choose and can’t easily remove
  • A potential breaking change next time someone runs npm update without reading the changelog
  • Code written by a stranger, maintained in their spare time, funded by nobody, depended on by everyone
  • Something you are now legally responsible for if it does something bad, per the terms of service you didn’t read

Loom — the engine running this blog — has zero external dependencies beyond zlib. The compiled binary is 800KB. Not 800MB. 800KB. Eight hundred kilobytes. You could fit it on a floppy disk with room to spare. The container image, if you want one, is under 2MB.

That’s not a flex. That’s what software looks like when you only add things you actually need. It’s also what software looked like all the time before we collectively lost our minds.

The Toolchain That Ate Itself

Here is the complete history of JavaScript build tooling, compressed into one painful timeline:

  1. Browsers couldn’t load modules, and hundreds of script files meant hundreds of HTTP requests -> bundlers (Browserify, Webpack)
  2. Webpack was too slow to configure -> zero-config bundlers (Parcel)
  3. Parcel was too slow to build -> faster bundlers rewritten in Go/Rust (esbuild, SWC)
  4. Raw esbuild was tedious to configure -> dev server wrappers (Vite)
  5. Vite alone wasn’t enough -> meta-frameworks wrapping Vite (Next.js, Nuxt, SvelteKit)
  6. Meta-frameworks were complex -> starter templates pre-configuring meta-frameworks
  7. Starter templates diverged -> framework-specific CLIs enforcing the templates

We spent a decade solving problems created by the previous decade’s solutions. Each layer exists because the layer below it was broken. The current state of the art is seven abstraction layers on top of a language that runs in a virtual machine inside a container orchestrated by Kubernetes running Linux inside a hypervisor on a server you’ve never seen.

At each step, someone wrote a blog post about how this new layer was going to “simplify everything.” The everything stayed complicated. The blog posts got more optimistic.

This is the software equivalent of building a taller ladder to get out of the hole you dug. At some point, you should stop digging.

Resume-Driven Development

There’s a name for choosing technologies because they look impressive in an interview rather than because they solve the problem at hand.

Resume-Driven Development.

It is not a fringe practice. It is not something other teams do. It describes a meaningful percentage of all architectural decisions made at small-to-medium companies, and if you’ve ever sat in an architecture meeting where someone proposed Kubernetes for a two-container app, you’ve seen it happen in real time.

The engineer who proposes microservices gets to put “designed microservice architecture” on their resume. The engineer who implements Kubernetes gets “Kubernetes” on their LinkedIn. The engineer who deploys a Kafka cluster gets “event-driven architecture” and a talk proposal for a local meetup. The engineer who builds a simple, boring monolith that works perfectly for five years and never wakes anyone up at 3am gets to write… “maintained backend system.”

Guess which one gets the job offers.

The incentive structure rewards complexity. Complexity reads as seniority. Simplicity reads as naivete — even when the simple system outperforms the complex one by every metric that actually matters: uptime, latency, cost, time-to-fix, time-to-onboard, and engineer happiness.

The people funding the infrastructure often don’t have the technical context to push back. “We need Kubernetes” sounds like a technical requirement. “We need a bigger EC2 instance” sounds like we’re behind. The result: small teams running infrastructure designed for organizations 50 times their size, paying cloud bills that could fund a junior engineer’s salary, while the senior engineers are fully employed keeping the complexity from collapsing under its own weight.

The Hidden Cost of “Modern”

“Modern” has become a virtue signal completely detached from technical meaning.

A modern stack is not better than an older one by virtue of being newer. A modern stack is better only if it solves more problems than it introduces. Most modern stacks do not clear this bar. Most modern stacks trip over this bar, fall face-first, and then blog about the experience.

When you choose a framework, you’re not just choosing how to write the feature today. You’re choosing:

  • How hard it will be to debug in 18 months when you’ve forgotten the conventions
  • How long it takes to onboard the next engineer (and the next one, and the next one)
  • What happens when the framework ships a breaking major version and the migration guide is a GitHub discussion with 200 comments and no clear answer
  • What percentage of your engineering time is “building the product” vs. “feeding the machine that builds the product”

These costs are invisible on day one. They’re manageable at month three. By month twelve, they’re the dominant expense. By year two, they’re the reason your best engineer quit — not because the work was hard, but because the work was joyless. Nobody went into software to update Webpack configs.

Simplicity Is an Achievement, Not a Shortcut

The hardest thing to do in software is say no.

No, we don’t need this dependency for something we can write in 40 lines. No, we don’t need a distributed queue for 3,000 daily users. No, we don’t need Kubernetes for a two-container application. No, we don’t need a meta-framework for a site that serves HTML. No, we don’t need GraphQL. We have four endpoints.

Saying no is hard because complexity provides cover. If the system is complex, failures are “infrastructure issues” and not anyone’s fault. If the system is complex, your title has “platform” in it, which sounds more important than “backend.” If the system is complex, removing anything feels risky because nobody fully understands it, which means nobody can prove the removal is safe, which means nothing ever gets removed, which means the system gets more complex, which means — you see where this goes.

Complexity is job security for individuals and a slow bleed for the organization paying for it. It is the most successful wealth transfer in the history of software engineering: from companies to cloud providers, one unnecessary microservice at a time.

But simplicity is also: a $6/month server instead of a $600/month cluster. A 10-second build instead of a 10-minute one. A new engineer productive in two days instead of two weeks. A bug found in 5 minutes instead of 3 hours. A system that runs for three years without anyone touching it, because there’s nothing to break.

Loom is 800KB. No dependencies. 10-second build. Sub-millisecond response times on hardware that costs less than a cup of coffee per month. Not because it does something clever — because it refuses to do thousands of unnecessary things.

That’s the other end of the spectrum. It’s not a theoretical position. It’s running, right now, serving this page, on a server that costs less than your lunch.

It’s available to anyone willing to make different choices. The hard part was never the code. The hard part is looking at the industry consensus and saying: no, I don’t think I will.

The source code is here. It’s 800KB. It serves this page. It has no opinions about your career — only about whether software should work.


End of the “Rebuilding the Web” series.