Backend architecture
Production-ready API layer with type safety, background jobs, and rate limiting.
The backend decisions are already made
Building a backend from scratch means making dozens of decisions before you write your first endpoint. How to validate input. How to handle errors. How to structure your API. How to run background work. How to protect against abuse. Each decision feels small, but together they compound into either a clean system or a tangled one.
The framework ships with all of this resolved. The API layer is type-safe. Input validation happens automatically. Background jobs retry on failure. Rate limiting is built in and turns on with two environment variables.
Type-safe from database to browser
When you add a column to the database, the types update in the API, and your editor catches any mismatch in the frontend before you even run the app. This chain is unbroken. There's no point where data passes through an untyped gap and silently breaks.
Every API procedure validates its input against a defined schema. If someone sends bad data, they get a clear error. If the input is valid, the types are guaranteed downstream. The agents follow this pattern for every procedure they create.
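As a rough sketch of that pattern (the framework's actual validation library isn't named here, so this hand-rolls the schema check that a library would normally provide), a procedure parses unknown input into a typed value before any business logic runs:

```typescript
// Hypothetical example: validate raw input at the boundary so everything
// downstream works with guaranteed types. `createPost` is an illustrative
// procedure name, not part of the framework.

type CreatePostInput = { title: string; body: string };

// Returns typed input on success, or throws a clear error on bad data.
function parseCreatePostInput(raw: unknown): CreatePostInput {
  if (typeof raw !== "object" || raw === null) {
    throw new Error("Invalid input: expected an object");
  }
  const { title, body } = raw as Record<string, unknown>;
  if (typeof title !== "string" || title.length === 0) {
    throw new Error("Invalid input: 'title' must be a non-empty string");
  }
  if (typeof body !== "string") {
    throw new Error("Invalid input: 'body' must be a string");
  }
  // Past this point, the types are guaranteed downstream.
  return { title, body };
}

// The procedure body only ever sees validated, typed input.
function createPost(raw: unknown): { slug: string } {
  const input = parseCreatePostInput(raw);
  return { slug: input.title.toLowerCase().replace(/\s+/g, "-") };
}
```

The point is the shape, not the specific checks: validation happens once, at the edge, and the rest of the code never re-checks what it already knows.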
Background jobs that handle failure
Some work can't happen during a web request. Sending a batch of emails. Processing an uploaded file. Running a daily cleanup task. These need to happen in the background, and they need to handle failure gracefully.
The framework uses a job system that retries failed work automatically, supports scheduled tasks (daily reports, weekly cleanups), and handles long-running operations that would time out in a normal request. Background jobs connect directly to the database with elevated permissions, so they're not limited by the same security policies that protect user-facing requests.
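The retry behavior can be sketched in a few lines. This is an illustration of the mechanism, not the framework's real job API: a job re-runs on failure until it succeeds or exhausts its attempts.

```typescript
// Illustrative sketch: re-run a failing job up to `maxAttempts` times.
// A real queue would also back off between attempts (e.g. exponentially)
// and persist job state so retries survive a process restart.
async function runWithRetries<T>(
  job: () => Promise<T>,
  maxAttempts = 3,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await job();
    } catch (err) {
      lastError = err; // remember the failure, then try again
    }
  }
  throw lastError; // all attempts failed; surface the last error
}
```

A flaky task that fails twice and succeeds on the third attempt completes normally under this wrapper, which is exactly the property that makes background work reliable: transient failures become invisible.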
The agents know when to use a background job instead of an API endpoint. You don't have to think about it.
Rate limiting with two environment variables
Abuse protection is built into the API layer. Three tiers handle different levels of trust:
- Public routes allow a reasonable number of requests per minute from any visitor
- Auth routes (login, signup, password reset) have strict limits to prevent brute force attacks
- Protected routes give authenticated users higher limits based on their user ID
To turn it on, add two environment variables from Upstash Redis (there's a free tier). That's it. No code changes. The middleware detects the variables and activates. Leave them empty and rate limiting stays off with zero overhead.
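The tiered pattern above can be sketched with an in-memory fixed window. Two things here are assumptions: the per-tier limits are made-up numbers (the document doesn't state the real ones), and the production middleware uses Upstash Redis rather than process memory so counts are shared across server instances.

```typescript
// Sketch of tiered rate limiting with a fixed one-minute window.
// In-memory only; the real middleware stores counts in Upstash Redis.

type Tier = "public" | "auth" | "protected";

// Hypothetical requests-per-minute limits per tier (not the framework's
// actual numbers): strict for auth routes, generous for signed-in users.
const LIMITS: Record<Tier, number> = { public: 60, auth: 5, protected: 120 };

const windows = new Map<string, { count: number; resetAt: number }>();

// `key` is a visitor IP for public/auth tiers, a user ID for protected routes.
function allowRequest(tier: Tier, key: string, now = Date.now()): boolean {
  const id = `${tier}:${key}`;
  const win = windows.get(id);
  if (!win || now >= win.resetAt) {
    // First request in a fresh window: start counting, allow it.
    windows.set(id, { count: 1, resetAt: now + 60_000 });
    return true;
  }
  win.count++;
  return win.count <= LIMITS[tier];
}
```

With these sample limits, a sixth login attempt from the same IP inside one minute is rejected, while requests from other IPs and requests after the window resets pass through.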
The agents apply the right tier to every procedure they create. New endpoints get rate limiting automatically.
What this means in practice
You don't need a backend engineer on your team to have a backend that handles edge cases properly. The patterns are documented, the agents follow them, and the result is an API layer that validates input, handles errors with clear messages, runs background work reliably, and protects itself against abuse. The kind of setup that usually takes a senior engineer a few weeks to get right.