The Tool Writes, The Engineer Thinks
A practical guide to working with AI as a senior engineer. What to delegate, what to own, and where the line is.
Every engineer has the moment. You're vibing. Claude or GPT is cranking out functions, tests, whole modules. The code compiles. Tests pass. You feel like a 10x developer, which is a phrase that should trigger an automatic psych evaluation. Then, around file fifty, something breaks. Not a clean break. A "spend three days reading stack traces in your underwear" break.
That's the line. On one side: AI-assisted development. On the other: actual engineering. Most people don't know which side they're standing on.
What to delegate
Boilerplate and CRUD. API routes, database models, form validation, type definitions. AI writes these faster than you can think them. I once spent four hours hand-crafting a Prisma schema that Claude generated in eleven seconds. I don't talk about that day.
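This kind of boilerplate is exactly the shape of code worth delegating: typed, repetitive, and obvious once you see it. A minimal sketch of what that looks like (the `User` type and in-memory store here are illustrative, not from any real codebase):

```typescript
// Illustrative CRUD boilerplate: the kind of code AI drafts in seconds.
// User, createUser, and the in-memory Map are all hypothetical.
interface User {
  id: number;
  email: string;
  name: string;
}

const users = new Map<number, User>();
let nextId = 1;

function createUser(email: string, name: string): User {
  const user: User = { id: nextId++, email, name };
  users.set(user.id, user);
  return user;
}

function getUser(id: number): User | undefined {
  return users.get(id);
}

function deleteUser(id: number): boolean {
  return users.delete(id);
}
```

The value isn't the code itself; it's that you never had to hold it in your head.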
Tests for existing code. Describe the behavior, let AI write the test. You review the edge cases it missed. There will be edge cases it missed.
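In practice that looks like: you name the behavior, the AI drafts the assertions, and you supply the inputs it never thought of. A sketch, with a hypothetical `slugify` function standing in for the code under test:

```typescript
// Hypothetical function under test.
function slugify(title: string): string {
  return title
    .trim()
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-")
    .replace(/^-+|-+$/g, "");
}

// The kind of test AI writes unprompted: the happy path.
console.assert(slugify("Hello World") === "hello-world");

// The edge cases you add on review: empty input, all-punctuation input.
console.assert(slugify("") === "");
console.assert(slugify("!!!") === "");
```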
Refactoring. "Convert this class component to hooks" or "extract this into a utility function." Mechanical transformations. This is what AI was born for. Not sentience. Not art. Converting your 2019 class components.
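The "extract into a utility" case is the mechanical ideal: same behavior, one home. A hedged sketch (`formatPrice` is illustrative; imagine it previously copy-pasted inline in two handlers):

```typescript
// Mechanical extraction: logic that was duplicated inline across handlers,
// pulled into one utility. formatPrice and its call sites are illustrative.
function formatPrice(cents: number, currency = "USD"): string {
  return new Intl.NumberFormat("en-US", {
    style: "currency",
    currency,
  }).format(cents / 100);
}
```

Nothing in this transformation requires judgment, which is precisely why it's delegable.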
Documentation. AI is genuinely good at generating docs, JSDoc comments, and README updates from existing code. It writes better documentation than most engineers, which is less a compliment to AI and more an indictment of engineers.
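A hedged example of that output, where the function is hypothetical and the JSDoc is the part you'd delegate:

```typescript
/**
 * Count whole days between two dates, ignoring time of day.
 *
 * @param start - The earlier date.
 * @param end - The later date.
 * @returns Whole days from start to end; negative if end precedes start.
 */
function daysBetween(start: Date, end: Date): number {
  const MS_PER_DAY = 86_400_000;
  const a = Date.UTC(start.getFullYear(), start.getMonth(), start.getDate());
  const b = Date.UTC(end.getFullYear(), end.getMonth(), end.getDate());
  return Math.round((b - a) / MS_PER_DAY);
}
```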
What to own
Architecture. Which database? Monolith or microservices? REST or GraphQL? Event-driven or request-response? These are judgment calls. AI can't make them because it doesn't know your constraints. It doesn't know your team has three people, one of whom is on paternity leave, and the other just discovered Rust. It doesn't know your deadline was last week.
Security boundaries. AI will happily write injectable SQL, auth flows that leak tokens, and APIs that expose internal IDs. It does this with the confidence of a LinkedIn thought leader. Every security-critical path needs a human who has been burned before.
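The injectable-SQL case is worth making concrete. A sketch of the two patterns (the functions here just build query representations; a node-postgres-style client would receive the safe version as `query(text, values)`):

```typescript
// The unsafe pattern AI often produces: interpolating user input into SQL.
function unsafeQuery(email: string): string {
  return `SELECT * FROM users WHERE email = '${email}'`;
}

// The boring, safe pattern: fixed query text, values shipped separately.
function safeQuery(email: string): { text: string; values: string[] } {
  return { text: "SELECT * FROM users WHERE email = $1", values: [email] };
}

// A hostile input like  ' OR '1'='1  turns the interpolated query into
// one that matches every row. The parameterized version never changes shape.
const hostile = "' OR '1'='1";
```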
State management. The shape of your data, where it lives, how it flows through your app. This is where complexity compounds. AI generates plausible-looking state code that works beautifully for a demo and creates subtle, soul-destroying bugs three months later. By then, the original prompt is long gone and you're just a person staring at a Redux store, questioning your life choices.
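One concrete version of that soul-destroying bug: a reducer that mutates state in place. It works in the demo, but anything that compares references (memoized selectors, React's re-render checks) never sees the change. `Cart` and `addItem` are illustrative:

```typescript
interface Cart {
  items: string[];
}

// Plausible-looking AI output: mutates the object it was given.
// Reference-equality change detection will never notice the update.
function addItemBuggy(state: Cart, item: string): Cart {
  state.items.push(item);
  return state;
}

// The correct version: new references at every changed level.
function addItem(state: Cart, item: string): Cart {
  return { ...state, items: [...state.items, item] };
}
```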
Error handling at boundaries. What happens when the database is down? When the third-party API changes its response format without telling anyone, because of course it does? When a user sends Unicode that breaks your parser? AI handles the happy path. You handle reality.
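Handling reality at the boundary mostly means validating shape before trusting it. A minimal sketch (`parsePrice` is hypothetical; the point is the checks, not the name):

```typescript
// At the boundary, never trust the shape of a third-party response.
// Validate before converting; return null instead of throwing mid-pipeline.
function parsePrice(payload: unknown): number | null {
  if (typeof payload !== "object" || payload === null) return null;
  const value = (payload as Record<string, unknown>).price;
  if (typeof value !== "number" || !Number.isFinite(value)) return null;
  return value;
}
```

The happy-path version of this is one line. The real version is the other four.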
The review discipline
I review AI code the way I review junior engineer PRs: assume competence, verify judgment. Except the junior engineer eventually learns. The AI will make the same mistake tomorrow with the same confidence.
- Read the diff, don't skim it. AI code looks clean. It passes every linter. The bugs hide in logic, not formatting. It's the well-dressed con artist of code.
- Check the assumptions. AI assumes things about your system that aren't true. "This will always be an array." Will it? Will it really? Have you met your users?
- Test the boundaries. Null inputs, empty strings, concurrent requests, large payloads. AI writes for the common case. Your users will find the uncommon case within six minutes of deployment.
- Question the architecture. Just because AI suggested a pattern doesn't mean it fits. It suggested the pattern because it saw it in ten thousand repos. Nine thousand of those repos are abandoned.
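The "will it always be an array?" check deserves a concrete answer: normalize at intake instead of trusting the shape downstream. A sketch (`asArray` is illustrative):

```typescript
// Some APIs return a bare object for one result and an array for many.
// Normalize once at the boundary so downstream code can stop guessing.
function asArray<T>(value: T | T[] | null | undefined): T[] {
  if (value == null) return [];
  return Array.isArray(value) ? value : [value];
}
```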
The productivity trap
Here's where the industry loses the plot. The trap is measuring productivity in lines of code. AI can produce thousands of lines per hour. That's not productivity. That's a landfill.
Real productivity is: how fast can you go from user need to working, deployed, maintainable feature? AI accelerates every step of that pipeline. But the pipeline itself, the understanding, the designing, the reviewing, the deploying, the getting paged at 2 AM because your "AI-accelerated" feature is eating 40GB of RAM? That's still engineering.
The engineers who thrive with AI aren't writing more code. They're thinking more clearly about what code should exist. Most code shouldn't.
Ashutosh Makwana
10+ years engineering. AI-native since 2022. Building things that think.
