The most useless interview question in software hiring is quickly becoming "Do you use AI?" Almost everyone answers yes now, and that is the problem.
A junior developer answers yes because Copilot finishes their imports and explains TypeScript errors. A founder answers yes because an agent built a convincing demo over the weekend. A senior engineer answers yes because Claude Code helped finish a migration after reading half the repo and running tests. None of those people are doing the same job.
One is using AI as autocomplete. Another is using it as a prototype machine. A third is running a controlled agent workflow with context, constraints, review, and rollback. Somewhere in the middle is the expensive vibe coder who generates a lot of code and owns very little of it. HR sees one checkbox; production sees the difference.
The AI-native vs AI-powered developer debate is not really about terminology. The actual question is not whether a developer uses AI, but how they structure work around it.
AI-native is about architecture, not vibes
IBM describes AI-native systems as products, companies, or workflows designed from the ground up with AI as a core component, not bolted on later as a feature. The same definition is useful for thinking about developers.
An AI-powered developer uses AI inside an existing workflow. They still work mostly the old way: read ticket, inspect code, write code, run tests, open PR. AI helps with snippets, explanations, boilerplate, regex, test cases, and the occasional second opinion. A strong AI-powered developer is faster, clearer, and less blocked than before, but the loop is still human-led at almost every step.
An AI-native developer redesigns the loop itself. The agent is no longer just answering questions; it becomes part of the execution system. It reads the repo, edits files, runs commands, reports failures, receives constraints, retries, and sometimes works in the background while the developer is reviewing another branch. OpenAI describes Codex as a coding agent that can read, edit, and run code, including cloud tasks running in parallel.
The job changes underneath. The developer is no longer only writing implementation details; they are designing context, defining boundaries, selecting tasks, checking diffs, shaping tests, and deciding when the agent is allowed to continue. Less typing, more orchestration. Less "write this function" and more "here is the contract, here are the files, here is what must not change, prove it with tests."
It sounds cleaner than it feels in practice. The agent will fix one bug by creating three smaller ones. It apologizes with the emotional confidence of a support chatbot and then repeats the same mistake in a different file. It can produce a beautiful refactor that compiles locally and quietly breaks an edge case nobody wrote a test for. A good AI-native developer expects all of that and builds the workflow around it.
The four buckets hiring teams keep mixing together
I would not use these labels as formal titles. Please do not open a job post for "Senior UNC Developer" unless you want confused applicants and one very funny Slack thread. As a mental model, though, the buckets are useful.
A traditional developer may not use AI in the normal coding loop at all. They rely on docs, search, IDE autocomplete, tests, debugger, code review, and accumulated scar tissue. This is still a valid way to build software, and some of the best engineers I know are conservative with tools because they work in systems where mistakes are expensive.
An AI-powered developer keeps the old loop and adds AI acceleration on top. They ask for explanations, generate small helpers, convert formats, draft tests, or explore library behavior. The tool makes parts of the workflow faster without owning the shape of the task.
An AI-native developer builds the loop around agents. They think in prompts, repo context, task decomposition, acceptance criteria, test gates, diffs, and rollback. The agent does more of the mechanical work, so the human has to be better at framing, reviewing, and stopping.
Then there is the UNC developer: Unplugged, No-Claude. UNC is not an insult; it is a ridiculous label for a real category. This developer may use no coding agents at all, or only use them lightly, and still ship with docs, local autocomplete, search, and stubbornness. Sometimes they are slower. Sometimes they are the person who saves production because they actually understand the system.

The mistake is treating all four as the same candidate, just because all four can answer "yes" to "Do you have AI experience?"
The token bill is not always waste
Token usage is one of the easiest places to misunderstand AI-native work. A simple AI-powered workflow sends a small prompt and gets a small answer back. "Write a helper function." "Explain this TypeScript error." "Draft a test for this component." The context is tiny.
Agentic development is a different shape. The model needs the task, the conversation so far, relevant project files, tool outputs, errors, test results, and sometimes summaries of earlier attempts. Anthropic's context-window docs describe context as the model's working memory, and Claude Code's usage docs explain the practical version: each turn can include the conversation, project context such as files the tool has read, and the new prompt. Long sessions carry that history forward, which is where cost and context limits begin.
So yes, the token bill goes up. Sometimes that bill is waste because the developer pasted half the repo into chat, let the session wander through unrelated problems, left the wrong model on for routine work, and asked the tool to "try again" until everybody lost the will to live.
Other times the token bill is the receipt for context. Reading files costs context. Running a long debugging session costs context. Keeping the agent aware of constraints costs context. Asking it to compare an implementation against existing patterns costs context. Good AI-native work is not free just because fewer human keystrokes are visible.
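To make the cost shape concrete, here is a rough back-of-the-envelope sketch. Every token count below is an invented assumption for illustration, not a measurement from Claude Code, Codex, or any other tool; the point is the shape, not the numbers.

```typescript
// Rough, illustrative model of context growth in an agent session.
// All numbers are assumptions made up for this example.

interface Turn {
  label: string;
  newTokens: number; // prompt + files read + tool output added this turn
}

const oneShot: Turn[] = [{ label: "write a helper function", newTokens: 300 }];

const agentSession: Turn[] = [
  { label: "task + project instructions", newTokens: 2_000 },
  { label: "agent reads 6 source files", newTokens: 12_000 },
  { label: "test run output + failure trace", newTokens: 3_000 },
  { label: "retry with constraints", newTokens: 1_500 },
  { label: "final diff + review notes", newTokens: 2_500 },
];

// Each turn resends the accumulated history plus the new content,
// so the total sent grows much faster than the session length.
function totalTokensSent(turns: Turn[]): number {
  let history = 0;
  let sent = 0;
  for (const turn of turns) {
    history += turn.newTokens;
    sent += history; // the whole running context goes back to the model
  }
  return sent;
}

console.log("one-shot prompt:", totalTokensSent(oneShot)); // 300
console.log("agent session:", totalTokensSent(agentSession)); // ~72,500
```

The exact figures are fiction, but the shape is what matters when you ask a developer where the tokens went: a repo-aware session legitimately costs orders of magnitude more than a one-shot prompt, and the useful question is whether that context was doing work.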
The question is not whether a developer uses many tokens. The better question is whether they know what those tokens are buying. If they can explain why the agent needed the files it read, when they cleared context, what they compacted, which model they used for which step, and where they reviewed the output manually, the cost is part of a real workflow. If they cannot explain any of that, they are probably just an expensive vibe coder.
Vibe coding is not the same thing as AI-native engineering
Vibe coding has a place. For prototypes, internal throwaways, demos, scripts, and "I need to see if this idea has legs by Friday," a loose AI loop can be perfectly useful. Production systems are a different game.
The problem with vibe coding is not that the tool generates code; it is that the human stops owning the result. The diff arrives, the UI looks right, the happy path passes, and nobody asks what changed in auth, data access, error handling, permissions, observability, or rollback.
AI-native engineering is almost the opposite. It can move fast, but the workflow has more structure, not less. The developer defines what the agent is allowed to touch, keeps context clean, runs tests that matter rather than tests that are convenient, and reviews the diff like someone who will be woken up when it breaks. Claude can apologize for the same bug forty times, but production still needs one human owner.
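As one concrete illustration of what "limits on what the agent touches" and "tests as gates" can look like, here is a minimal pre-merge check sketched in TypeScript for a Node project. The allowlisted paths, branch name, and commands are assumptions you would adapt to your own repo, not a standard required by any tool.

```typescript
// Minimal pre-merge gate for agent-generated changes: a sketch, not a
// drop-in tool. Paths, branch names, and commands are assumptions.
import { execSync } from "node:child_process";

const ALLOWED_PREFIXES = ["src/billing/", "src/billing/__tests__/"]; // hypothetical task boundary
const run = (cmd: string) => execSync(cmd, { encoding: "utf8" });

// 1. The agent's diff may only touch allowlisted paths.
const changedFiles = run("git diff --name-only main...HEAD")
  .split("\n")
  .filter(Boolean);

const outOfBounds = changedFiles.filter(
  (file) => !ALLOWED_PREFIXES.some((prefix) => file.startsWith(prefix))
);

if (outOfBounds.length > 0) {
  console.error("Agent touched files outside the task boundary:", outOfBounds);
  process.exit(1);
}

// 2. Type checks and tests act as hard gates before any human review.
//    execSync throws on a non-zero exit, which fails the gate.
run("npx tsc --noEmit");
run("npm test -- --silent");

console.log("Gates passed. Now a human reads the diff.");
```

Whether something like this lives in CI, a git hook, or the agent's own instructions matters less than the fact that the boundary and the gates exist in writing before the code does.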
Why Staff-level judgment matters more, not less
When agents make code generation cheaper, the bottleneck moves. The scarce skill is no longer only writing code; it is deciding what should be built, where it belongs, how it fits the existing architecture, what risks it creates, and whether the generated change is safe enough to merge. That sits very close to Staff engineer territory.
StaffEng describes the Staff engineer role as a technical leadership track beyond Senior, separate from engineering management. In plain English, a Staff engineer is not just a faster Senior. They own ambiguous technical spaces, create direction, coordinate across teams, make trade-offs visible, and protect the system from clever local changes that create global damage.
AI makes that work more important because agents increase output speed. A junior developer with a strong agent generates a lot of code. A senior developer with a strong agent generates a lot of plausible code. A Staff-level developer asks whether the change should exist at all.
The boring questions become valuable again. What part of the system is the source of truth? What is the rollback path? Which invariant must not be broken? What happens if the model misunderstood the domain? Are we adding architecture or just moving mess into a new folder? Can the next developer maintain this without reading a 400-message agent transcript? None of those questions disappear because the code was produced quickly. They become the job.
How to interview for the difference
Asking "Do you use AI?" is almost useless now. Ask for a real workflow instead. A good candidate can walk through one recent AI-assisted change from ticket to merged code, explaining what they gave the agent, what they kept out of context, how they constrained the task, what the first output got wrong, how they tested it, and what they reviewed manually.
For an AI-powered developer, you are looking for good judgment around acceleration. Do they use AI to unblock themselves without outsourcing understanding? Can they explain the code afterward? Do they catch hallucinated APIs and subtle logic mistakes?
For an AI-native developer, you are looking for workflow architecture. Do they know how to split tasks for agents? Do they maintain project instructions? Do they use tests and type checks as gates? Do they know when to reset context? Can they explain why an agent should not touch a certain part of the system?
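Candidates who actually work this way can usually describe something like the task brief below before the agent starts. The shape is a hypothetical sketch, not a format any particular tool requires; what you are listening for is that the constraints existed in writing before generation began.

```typescript
// Hypothetical task brief a developer might write before delegating to an
// agent. None of these fields are mandated by any specific tool.
interface AgentTaskBrief {
  goal: string;
  allowedPaths: string[];
  forbiddenPaths: string[];
  invariants: string[]; // things the change must not break
  acceptance: string[]; // checks that must pass before human review
  stopConditions: string[]; // when the human takes over
}

const migrateDateHandling: AgentTaskBrief = {
  goal: "Replace moment with date-fns in the invoicing module",
  allowedPaths: ["src/invoicing/"],
  forbiddenPaths: ["src/auth/", "src/payments/"],
  invariants: ["Invoice due dates stay timezone-aware", "Public API types unchanged"],
  acceptance: ["npx tsc --noEmit", "npm test -- src/invoicing"],
  stopConditions: ["Any edit outside allowedPaths", "Two failed retries on the same test"],
};
```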
For a Staff-level AI-native developer, add architecture pressure. Give them a messy cross-cutting change and ask how they would decompose it, which parts they would delegate to an agent, which parts they would keep human-owned, where they would require review, and what failure would make them stop the agent and redesign the task. You will learn more from that conversation than from a generic coding test.
The practical hiring checklist
If you are hiring for AI development, replace the single "AI experience" checkbox with better questions:
- What AI tools do you use in your normal development loop?
- What tasks do you delegate to agents, and what tasks do you keep manual?
- How do you manage repo context, long sessions, and token usage?
- How do you review AI-generated diffs before merging?
- What tests or checks do you require before trusting agent output?
- Tell me about a time the agent was confidently wrong. How did you catch it?
- When would you avoid using AI for a task?
The answers do not need to sound trendy. If they do sound too trendy, I would actually listen more carefully. Strong answers are usually concrete and slightly unglamorous: files, tests, failed runs, bad assumptions, context cleanup, code review, and the moment they stopped the agent because it was heading in the wrong direction. That is the signal.
The label matters less than the operating model
AI-native, AI-powered, traditional, UNC. These labels are imperfect and will probably age badly, like most labels in tech. The distinction behind them still matters.
A developer who uses AI as autocomplete is not doing the same job as a developer orchestrating repo-aware agents through tests and review. A developer who avoids agents is not automatically weak. A developer who burns tokens is not automatically strong. Staff-level judgment is not required for every AI project, but it becomes more valuable as code output gets cheap and mistakes get quiet.
The real question is not whether someone "uses AI." It is whether they can make AI-generated work understandable, reviewable, maintainable, and safe enough for the system they are touching. Everything else is just a badge on LinkedIn.
If you are evaluating an AI-built prototype or hiring for agent-assisted development, start with a short technical review before the code reaches production. 2muchcoffee can help with that through the AI development page or the contact form.