I say some version of this to my team all the time:
Don’t use AI to outsource your knowledge. Use it to enhance and speed up what you already know how to do.
I keep repeating it because the temptation is obvious. The tools are fast. They are available all the time. They sound confident. They can produce a lot of output in a hurry. When you are tired, behind, or context-swamped, it is incredibly easy to let them take over parts of the job you should still be doing yourself.
That is the risk.
Not that the tools will write bad code. Of course they will sometimes write bad code. Humans do that too.
The real risk is that you slowly stop exercising the muscles that let you tell the difference.
That is a much more serious problem than a bad suggestion in a chat window. Bad suggestions can be rejected. Atrophied judgment is harder to fix.
The Failure Mode Is Becoming a Passenger
There is a certain kind of engineer who looks productive in an AI-heavy environment right up until you ask one hard follow-up question.
Why is this query structured that way?
Not sure, the model suggested it.
Why are we confident this retry logic is safe?
It looked right and the tests passed.
Why did we choose this abstraction boundary?
It seemed cleaner.
Why is the cache invalidation handled here instead of at the write path?
Silence.
That is what becoming a passenger sounds like.
You are still touching the wheel, but you are no longer really driving.
And to be clear, this is not a junior engineer problem. Senior people can drift into this just as easily, sometimes faster. Experience can create a false sense that your taste alone will catch everything. It will not. Taste matters, but taste without active technical contact turns into aesthetic preference with better vocabulary.
If you stop reasoning through the system yourself, eventually you stop noticing what the system is doing.
That is why I dislike the framing that AI "frees us from the boring parts" with no downside. Some repetitive work absolutely should be delegated. Good. Please do that. But a lot of what people call boring is actually where technical instinct gets reinforced.
Reading a diff carefully is not glamorous. Tracing a bug across three services is not glamorous. Working through a failing test instead of re-prompting until it goes green is definitely not glamorous. But that is where your map of the system stays accurate.
Skill Atrophy Does Not Announce Itself
The tricky part is that you do not feel yourself getting rusty in real time.
You feel faster.
You feel assisted.
You feel like you are clearing tickets at an impressive clip.
Meanwhile, some important things may be quietly slipping:
- Your first instinct is to ask for an answer instead of forming a hypothesis
- Your tolerance for reading unfamiliar code drops
- Your ability to reason about performance from the code alone gets weaker
- Your debugging loop starts with re-prompting instead of investigation
- You accept tests as proof even when you did not inspect what they actually protect
That is not acceleration. That is dependency wearing a productivity costume.
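The last item on that list is worth making concrete. Here is a minimal sketch, with hypothetical names throughout, of a test that stays green while protecting nothing, because it asserts against a mock of the very unit it claims to check:

```python
from unittest.mock import Mock

def apply_discount(price: float, code: str) -> float:
    """Hypothetical production code with a real bug: the rate is never applied."""
    discounts = {"SAVE10": 0.10}
    rate = discounts.get(code, 0.0)
    return price  # bug: should be price * (1 - rate)

def test_discount_green_but_useless():
    # This "test" mocks the unit under test and asserts against the mock,
    # so it passes forever regardless of what apply_discount actually does.
    fake_apply = Mock(return_value=90.0)
    assert fake_apply(100.0, "SAVE10") == 90.0

test_discount_green_but_useless()           # green
print(apply_discount(100.0, "SAVE10"))      # 100.0 -- the bug the test never touched
```

Nothing about this test is exotic. It formats cleanly, it runs, it passes. You only catch it by reading what it actually exercises.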
I am not making a moral argument here. I am making a practical one. The moment a novel failure shows up, borrowed competence gets exposed. The moment the generated code collides with a weird domain rule, a flaky third-party service, an ugly migration path, or a production-only concurrency issue, you are back to fundamentals whether you like it or not.
And if you have spent six months outsourcing the fundamentals, that moment is going to hurt.
Use AI After You Have a Point of View
One of the simplest ways to stay sharp is this: do not start with the model when the problem matters.
Start by thinking.
Not for an hour. Not to prove your purity. Just long enough to establish a point of view.
What do you think the bug is? What do you think the right shape of the solution is? What constraints matter? What part are you least sure about? What would you try if the tool vanished for the next thirty minutes?
That matters because AI is much more useful as a multiplier on direction than as a substitute for direction.
If you already have a hypothesis, you can use the tool to pressure-test it, accelerate implementation, compare alternatives, generate scaffolding, or fill in syntax you do not feel like typing.
If you do not have a hypothesis, you are mostly inviting the tool to think on your behalf.
That can feel productive, but it is a bad long-term trade.
I want engineers who can say, "Here is my current read on the problem, here are the likely failure points, and here is where I want help moving faster."
That person is using AI well.
I do not want engineers who say, "I pasted in the file and this is what it gave me."
That is not engineering leverage. That is surrender with better formatting.
The Habits That Keep You Dangerous
If you want to use these tools heavily without getting soft, you need habits that keep your instincts alive.
Nothing fancy. Just disciplined reps.
1. Trace at Least Some Bugs Manually
Do not let every debugging session begin and end in a prompt window.
Read the stack trace. Inspect the logs. Walk the request path. Check the state transitions. Reproduce the issue yourself before you ask for a summary.
You do not have to do this for every tiny paper cut. But if the problem is real, your brain should touch the system directly before delegation takes over.
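As a sketch of what that looks like in practice, a minimal reproduction can be a few lines that exercise the suspect path with the exact failing input. The function and payload here are hypothetical stand-ins:

```python
import json
import logging

logging.basicConfig(level=logging.DEBUG)

def parse_order(raw: str) -> dict:
    """Hypothetical stand-in for the code path the stack trace points at."""
    data = json.loads(raw)
    # Hypothesis from the logs: quantity sometimes arrives as a decimal string.
    return {"sku": data["sku"], "quantity": int(data["quantity"])}

# Step 1: reproduce with the exact payload from the failing request.
failing_payload = '{"sku": "A-100", "quantity": "3.5"}'
try:
    parse_order(failing_payload)
except ValueError as exc:
    # Step 2: you have now touched the failure yourself. Any explanation you
    # ask for next can be checked against what you just observed.
    logging.debug("reproduced: %s", exc)
```

The point is not the snippet. The point is that your brain formed and confirmed a hypothesis before anyone, human or model, summarized the bug for you.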
2. Read Diffs Like You Mean It
Generated code has a way of looking finished before it is correct. The formatting is clean. The comments sound confident. The tests appear thoughtful. None of that guarantees the behavior is right.
So read the diff.
Not just for style. Read it for assumptions. Read it for hidden coupling. Read it for duplicated logic, suspicious abstractions, and strangely convenient defaults. Read it as if you expect it to contain one subtle lie, because sometimes it does.
The goal is not paranoia. The goal is contact.
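To make the "strangely convenient default" category concrete, here is a hypothetical snippet of the kind a careful diff read should flag. The function is invented; the shared-mutable-default behavior is real Python:

```python
def fetch_users(ids, cache={}):
    # The convenient default is the subtle lie: Python evaluates `{}` once,
    # at definition time, so this cache is shared across ALL calls forever.
    results = {}
    for uid in ids:
        if uid not in cache:
            cache[uid] = f"user-{uid}"  # stand-in for a real lookup
        results[uid] = cache[uid]
    return results

fetch_users([1, 2])
fetch_users([3])
print(len(fetch_users.__defaults__[0]))  # 3 -- state leaked between unrelated calls
```

Skimmed for style, this looks finished. Read for assumptions, the default argument jumps out, because you were looking for the one thing that is too convenient to be free.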
3. Write Important Parts by Hand
Some code should still go through your fingers.
Boundary definitions. Core domain logic. Complex state transitions. Tricky queries. Recovery code. Anything where correctness depends on real understanding and not just pattern matching.
I am not saying every important function must be handwritten like a love letter from 2009. I am saying there is value in still doing difficult work yourself often enough to keep your instincts calibrated.
When you handwrite hard code, you are forced to notice the shape of the problem. You do not get to skip straight to the plausible answer.
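For state transitions specifically, here is a sketch of the kind of code worth pushing through your own fingers. The order states are hypothetical; the point is that every allowed edge in the table was a decision you made on purpose:

```python
# Explicit transition table: each edge here is a deliberate choice,
# not a pattern the generator found plausible.
VALID_TRANSITIONS = {
    "created":   {"paid", "cancelled"},
    "paid":      {"shipped", "refunded"},
    "shipped":   {"delivered"},
    "delivered": set(),
    "cancelled": set(),
    "refunded":  set(),
}

def transition(current: str, target: str) -> str:
    if target not in VALID_TRANSITIONS.get(current, set()):
        raise ValueError(f"illegal transition: {current} -> {target}")
    return target

transition("created", "paid")        # allowed
# transition("shipped", "refunded")  # raises: an edge you had to rule out yourself
```

Writing the table by hand forces the question "can a shipped order be refunded?" to pass through your head. That question is exactly the kind of thing a plausible-looking generated version lets you skip.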
4. Form a Hypothesis Before Asking for Help
This might be the highest-leverage habit of the bunch.
Before you ask AI why something is broken, write down your guess. Even if it is wrong. Especially if it is wrong.
That tiny pause trains you to stay in the problem. It keeps your diagnostic muscles engaged. It also gives you a much better basis for evaluating the answer you get back.
If the suggestion contradicts your hypothesis, great. Now you have something to compare. If you skip the hypothesis step entirely, you are much more likely to accept the first plausible explanation because it sounds coherent.
Coherent is not the same as true.
Assisted Engineering Versus Borrowed Competence
This is the distinction I care about most.
Assisted engineering looks like this:
- You understand the task
- You have a view on the solution
- You use AI to move faster
- You review the output with real judgment
- You can explain and defend the final result
Borrowed competence looks like this:
- You do not fully understand the task
- You delegate your way to a plausible artifact
- You rely on surface confidence as validation
- You cannot explain the tradeoffs
- You struggle the moment something deviates from the happy path
One of those scales your effectiveness. The other creates a thin shell of productivity around a shrinking core of understanding.
If that sounds harsh, good. It should.
Because the industry is going to reward visible output for a while before it catches up to the difference. Some people are going to look faster than they really are. Some teams are going to mistake throughput for capability. Some leaders are going to count lines, tickets, and turnaround time while the actual depth of the organization erodes underneath them.
Then something important will break.
And that is when everyone will suddenly care whether the team still knows how to think.
Keep the Tool, Keep the Edge
I am not interested in anti-AI chest-beating. That is mostly insecurity dressed up as principle. The tools are useful. We should use them. They can save time, reduce drudgery, explore options, and help strong engineers move faster.
But they should make you sharper, not softer.
That means using them from a position of knowledge. That means staying in contact with the system. That means keeping enough hands-on reps that your judgment stays grounded in reality instead of style.
Use AI to accelerate what you understand. Use it to reduce the boring parts. Use it to explore alternatives faster. Use it to pressure-test your thinking.
Just do not use it as a replacement for thinking.
Because once you outsource your instincts, you are not really speeding yourself up anymore.
You are just giving away the part of the job that made you valuable in the first place.