In Part 1 of this series, we explored how AI can amplify both great engineers and those who lack the foundational understanding to verify what they’re creating. We looked at the rise of AI slop in software development and the systemic challenges it creates.
The response I’ve received since publishing Part 1 has been fascinating. Some readers expressed concern that I was advocating for banning AI tools entirely. Others worried that we’re heading towards a future where no one truly understands the systems we’re building. A few shared their own stories of encountering AI-generated code that looked impressive but fell apart under scrutiny.
Let me be clear about my position: I don’t have a problem with generative AI-based contributions. Not at all. What I do care about is a single condition: a contribution is welcome if and only if the person creating it genuinely understands what the code does, has properly verified it, and has ensured it meets the standards of the repository or organisation.
The challenge isn’t the tool; it’s ensuring we use it in ways that build capability rather than erode it. So how do we do that?
For CTOs and Technical Leaders: Recognising the Signs
As a fractional CTO, I’ve seen organisations at various stages of AI adoption. Some are thriving with these tools, whilst others are accumulating technical debt faster than ever before. The difference isn’t whether they’re using AI; it’s how they’re using it and who’s using it.
What Over-Reliance Looks Like
Here are the warning signs that suggest an engineer might be over-reliant on AI tools:
⚠️ Cannot explain their own code:
When asked to walk through their implementation, they struggle to articulate the approach or the reasoning behind key decisions. They might say “that’s just how the AI did it” or reference the AI’s choices as if they were external requirements rather than options they selected.
⚠️ Resistance to debugging without AI:
When something breaks, they immediately reach for the AI rather than attempting to understand the problem themselves. They’ve developed a dependency on the tool for troubleshooting rather than building their own debugging intuition.
⚠️ Code reviews reveal gaps in understanding:
During code review, when asked about edge cases, error handling, or design decisions, they cannot articulate why the code works the way it does. They may have tested that it works, but they cannot explain how it works.
⚠️ Patterns of “AI style” code:
Experienced reviewers can often spot AI-generated code. It might include overly verbose variable names, unnecessary abstraction layers, comments that state the obvious, or architectural patterns that don’t quite fit the existing codebase. When these patterns appear consistently from one contributor, it suggests they’re accepting AI suggestions wholesale.
⚠️ Volume without depth:
They produce a lot of code quickly, but the quality is inconsistent. There’s little evidence of iteration or refinement based on genuine understanding.
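To make the “AI style” pattern concrete, here’s a hypothetical before-and-after. The function and names are invented for illustration; the point is the contrast between wholesale-accepted verbosity and code shaped by human judgment.

```python
# Before: the kind of "AI style" code reviewers learn to spot -
# overly verbose names and comments that state the obvious.
def calculate_the_total_price_of_all_items_in_the_list(list_of_item_prices):
    # Initialise the total price variable to zero
    total_price_of_all_items = 0
    # Loop over every single item price in the list of item prices
    for individual_item_price in list_of_item_prices:
        # Add the individual item price to the running total
        total_price_of_all_items = total_price_of_all_items + individual_item_price
    # Return the final total price
    return total_price_of_all_items

# After: what selective human editing might reduce it to, with a comment
# that explains the "why" rather than the "what".
def total_price(prices):
    # sum() is sufficient here; no floating-point concern because prices
    # are handled as integer pence upstream (an assumption for this sketch).
    return sum(prices)
```

Neither version is wrong, but when the first style appears consistently across a contributor’s work, it suggests suggestions are being accepted wholesale rather than edited.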
What Productive Use Looks Like
By contrast, engineers who use AI productively show different characteristics:
✅ Clear articulation:
They can explain their code fluently, including discussing alternatives they considered and why they chose the approach they did. The AI might have suggested options, but the engineer made informed decisions.
✅ Selective adoption:
They cherry-pick useful suggestions from AI whilst rejecting or modifying others. You’ll see evidence of human judgment: refactored names, simplified logic, or comments that explain the “why” rather than the “what.”
✅ Progressive enhancement:
They use AI to accelerate routine work (boilerplate, test scaffolding, documentation) whilst investing their cognitive effort in the genuinely difficult problems. The ratio of AI-assisted to human-crafted code shifts based on the complexity of the task.
✅ Learning from AI:
When AI suggests an approach they haven’t seen before, they research it, understand it, and can articulate when it would or wouldn’t be appropriate. They’re not just copying; they’re learning.
✅ Appropriate self-correction:
When code review reveals issues, they can fix them independently and explain what was wrong. They don’t need to run back to the AI to understand their own code.
Creating an AI Contribution Policy
Several open-source projects and organisations have started codifying their expectations around AI-assisted contributions. These provide useful templates for organisations developing their own policies.
The Fedora Approach: Transparency and Accountability
The Fedora Linux distribution approved an AI-assisted contributions policy in October 2025 that strikes a sensible balance. Their key principles include:
Disclosure requirements: Contributors must disclose AI tool usage when AI generates a significant portion of their work unchanged. Minor assistance, like grammar checking, doesn’t require disclosure. This helps maintainers calibrate their review effort appropriately.
Full accountability: Contributors must take complete accountability for their submissions, regardless of how they were created. Using AI doesn’t diminish responsibility; if anything, it increases it, because the contributor must verify work they didn’t personally create.
Quality over automation: The policy explicitly warns against “AI slop” that creates excessive burden for reviewers. The goal isn’t to enable maximum contribution volume; it’s to maintain quality whilst allowing helpful AI assistance.
Research enablement: By requiring transparency, Fedora can study the actual impact of AI assistance on their project, gathering data to refine their policies over time.
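In practice, disclosure of this kind is often handled with a commit-message trailer. The exact wording below is illustrative rather than quoted from Fedora’s policy:

```
Fix off-by-one in pagination logic

The previous bound dropped the final page when the item count
was an exact multiple of the page size.

Assisted-by: <name of AI tool>
```

A standardised trailer keeps disclosure lightweight for contributors while making it machine-readable for maintainers and researchers.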
The Gentoo Counter-Example: When to Say No
Not every organisation has embraced AI contributions. Gentoo Linux banned AI-generated code entirely in April 2024, citing three major concerns: potential copyright infringement, quality control issues, and ethical considerations around AI’s environmental impact.
Whilst this might seem extreme, Gentoo’s reasoning is instructive. They recognised that their maintainer capacity couldn’t support the additional review burden that AI contributions would require. Rather than compromise their quality standards, they opted out entirely.
This is a legitimate choice, particularly for projects with limited maintainer resources. The policy includes a clause allowing it to be revisited as technology evolves, acknowledging that today’s decision might not be appropriate forever.
Crafting Your Organisation’s Policy
If you’re developing an AI policy for your organisation, consider these elements:
1. Define what requires disclosure: Where’s the line between “AI-assisted” and “AI-generated”? Using autocomplete in your IDE? Probably fine. Having AI write an entire module? That should be disclosed.
2. Establish verification standards: What does “properly verified” mean in your context? Should there be additional testing requirements for AI-generated code? More thorough code review?
3. Set accountability expectations: Make it clear that using AI doesn’t diminish responsibility. The person submitting the code is accountable for understanding it, maintaining it, and fixing issues that arise.
4. Create learning pathways: How will you help engineers develop the skills to verify AI-generated code effectively? What resources or mentorship will you provide?
5. Enable measurement: Can you track the impact of AI assistance on your team’s productivity, code quality, and technical debt? You need data to make informed decisions about these tools.
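One way to operationalise several of these elements at once is a pull-request template. This fragment is a hypothetical sketch, not a recommendation of specific wording:

```markdown
<!-- Hypothetical PR template fragment for an AI contribution policy -->
## AI assistance disclosure
- [ ] No AI assistance beyond IDE autocomplete
- [ ] AI-assisted; tool(s) used: ____
- [ ] AI-generated sections are identified in the diff or described below

## Verification (required for AI-assisted changes)
- [ ] I can explain every change in this PR without referring back to the tool
- [ ] New or updated tests cover the generated code's edge cases
- [ ] I have reviewed the generated code for security and licensing concerns
```

A template like this makes the disclosure line visible in every review and doubles as a reminder of the verification standard.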
The Verification Challenge: What Does “Trust, But Verify” Really Mean?
In Part 1, I referenced Jim Humelsine’s invocation of “trust, but verify.” But what does verification actually look like in practice?
A Verification Framework
Here’s a practical framework for verifying AI-generated code:
Level 1: Does it work?
- Does the code compile?
- Do the tests pass?
- Does it fulfil the stated requirements?
This is the minimum bar, and unfortunately, it’s where many people stop. But this only tells you that the code works today, in the specific scenarios you’ve tested.
Level 2: How does it work?
- Can you trace the logic from inputs to outputs?
- Do you understand each function’s purpose and how they interact?
- Could you explain this code to a colleague without referencing the AI?
This is where understanding begins. If you can’t pass Level 2, you shouldn’t be shipping the code.
Level 3: Why does it work this way?
- What design decisions were made, and why?
- What alternatives were considered (or should have been)?
- What assumptions does the code make?
- What are its limitations or edge cases?
This is the level at which you can truly maintain and extend the code. This is also where you catch architectural issues that might not be apparent from testing alone.
Level 4: What could go wrong?
- What are the security implications?
- How will this perform at scale?
- What happens when assumptions are violated?
- How will this interact with the rest of the system?
This is expert-level verification, and it’s where experienced engineers add the most value. AI can’t reliably perform this level of analysis because it requires deep contextual understanding of your specific system and business requirements.
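To make the four levels concrete, here’s a hypothetical review of a small AI-generated helper. The function and the annotations are illustrative, but the shape of the questioning is the point:

```python
# Hypothetical AI-generated helper under review.
def chunk(items, size):
    """Split a list into consecutive sublists of at most `size` elements."""
    return [items[i:i + size] for i in range(0, len(items), size)]

# Level 1 - does it work? Minimal happy-path checks.
assert chunk([1, 2, 3, 4, 5], 2) == [[1, 2], [3, 4], [5]]
assert chunk([], 3) == []

# Level 2 - how does it work? range(0, len(items), size) yields the start
# index of each chunk; the slice safely truncates at the end of the list.

# Level 3 - why this way? A list comprehension materialises every chunk in
# memory at once - fine for small lists, questionable for large streams,
# where a generator would be the better choice.

# Level 4 - what could go wrong? size=0 raises ValueError from range(),
# and a negative size silently returns [] - behaviour the reviewer should
# decide on explicitly rather than inherit by accident.
assert chunk([1, 2], -1) == []
```

The Level 1 checks are the only ones a test suite enforces automatically; Levels 2 to 4 live in the reviewer’s head, which is exactly why they’re the ones AI assistance can’t substitute for.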
Making Verification Part of Your Culture
The challenge is making verification feel like part of the development process rather than an unwelcome burden. Here are some approaches:
Pair programming with AI: When using AI to generate code, do it with a colleague. Talk through what you’re asking for and why. Review what comes back together. This turns AI usage into a learning opportunity rather than a solitary shortcut.
Explain-it-back sessions: In team meetings or code reviews, have the author of AI-assisted code explain their implementation to the team. This serves two purposes: it verifies understanding and it spreads knowledge.
Progressive disclosure: Start with small, low-risk uses of AI-generated code. As engineers demonstrate they can verify effectively at one level, they earn the trust to use AI for more complex tasks.
Shared learning: Create a channel or forum where team members share interesting AI suggestions they received, explaining why they accepted, modified, or rejected them. This builds collective wisdom about using these tools effectively.
Building Psychological Safety Around AI Use
One of the most insightful conversations I had whilst researching this topic was with Safia Abdalla, discussing psychological safety on The Modern .NET Show. She emphasised how important it is to create environments where people can admit what they don’t know and ask for help without fear of judgment.
This becomes critical when discussing AI use. If engineers feel they’ll be looked down on for using AI tools, they’ll simply hide it. If they fear admitting they don’t understand something, they’ll ship code they can’t maintain. Neither outcome is desirable.
Creating Safety Around Not Knowing
The goal isn’t to shame people for using AI or for having gaps in their knowledge. The goal is to create an environment where:
💭 It’s safe to say “I don’t understand this”:
Whether it’s AI-generated code or code written by a colleague from five years ago, admitting confusion should be welcomed, not punished. It’s the first step towards understanding.
💭 Using AI is openly discussed:
Teams should be able to talk about when and how they use AI tools, what works well, and what doesn’t. Transparency enables learning.
💭 Questions are encouraged:
During code reviews, asking “can you explain how this works?” shouldn’t feel like an attack. It should feel like genuine curiosity or a teaching opportunity.
💭 Mistakes are learning opportunities:
When AI generates problematic code that makes it through review, the response should be “what can we learn from this?” rather than “who should we blame?”
The Role of Senior Engineers
Senior engineers and technical leads play a crucial role in establishing this culture. Safia spoke about how certain people in her open-source career excelled at “creating psychological safety in open source spaces so that people could present their ideas.”
When it comes to AI usage, senior engineers can model healthy behaviours:
- Being transparent about their own AI usage
- Sharing examples where AI led them astray and how they caught it
- Asking genuine questions about code rather than making assumptions about how it was created
- Praising good verification practices when they see them
- Offering to pair with junior engineers who are learning to use these tools effectively
The message should be clear: we want you to use tools that make you more effective, and we want to help you develop the judgment to use them wisely.
When AI Should Be a Crutch (And When It Shouldn’t)
Not all uses of AI tools are equal. There are scenarios where heavy reliance on AI is appropriate, and others where it’s actively harmful to your development as an engineer.
Appropriate Heavy Use
✅ Boilerplate and scaffolding:
Setting up project structure, configuration files, standard CRUD operations. These are well-understood patterns where AI can save significant time without requiring deep expertise to verify.
✅ Documentation generation:
First drafts of API documentation, README files, or code comments. These should always be reviewed and edited, but AI can handle the initial structure well.
✅ Test case generation:
AI can suggest test scenarios you might have missed. You still need to verify they’re the right tests, but AI excels at “what about this case?” thinking.
✅ Code translation:
Converting code from one language to another, especially for similar paradigms (e.g., Python to JavaScript). The logic exists; you’re just changing syntax.
✅ Exploring unfamiliar domains:
When researching a technology you’re learning, AI can provide explanations and examples. You’ll still need to verify and understand them, but it’s a reasonable learning tool.
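The “what about this case?” value of AI-suggested tests might look like this in practice. The function and the suggested cases are hypothetical; note that each suggestion still passes through human judgment before being kept:

```python
# Hypothetical function being tested.
def normalise_email(raw):
    """Lower-case and trim an email address for comparison purposes."""
    return raw.strip().lower()

# Cases the author wrote first.
assert normalise_email("Alice@Example.COM") == "alice@example.com"

# Cases an AI assistant might suggest - each reviewed by a human to
# confirm it tests behaviour we actually want:
assert normalise_email("  bob@example.com  ") == "bob@example.com"  # surrounding whitespace
assert normalise_email("") == ""                                    # empty input
assert normalise_email("NO-AT-SIGN") == "no-at-sign"                # keep or reject? a human decision
```

The last case is the interesting one: the AI can propose it, but only a human can decide whether silently normalising a malformed address is acceptable behaviour.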
Problematic Heavy Use
⚠️ Core business logic:
The unique parts of your system that embody your business requirements shouldn’t be wholesale AI-generated. This is where human understanding matters most.
⚠️ Security-critical code:
Authentication, authorisation, data validation, encryption; these require careful consideration of edge cases and attack vectors that AI may not consider.
⚠️ Architectural decisions:
Choosing patterns, designing APIs, structuring systems; these require understanding of your specific context and constraints.
⚠️ Performance-critical code:
When performance matters, you need to understand what the code actually does at a low level, not just trust that AI made it “fast.”
⚠️ Learning fundamentals:
If you’re trying to learn a new language, framework, or concept, having AI write all your code prevents you from developing genuine understanding.
Measuring Impact: Are We Building Better Engineers?
As a fractional CTO, I’m often asked to help organisations measure the impact of their engineering practices. With AI tools, the temptation is to measure output: lines of code written, features shipped, velocity increased.
But these metrics miss the point. The question shouldn’t be “are we shipping faster?” It should be “are we building more capable engineers who happen to ship faster?”
What to Measure
Consider tracking these indicators:
Code review feedback patterns: Are AI-assisted PRs requiring more revisions than human-written code? Are the same issues appearing repeatedly, suggesting lack of learning?
Maintenance burden: Six months after shipping AI-assisted code, how much effort does it take to modify or debug? Is it proportional to the complexity, or is it harder than it should be?
Engineer growth: Are team members developing new skills and deeper understanding, or are they becoming more dependent on AI over time? Can they handle increasingly complex tasks independently?
Knowledge distribution: Is knowledge spreading across the team, or are we creating silos where only the AI “knows” how certain systems work?
Technical debt trends: Is AI helping you pay down technical debt by accelerating refactoring, or is it creating new debt through mediocre code that’s “good enough”?
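If your disclosure policy standardises a commit trailer (the `Assisted-by:` name here is an assumption, not a standard), even a small script can put a number on the AI-assisted share of your history, which you can then correlate with the indicators above:

```python
def ai_assisted_share(git_log_text, trailer="Assisted-by:"):
    """Given git log output with commits separated by a marker line,
    return the fraction of commits carrying the disclosure trailer."""
    commits = [c for c in git_log_text.split("===COMMIT===") if c.strip()]
    if not commits:
        return 0.0
    assisted = sum(1 for c in commits if trailer in c)
    return assisted / len(commits)

# Produce the input with, for example:
#   git log --pretty=format:'===COMMIT===%n%B'
sample = """===COMMIT===
Fix pagination bug

Assisted-by: some-ai-tool
===COMMIT===
Refactor config loader
"""
# ai_assisted_share(sample) -> 0.5
```

This only measures disclosure, not quality, but it gives you the denominator for questions like “do AI-assisted commits need more follow-up fixes?”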
The Long View
The real test of AI’s impact on your organisation won’t be apparent this quarter or even this year. It will become clear when you ask:
- Can your team maintain and extend the systems you’ve built?
- Are your engineers more capable than they were a year ago?
- Can they work effectively when AI tools aren’t available or appropriate?
- Is your technical debt manageable, or is it growing faster than your capacity to address it?
If the answers to these questions are positive, you’re using AI well. If not, you may be trading short-term velocity for long-term capability.
Practical Recommendations
Let me close with some concrete recommendations for different roles:
For CTOs and Engineering Managers
Develop an AI contribution policy that balances enabling productivity with maintaining quality. Look at Fedora’s approach as a starting point.
Make verification expectations explicit. Don’t assume people know what “properly verified” means in your organisation.
Invest in code review capacity. AI-assisted contributions may require more thorough review, especially initially.
Create learning pathways for engineers to develop the skills needed to verify AI-generated code effectively.
Measure the right things. Focus on capability growth and maintainability, not just velocity.
Model transparency. If you use AI tools yourself, be open about it. Show your team what responsible usage looks like.
For Technical Leads and Senior Engineers
Establish psychological safety around AI usage. Make it okay to admit when you don’t understand something.
Share your own AI usage patterns. Show your team how you use these tools productively and when you choose not to use them.
Offer pair programming sessions focused on effective AI usage and verification.
Make “explain it back” a standard part of code review for AI-assisted contributions.
Celebrate good verification practices when you see them. Make it clear that understanding matters, not just output.
For Individual Engineers
Be honest with yourself about whether you understand the code you’re shipping. If you can’t explain it, don’t ship it.
Use the verification framework: Does it work? How does it work? Why does it work this way? What could go wrong?
Learn from AI suggestions. When AI proposes something you don’t understand, take time to research and understand it before using it.
Start small. Use AI for low-risk tasks whilst you develop your verification skills, then progressively take on more complex uses.
Ask for help when you need it. Using AI doesn’t mean you have to understand everything instantly on your own.
Remember: you’re accountable. The AI won’t get called at 2am when something breaks. You will. Make sure you can handle it.
The Goal: Amplification, Not Replacement
Matthew Skelton was right that AI can amplify great engineers. Our challenge is ensuring it helps create great engineers in the first place.
The tools aren’t going away. The AI capabilities will only improve. But the fundamental truth remains: software engineering is still about humans representing intent in executable form and taking responsibility for the systems we create.
AI can help us write code faster, explore solutions more broadly, and automate tedious work. But it cannot replace understanding, judgment, and accountability. Those remain distinctly human responsibilities.
The organisations that will thrive in this new era aren’t those that generate the most AI-assisted code. They’re the ones that use AI to build more capable, more confident, more thoughtful engineers who happen to be more productive.
That’s the amplification we should be aiming for.
This is Part 2 of a two-part series on AI amplification in software development. Read Part 1 to explore the amplification paradox and the rise of AI slop.
