When Agile Isn’t Agile: How DORA Metrics Reveal What’s Really Broken


“We’re Agile, but it takes six weeks to get a simple change to production.”

The tech lead’s frustration was palpable. They had all the Agile ceremonies: daily standups, sprint planning, retrospectives, story points. The burndown charts looked perfect. The velocity was consistent.

Yet delivering value to customers felt like pushing boulders uphill.

Sound familiar? You’re not alone. I’ve seen dozens of teams discover they’re not as Agile as they thought. The good news? DORA metrics can show you exactly what’s broken, and more importantly, what to fix.

The Agile Illusion

Here’s the uncomfortable truth: most “Agile” teams are just doing Waterfall in two-week chunks. They’ve adopted the ceremonies but missed the essence: rapid delivery of value to customers.

The symptoms are everywhere:

  • Sprints that are really mini-waterfalls
  • “Done” that doesn’t mean “in production”
  • Retrospectives that produce no meaningful change
  • Velocity metrics that hide delivery problems
  • Standups that are status reports, not collaboration

But without data, any attempt to fix these issues is just shooting in the dark. You might reorganize teams, change sprint length, or add more ceremonies; all while the real problems remain hidden.

Enter DORA: Your Agile Reality Check

DORA (DevOps Research and Assessment) metrics measure what actually matters: how quickly and safely you deliver value to customers. Unlike story points or velocity, they are far harder to game. They reveal the truth.

The four key metrics:

  1. Deployment Frequency: How often you deploy to production
  2. Lead Time for Changes: Time from commit to production
  3. Time to Restore Service: How quickly you recover from failures
  4. Change Failure Rate: Percentage of deployments causing failures
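To make the four definitions concrete, here is a minimal sketch of how each could be computed from raw deployment and incident records. The data, field layout, and observation window are all illustrative, not a prescribed schema:

```python
# Hypothetical sketch: computing the four DORA metrics from deployment
# and incident records. Data and field names are illustrative only.
from datetime import datetime
from statistics import median

deployments = [
    # (commit_time, deploy_time, caused_failure)
    (datetime(2024, 5, 1, 9), datetime(2024, 5, 1, 15), False),
    (datetime(2024, 5, 2, 10), datetime(2024, 5, 3, 11), True),
    (datetime(2024, 5, 6, 8), datetime(2024, 5, 6, 12), False),
]
incidents = [
    # (started, restored)
    (datetime(2024, 5, 3, 11), datetime(2024, 5, 3, 13)),
]

days_observed = 7
deploy_frequency = len(deployments) / days_observed  # deploys per day
lead_times = [(d - c).total_seconds() / 3600 for c, d, _ in deployments]
median_lead_time_hours = median(lead_times)
change_failure_rate = sum(f for *_, f in deployments) / len(deployments)
restore_times = [(r - s).total_seconds() / 3600 for s, r in incidents]
mean_time_to_restore_hours = sum(restore_times) / len(restore_times)

print(f"Deployment frequency: {deploy_frequency:.2f}/day")
print(f"Median lead time: {median_lead_time_hours:.1f} h")
print(f"Change failure rate: {change_failure_rate:.0%}")
print(f"MTTR: {mean_time_to_restore_hours:.1f} h")
```

Whatever tooling you use, the point is the same: every metric falls out of two timestamps and a pass/fail flag per event.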

These metrics don’t care about your sprint ceremonies. They measure whether you’re actually agile.

What DORA Metrics Reveal About Your "Agile" Process

Let me show you how DORA metrics expose common Agile antipatterns:

The "Sprint Waterfall" Pattern

What you see: Perfect sprint completion, consistent velocity

What DORA reveals:

  • Deployment Frequency: Once per sprint (or worse)
  • Lead Time: 2-6 weeks

The reality: You’re batching work into sprint-sized waterfalls. Code sits “done” but not deployed, accumulating risk and delaying feedback.

The fix: Decouple deployments from sprints. Aim for multiple deployments per day, not per sprint.

The "Fear of Production" Pattern

What you see: Extensive testing phases, careful release planning

What DORA reveals:

  • Change Failure Rate: Still high despite all the caution
  • Time to Restore: Hours or days

The reality: Your careful process doesn’t prevent failures, it just makes them more painful when they happen.

The fix: Invest in automated testing, feature flags, and rollback capabilities. Make deployment boring, not scary.
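A feature flag is the simplest of these tools. Here is a minimal sketch (names and functions are hypothetical, not a real flag library): the new code ships to production dark, and flipping the flag, not redeploying, releases it to users.

```python
# Minimal feature-flag sketch (illustrative names, not a real library).
# Code deploys dark; the flag decides when users see the new behaviour.
FLAGS = {"new_checkout": False}  # flipped at runtime, e.g. via config

def new_checkout_flow(cart):
    return f"new:{len(cart)}"       # hypothetical new code path

def legacy_checkout_flow(cart):
    return f"legacy:{len(cart)}"    # safe default path

def checkout(cart):
    if FLAGS.get("new_checkout", False):
        return new_checkout_flow(cart)
    return legacy_checkout_flow(cart)

print(checkout(["book"]))       # legacy path while the flag is off
FLAGS["new_checkout"] = True    # "release" without a redeploy
print(checkout(["book"]))       # new path
```

Because the flag is just data, a bad release becomes a one-line rollback instead of a redeployment.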

The "Fake Definition of Done" Pattern

What you see: Stories marked complete, velocity looks great

What DORA reveals:

  • Lead Time: Weeks from “done” to production
  • Deployment Frequency: Monthly or worse

The reality: Your Definition of Done doesn’t include “running in production.” You’re optimizing for velocity theater, not value delivery.

The fix: Redefine “Done” to mean “delivering value to users.” Everything else is work in progress.

From Metrics to Action: The Improvement Playbook

Once you start measuring DORA metrics, patterns emerge quickly. Here’s how to translate those insights into Agile improvements:

Week 1-2: Establish Baseline

Before changing anything, measure where you are:

# Simple DORA tracking (add to your CI/CD pipeline)
- name: Record deployment
  run: |
    # Append an ISO-8601 timestamp, commit SHA, and environment
    echo "$(date -u +%FT%TZ),$(git rev-parse HEAD),production" >> deployments.csv

- name: Calculate lead time
  run: |
    # Hours between the deployed commit and this deployment
    COMMIT_TIME=$(git show -s --format=%ct HEAD)
    DEPLOY_TIME=$(date +%s)
    LEAD_TIME=$(( (DEPLOY_TIME - COMMIT_TIME) / 3600 ))
    echo "Lead time: ${LEAD_TIME} hours"

Don’t judge, just measure. You need honest baselines.
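Once the CSV accumulates a few weeks of rows, the baseline falls out of a few lines of script. A rough sketch, assuming each row is "timestamp,commit,environment" with an ISO-8601 timestamp; adjust the parsing to whatever format your pipeline writes:

```python
# Rough baseline report from a deployments.csv log. Assumes rows of
# "timestamp,commit,environment"; adjust to your pipeline's format.
import csv

def deployment_frequency(path, days):
    """Average production deployments per day over the observed window."""
    with open(path, newline="") as f:
        rows = [r for r in csv.reader(f) if r and r[2] == "production"]
    return len(rows) / days

# Example with a synthetic two-row file over a one-week window:
with open("deployments.csv", "w") as f:
    f.write("2024-05-01T10:00:00Z,abc123,production\n")
    f.write("2024-05-03T14:30:00Z,def456,production\n")

print(f"{deployment_frequency('deployments.csv', 7):.2f} deploys/day")
```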

Week 3-4: Identify Your Constraint

Look at your metrics. Which one is worst compared to industry standards?

Deployment Frequency Issues:

  • Elite: On-demand (multiple deploys per day)
  • High: Between once per day and once per week
  • Medium: Between once per week and once per month
  • Low: Fewer than once per month

If you’re deploying monthly but calling yourself Agile, start here.

Lead Time Issues:

  • Elite: Less than one hour
  • High: Between one day and one week
  • Medium: One week to one month
  • Low: More than one month

If code sits for weeks after being “done,” focus here.

Month 2: Target One Metric

Don’t try to fix everything. Pick your worst metric and experiment:

Improving Deployment Frequency:

  • Start with automated deployments to staging
  • Introduce feature flags for safer releases
  • Decouple deployments from releases
  • Make Friday deployments normal, not scary

Improving Lead Time:

  • Shrink batch sizes (smaller stories)
  • Implement continuous integration
  • Automate code reviews for common issues
  • Remove manual approval gates

Improving Time to Restore:

  • Practice rollbacks regularly
  • Implement proper monitoring
  • Create runbooks for common issues
  • Ensure everyone can deploy

Improving Change Failure Rate:

  • Increase automated test coverage
  • Implement progressive rollouts
  • Add smoke tests post-deployment
  • Review failed deployments in retrospectives
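Progressive rollout, from the list above, can be sketched as stable hash-based bucketing: each user lands deterministically in a bucket from 0 to 99, and raising the percentage widens exposure without a redeploy. This is purely illustrative; in practice a flag service handles this:

```python
# Sketch of hash-based progressive rollout (illustrative, not a real
# flag service). Each user gets a stable 0-99 bucket per feature.
import hashlib

def in_rollout(user_id: str, feature: str, percent: int) -> bool:
    """Deterministically place user_id in a bucket; expose if below percent."""
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < percent

# Ramp 1% -> 10% -> 50% -> 100%, watching failure rate at each step.
print(in_rollout("user-42", "new_checkout", 0))    # always False at 0%
print(in_rollout("user-42", "new_checkout", 100))  # always True at 100%
```

Hashing on feature and user together means each feature ramps independently, and a given user's experience stays consistent between requests.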

Month 3+: Adjust Your Agile Practices

Now that you have data, make informed changes to your Agile process:

Sprint Planning Changes:

  • Plan for continuous deployment, not sprint releases
  • Break stories into deployable increments
  • Include deployment tasks in estimates

Daily Standup Evolution:

  • Focus on “what’s blocking deployment?”
  • Celebrate production releases, not task completion
  • Discuss DORA metrics weekly

Retrospective Revolution:

  • Review DORA trends, not just feelings
  • Set metric improvement goals
  • Celebrate metric improvements

Definition of Done Transformation:

  • Must include “deployed to production”
  • Must include “monitoring verified”
  • Must include “rollback tested”

Real-World Transformation

A client’s journey from “Agile in name only” to true agility:

Starting Point:

  • Deployment Frequency: Every 3 weeks (end of sprint)
  • Lead Time: 18 days average
  • Time to Restore: 4 hours average
  • Change Failure Rate: 15%

After 3 Months of DORA-Driven Improvements:

  • Deployment Frequency: Daily
  • Lead Time: 2 days average
  • Time to Restore: 30 minutes average
  • Change Failure Rate: 5%

What Changed:

  • Automated deployments (removed approval gates)
  • Feature flags (decoupled deployment from release)
  • Smaller stories (1-2 day maximum)
  • “Done” means “in production”
  • Retrospectives focused on metrics

The Business Impact:

  • Features reach customers 9x faster
  • Bugs fixed before most users notice
  • Team morale dramatically improved
  • Actual agility, not just ceremonies

The Dashboard Connection

As I discussed in “From ‘Trust Us’ to ‘Look at This’: Building Engineering Dashboards That Boards Actually Understand”, these metrics need to be visible. But now they serve a dual purpose:

  1. Upward Communication: Show the board you’re improving
  2. Team Improvement: Drive Agile process changes

Your DORA dashboard becomes your Agile health monitor:

Metric                  Value
Deployment Frequency    📈 Daily
Lead Time               📊 2.3 days (↓ from 18)
MTTR                    ⚡ 30 min (↓ from 4 hours)
Failure Rate            ✅ 5% (↓ from 15%)

This isn’t just data for executives—it’s your Agile improvement roadmap.

Common Objections, Answered

“But we need sprints for planning!”: Plan in sprints, deploy continuously. They’re not mutually exclusive.

“Our compliance requires careful releases”: Automated compliance checks make deployments safer than manual reviews do.

“We’re not Netflix/Google/Amazon”: Neither were they when they started. Every improvement counts.

“This isn’t real Agile!”: Correct. It’s better. It’s Agile that actually delivers.

Your DORA-Driven Agile Transformation

Here’s your action plan:

This Week:

  1. Set up basic DORA tracking
  2. Share this article with your team
  3. Discuss which metric hurts most

This Month:

  1. Establish baselines for all four metrics
  2. Pick one metric to improve
  3. Run experiments in your Agile process
  4. Make metrics visible to the team

This Quarter:

  1. Improve at least one metric by 50%
  2. Adjust Agile ceremonies based on data
  3. Celebrate improvements publicly
  4. Plan next metric to target

The Bottom Line

You realized you’re not as Agile as you thought. That’s not failure, that’s awareness. Now you have a choice: continue with Agile theatre, or use DORA metrics to build genuine agility.

Real Agile isn’t about perfect ceremonies. It’s about delivering value to customers quickly and safely. DORA metrics show you the truth, and the truth will set you free from fake Agile.

Stop shooting in the dark. Measure what matters. Fix what’s actually broken.

Your customers are waiting.


Ready to transform your Agile process with data-driven insights? Schedule a consultation using the contact form below to build your DORA-driven improvement strategy.
