What Internal Developer Platforms Should Actually Do
Most IDPs are built around enforcing consistency. The ones that actually work measure themselves by how fast teams can ship.
The question most platform teams ask when building an IDP is: how do we ensure consistency? The question they should be asking is: how do we help teams ship faster?
These are not the same question, and they lead to very different platforms.
Standardisation is a means. Acceleration is the goal. When the two get confused, you end up with a platform that optimises for uniformity across environments while developers are still waiting on the platform team to update a template, extend the golden path, or sign off on an edge case. Consistent, yes. Fast, no.
The wrong optimisation
Most IDPs are built by infrastructure and operations teams who have spent years dealing with the fallout of autonomy without guardrails: seventeen different Kubernetes configurations, six monitoring stacks, no agreed logging format, security gaps that only surface during audits. Their instinct is to eliminate that variation. That instinct is reasonable. But the platform they build reflects their pain, not the developer's problem.
The result is an IDP designed around enforcement. Consistent golden paths. Mandatory pipelines. Gated approvals for anything outside the standard. The developer experience becomes: here is the prescribed way to do things, and here is the queue you join if you need anything else.
Standardisation delivered on its own terms. Acceleration did not make it into the requirements.
What acceleration means as a design principle
An IDP optimised for acceleration has one primary success metric: time from intention to running change.
Not "number of standards adopted." Not "percentage of services on the golden path." Not "reduction in configuration drift." Time from when a developer has a change ready to when it is running in a real environment, with clear feedback on whether it worked.
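The metric is concrete enough to compute from CI/CD logs. As a rough sketch, assuming a hypothetical event export with "change ready" and "running" timestamps (the field names and sample data here are illustrative, not any specific tool's schema):

```python
from datetime import datetime, timedelta

def lead_time(change_ready: datetime, running: datetime) -> timedelta:
    """Time from 'change ready' (e.g. final commit or merge)
    to 'running in a real environment' with feedback available."""
    return running - change_ready

# Hypothetical events pulled from pipeline logs.
events = [
    {"ready": datetime(2025, 6, 2, 9, 15), "running": datetime(2025, 6, 2, 9, 42)},
    {"ready": datetime(2025, 6, 2, 11, 0), "running": datetime(2025, 6, 2, 14, 30)},
]

durations = sorted(lead_time(e["ready"], e["running"]) for e in events)
median = durations[len(durations) // 2]
print(f"median time to running change: {median}")  # 3:30:00 for the sample
```

Tracking the median (rather than the mean) keeps one pathological deploy from masking how the typical change flows through the platform.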
DORA's 2025 research isolates this precisely. The platform capability most correlated with a positive developer experience is not governance, not security scanning, not compliance visibility. It is "getting clear feedback on the outcome of my tasks." The fastest-moving teams are the ones that know quickly whether something worked or failed, and can act on that signal without raising a ticket.
That is an acceleration design principle. Build the feedback loop first. Everything else is secondary.
What the data shows
The evidence on IDPs that work is unambiguous. DORA found that teams with high-quality internal platforms see individual productivity improve by 8% and team productivity by 10%. More striking: 71% of teams using IDPs can deploy on-demand or multiple times per day, compared to 43% of teams without one.
But DORA is equally clear on what breaks these outcomes. An inefficient adoption process, one with spurious automation and unintended handoffs, can reduce delivery speed by 8% and cut reliability by up to 14%. Standardisation that adds steps does not just fail to help. It actively slows teams down.
The distinction is between a platform that removes decisions developers should not need to make, and one that adds decisions they did not ask for.
The AI proof point
The 2025 DORA report adds a dimension that makes platform quality urgent rather than aspirational. The finding is direct: when platform quality is high, AI adoption produces strong, measurable gains in organisational performance. When platform quality is low, AI adoption produces essentially nothing at the organisational level.
Individual developers using AI tools do generate more output. DORA's AI research shows a 21% increase in task completion and a 98% increase in pull request volume among AI users. But that individual throughput disappears into deployment and testing bottlenecks downstream. The AI-generated code queues behind the same platform that was already slow.
If your IDP is a compliance layer, AI fills the queue faster. If your IDP is an acceleration layer, AI compounds the gains of every team that uses it. Platform quality has always mattered. In an AI-assisted workflow, it determines whether the investment pays off at all.
What an acceleration-first IDP looks like in practice
A few patterns distinguish platforms built around acceleration from platforms built around enforcement.
One golden path, done completely. The advice from DORA and teams that have shipped effective platforms is consistent: do not try to solve everything at once. Pick the most common workflow, the one developers do most often and that causes the most friction, and solve it end to end. Not partially. Not with a queue for the edge cases. Completely. Trust compounds from there. As we covered in the first piece in this series, platforms that try to cover CI, observability, security, and cost management simultaneously usually do none of them well.
Self-service success, not self-service availability. There is a difference between offering a self-service button and measuring whether that button works without intervention. The metric that matters is the self-service success rate: how often does a developer provision what they need without eventually needing a human? A portal with a 40% success rate is a form with a fallback queue, not self-service. As the preceding post on self-serve infrastructure covers in detail, real self-service requires opinionated defaults, automatic permission scoping, and documented escape hatches.
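The self-service success rate is straightforward to derive if the portal records which requests completed unaided and which escalated to a human. A minimal sketch, assuming a hypothetical audit-log export (the event schema and outcome labels are illustrative):

```python
from collections import Counter

# Hypothetical provisioning events from the portal's audit log.
# "completed" = fully self-service; "escalated" = a human had to step in.
events = [
    {"request": "postgres-db",  "outcome": "completed"},
    {"request": "staging-env",  "outcome": "escalated"},
    {"request": "s3-bucket",    "outcome": "completed"},
    {"request": "dns-record",   "outcome": "completed"},
    {"request": "gpu-node",     "outcome": "escalated"},
]

counts = Counter(e["outcome"] for e in events)
success_rate = counts["completed"] / len(events)
print(f"self-service success rate: {success_rate:.0%}")  # 60%
```

The important design choice is counting escalations against the platform even when the human eventually resolved the request: from the developer's perspective, that provision was not self-service.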
Cognitive load as a first-class metric. Research from DORA and DX consistently links cognitive load to both developer burnout and output quality. Cognitive load here means: how many things does a developer need to know, find, decide, or ask about in order to ship a change? A platform that reduces those decisions is accelerating. One that adds them, even with good intentions, is not.
Voluntary adoption as the fitness test. The test is the one raised in the first piece in this series: if the platform were optional tomorrow, would developers choose it? Mandatory adoption hides the gap between the platform's self-image and its actual utility. Platforms that win on voluntary adoption do so because they make the right path faster than any alternative.
Forge's POV: acceleration is the product
Forge is built around the premise that deployment should require zero understanding of the hosting layer. Git push. Branch deploys. Instant preview environments. Clear feedback on build status. No ticket to get a staging slot. No platform team to ask about environment availability.
That is an acceleration design, not a compliance design. The developer makes decisions about their code. The platform makes decisions about everything else, consistently, within documented bounds.
The Forge developer platform applies the same principle to web infrastructure that the best IDPs apply to backend systems: remove the decisions developers should not need to make, give them clear feedback on the ones they do make, and get out of the way.
An IDP that achieves that is more useful than one that enforces consistency. More importantly, it is the only kind developers will voluntarily keep using once they have a choice.