Case Study · February 9, 2026 · 6 min read

Building an Internal AI Platform from Scratch

200 engineers, 40 teams, one platform. Here's how it came together.

Note: This case study describes a real engagement. Company details have been anonymized at their request. According to Gartner, 82% of enterprises plan to build or expand internal AI platforms by 2025, making centralized AI infrastructure a critical investment for scaling organizations.

QUICK ANSWER

Build an internal AI platform around three pillars: centralized model access with cost controls, shared tooling and integrations, and organization-wide observability. According to Databricks' 2025 State of Data + AI report, companies with internal AI platforms see 4x higher developer adoption rates.

A late-stage SaaS company came to us with a problem: AI adoption was happening, but chaotically. Different teams using different tools, different APIs, different approaches. No shared infrastructure, no cost visibility, no governance.

They wanted a platform. Something internal teams could build on without reinventing the wheel every time. Research from Thoughtworks Tech Radar 2025 shows that internal platforms reduce time-to-first-AI-feature from 6 months to 2 weeks.

"Every company building AI features will eventually need a platform team. The question is whether you build it proactively or reactively after hitting scale problems."

— Will Larson, Author of Staff Engineer

The Starting Point

The initial audit confirmed classic organic growth: everyone solving their own problems, nobody solving the shared problems.

What We Built

Gateway Layer

A single internal API that wraps the external LLM providers. All requests go through it, giving the platform one place to enforce governance, track spend, and observe usage.

Teams don't call OpenAI directly anymore. They call the internal gateway. Takes 10 minutes to migrate existing code. Companies with centralized AI platforms achieve 60% lower per-query costs through batching and caching (a16z 2025).
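To make the migration concrete, here is a minimal sketch of what a team's call site looks like after switching to the gateway. The URL, the `X-Team` header, and the helper names are hypothetical, not the actual implementation from this engagement; the point is that provider keys and cost attribution live server-side.

```python
# Hypothetical sketch of the thin client teams use after migrating to
# the internal gateway. Endpoint and header names are illustrative.
import json
import urllib.request

GATEWAY_URL = "https://llm-gateway.internal/v1/chat"  # hypothetical endpoint

def build_request(team: str, model: str, messages: list) -> tuple:
    """Assemble the gateway request; the X-Team header drives cost attribution."""
    body = json.dumps({"model": model, "messages": messages}).encode()
    headers = {"Content-Type": "application/json", "X-Team": team}
    return GATEWAY_URL, headers, body

def chat(team: str, model: str, messages: list) -> dict:
    """Send a chat request through the gateway instead of the provider API."""
    url, headers, body = build_request(team, model, messages)
    req = urllib.request.Request(url, data=body, headers=headers)
    with urllib.request.urlopen(req) as resp:  # provider keys stay server-side
        return json.load(resp)
```

Swapping a direct OpenAI call for `chat("search", "gpt-4o", messages)` is the whole migration, which is why it fits in ten minutes.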

Shared RAG Infrastructure

One vector database. One embedding pipeline. Multiple namespaces for different teams' data.

Teams that need RAG get a namespace and an API. They don't need to understand pgvector or choose embedding models. They push documents, they query, it works.
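The namespace API can be sketched like this. The real platform used pgvector and a shared embedding pipeline; in this illustrative, self-contained version a token-overlap score stands in for vector similarity so the example runs without external services, and the class name is hypothetical.

```python
# Illustrative sketch of the namespace-scoped RAG API: teams push
# documents into their namespace and query it, without touching the
# vector database or choosing an embedding model themselves.
from collections import defaultdict

class RagStore:
    def __init__(self):
        self._docs = defaultdict(dict)  # namespace -> {doc_id: text}

    def push(self, namespace: str, doc_id: str, text: str) -> None:
        """Teams push documents into their own namespace."""
        self._docs[namespace][doc_id] = text

    def query(self, namespace: str, question: str, k: int = 3) -> list:
        """Return the k doc ids in this namespace most relevant to the question.

        Token overlap is a stand-in for the embedding similarity the real
        pipeline would compute.
        """
        q = set(question.lower().split())
        scored = sorted(
            self._docs[namespace].items(),
            key=lambda item: len(q & set(item[1].lower().split())),
            reverse=True,
        )
        return [doc_id for doc_id, _ in scored[:k]]
```

Because every call is scoped to a namespace, one team's documents never leak into another team's results.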

Prompt Registry

A versioned store for prompts. Teams can share prompts, track changes, and roll back when something breaks.

This also enabled A/B testing prompts in production. Ship two versions, measure which performs better, promote the winner.
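A registry with versioning, rollback, and deterministic A/B bucketing can be sketched in a few lines. The class and method names are hypothetical and the real registry was backed by a database, not process memory, but the mechanics are the same.

```python
# Minimal sketch of a versioned prompt registry with rollback and
# A/B selection. Names are illustrative, not the engagement's API.
import hashlib

class PromptRegistry:
    def __init__(self):
        self._versions = {}  # name -> list of prompt texts (v1 at index 0)
        self._active = {}    # name -> active version number

    def register(self, name: str, text: str) -> int:
        """Store a new version and make it active; returns the version number."""
        self._versions.setdefault(name, []).append(text)
        version = len(self._versions[name])
        self._active[name] = version
        return version

    def get(self, name: str) -> str:
        """Fetch the currently active version."""
        return self._versions[name][self._active[name] - 1]

    def rollback(self, name: str, version: int) -> None:
        """Point the active marker back at an earlier version when something breaks."""
        self._active[name] = version

    def ab_get(self, name: str, challenger: int, user_id: str) -> str:
        """Deterministically split users between the active version and a challenger."""
        bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 2
        v = challenger if bucket else self._active[name]
        return self._versions[name][v - 1]
```

Hashing the user id makes the split sticky: the same user always sees the same variant, which keeps the A/B measurement clean.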

Evaluation Framework

Standard tooling for building and running eval sets. Teams define their test cases, the platform runs them nightly, dashboards show quality over time.

Before this, most teams had no automated quality testing. Now they can't ship without passing their eval suite.
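The shape of that gate can be sketched as follows. The case format, the substring grader, and the 90% threshold are all illustrative assumptions, not the platform's actual contract; real suites would plug in richer graders.

```python
# Sketch of the eval gate: teams define cases, the platform runs them,
# and a ship is blocked when the pass rate falls below a threshold.
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class EvalCase:
    prompt: str
    must_contain: str  # a simple substring check stands in for richer graders

def run_suite(model: Callable[[str], str],
              cases: List[EvalCase],
              threshold: float = 0.9) -> Tuple[float, bool]:
    """Run every case against the model; return (pass_rate, ship_allowed)."""
    passed = sum(1 for c in cases
                 if c.must_contain.lower() in model(c.prompt).lower())
    rate = passed / len(cases)
    return rate, rate >= threshold
```

Running this nightly and charting `rate` over time is what turns "quality" from a vibe into a dashboard.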

The Rollout

We didn't migrate everyone at once. Phased approach:

Weeks 1–2: Built the gateway and migrated two friendly teams. Fixed the obvious issues they found.

Weeks 3–4: Rolled out to five more teams. Added documentation based on their questions.

Weeks 5–8: Org-wide migration. Deprecated direct API access. Helped stragglers move over.

Ongoing: Added RAG and eval framework. These were opt-in. Teams adopted as needed.

Results

Six months in, the platform is the default path for AI work across the org. The platform team is now three internal engineers. They maintain it, add features, and help teams build on top of it.

What We'd Do Differently

Start with observability. We added logging late. Should have been day one. You can't improve what you can't measure.
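The day-one logging we wish we had is small. This is a sketch under assumed field names, not the logging we eventually shipped: wrap every gateway call so team, model, latency, and token counts land in one structured record you can aggregate later.

```python
# Sketch of per-request structured logging for gateway calls.
# Field names ("team", "latency_ms", "tokens") are illustrative.
import time

def log_call(log: list, team: str, model: str, call):
    """Invoke `call`, append a structured record to `log`, return the result."""
    start = time.perf_counter()
    result = call()
    log.append({
        "team": team,
        "model": model,
        "latency_ms": round((time.perf_counter() - start) * 1000, 2),
        "tokens": result.get("usage", {}).get("total_tokens", 0),
    })
    return result
```

With records like these in place from day one, cost dashboards and per-team rate limits become aggregation queries instead of archaeology.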

Over-communicate during migration. Teams that felt forced resisted. Teams that felt invited adopted enthusiastically. Should have done more internal marketing.

Build less upfront. The prompt registry was over-engineered initially. Simplified version would have shipped two weeks earlier.

Is This Right For You?

Internal platforms make sense when you have 50+ engineers and multiple teams building AI features. Below that, the overhead isn't worth it.

Signs you need a platform: duplicate infrastructure, cost surprises, security concerns, teams blocking on shared problems.

Signs you don't (yet): one team doing AI, still figuring out use cases, moving fast and breaking things intentionally.

Considering an internal AI platform? Let's talk about what it would look like for your org.
