Why most AI engineer placements fail at week 8 (and how we fixed it)
Six years of running the same 30/60/90 framework on every engineer we place. What we learned, what we changed, and why we now treat placement as the start of the deal, not the end.
There’s a pattern we noticed about a decade ago. It was so consistent that we eventually built our entire delivery model around it.
Most engineering placements that fail don’t fail at week 1. They don’t fail at week 4. They fail somewhere between week 6 and week 10: the period after the engineer has been onboarded but before they’re truly integrated. The team has moved past the early curiosity phase. The novelty is gone. The engineer is now expected to contribute at the level the JD promised, and a small set of frictions starts to compound.
By the time anyone notices, it’s too late. The team has formed an opinion. The engineer has formed an opinion of the team. The relationship is already structurally broken, even if everyone is still polite in standup.
What’s frustrating about this is that almost every signal you need to predict it is visible by week 5. You just have to be looking.
This post is about what we look for, and the framework we built to make sure someone is always looking.
The shape of the failure
When a placement fails at week 8, the postmortem almost always surfaces one of a small number of patterns. We’ve cataloged them across 12+ active engagements and a decade of embedded delivery:
The hidden context gap. The engineer was technically capable on paper. The codebase walkthrough went fine. But there’s a layer of institutional knowledge (why a service is named what it’s named, why the test suite is structured the way it is, what an internal acronym means) that nobody documented and nobody thought to explain. Six weeks in, the engineer is making technically correct decisions that violate unwritten conventions, and the team is starting to think of them as someone who “doesn’t get how we work.”
The feedback vacuum. No one on the client team has been told they’re responsible for giving the engineer feedback. The engineer’s PRs get approved without comment. Their questions in Slack get reactji’d, not answered. By week 6 they have no idea whether they’re meeting expectations, and they start to either overdeliver in unhelpful ways or pull back. Both are signals the team eventually reads as “this isn’t working.”
The expectation drift. The engineer was hired for one role and is being asked to operate in another. This usually isn’t malicious; it’s just that the team’s needs evolved between the JD being written and the engineer arriving. By week 7 the engineer is being judged against work they were never told they would be doing. They don’t know it. The team doesn’t quite know it either. Everyone is just vaguely dissatisfied.
The integration ceiling. The engineer was treated as a contractor. They weren’t invited to architecture discussions. They weren’t included in the Slack channel where the real decisions get made. They got handed tickets and shipped them. Then at week 8 the team complained that the engineer “wasn’t proactive,” but proactivity requires context, and they were structurally denied it.
Notice what these have in common. None of them are technical. None of them would be solved by hiring “a better engineer.” They’re all design failures of the embed itself.
What we changed
About six years ago we stopped treating placement as the deliverable and started treating it as week zero of a structured 90-day process. The framework is deliberately boring. It is not innovative. What’s innovative is that we actually run it, on every single placement, with no exceptions.
Here is the structure.
Days 0 to 30: Orientation and context
The first thirty days are not about productivity. They are about building a model of how the team works. The engineer reads the codebase, but more importantly they sit through enough standups, planning meetings, and 1:1s to understand the political topology of the team. Who actually decides what gets built. Who has historical context that isn’t documented. Who is the unofficial reviewer everything has to pass through.
Concrete deliverables in this window:
- A codebase walkthrough led by the team’s most senior engineer, recorded.
- A team introduction round where each team member spends 15 minutes explaining what they own and what they care about.
- An expectations document, signed by both the engineer and the client lead, that names what success looks like in 30, 60, and 90 days.
- Day-one access to our internal AI playbook, the same one we update across 12+ engagements, so the engineer is not learning RAG patterns or eval design from first principles on company time.
The expectations document is the single most important artifact in the entire framework. It is the thing that prevents expectation drift. It gets revisited every 30 days.
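To make that less abstract, here is a minimal sketch of the artifact as a data structure. It’s illustrative only; the field names and the Python framing are ours for this post, not the actual template we use with clients.

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class ExpectationsDoc:
    """Illustrative model of the expectations document.

    One per placement, signed by both sides, revisited every 30 days.
    """
    engineer: str
    client_lead: str
    start_date: date
    # What success concretely looks like at each checkpoint.
    success_at_30: list[str] = field(default_factory=list)
    success_at_60: list[str] = field(default_factory=list)
    success_at_90: list[str] = field(default_factory=list)
    signed_by_engineer: bool = False
    signed_by_client_lead: bool = False

    def review_dates(self) -> list[date]:
        """The 30/60/90 revisit cadence, counted from day zero."""
        return [self.start_date + timedelta(days=d) for d in (30, 60, 90)]
```

The only load-bearing parts are the explicit success criteria per checkpoint and the two signatures. Everything else about the format is negotiable.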
Days 30 to 60: Contribution and feedback
By day 30 the engineer should be shipping production code. Not heroic work (that comes later), but real, merged, deployed contributions. The point is to surface friction early, when it can still be corrected.
This is also when we run our weekly tech lead check-ins. Once a week, a Density tech lead (ours, not the client’s) has a 30-minute private conversation with the placed engineer. The goal of that call is specifically to surface things the client team can’t see: confusion about expectations, friction with a particular team member, a sense that the role is drifting from what was scoped.
We’ve heard things in those calls that nobody on the client team knew. A backend engineer who was being pulled into design decisions and didn’t feel qualified. A senior who realized at week 5 that the team’s tech lead was ignoring her PRs. A junior who was being given too much autonomy and was too proud to ask for more guidance.
In every one of those cases, the issue was solvable at week 5. By week 12 it would have been a placement failure.
Days 60 to 90: Integration and ownership
By day 60 the engineer should be handed a feature or surface to own. Not a ticket, not a sprint of tickets, but a coherent piece of the product that they are accountable for, end to end. This is the test of whether the embed has worked. An engineer who has truly integrated takes the surface and runs with it. An engineer who hasn’t, even one who is technically excellent, will hesitate, ask too many clarifying questions, or quietly stay in execution mode.
This is also when the engineer starts being included in design discussions as a peer, not as a service provider. The shape of the engagement past day 90 is set in this window. Engineers who become peers stay for years. Engineers who stay in execution mode, in our experience, leave within 18 months.
What this costs us
The framework is expensive to run. Each engineer we place gets four hours per week of senior Density attention for the first 90 days, across the weekly check-in, async check-ins, expectation reviews, and the work of resolving frictions when they surface.
That’s economically indefensible if you treat staff augmentation as an arbitrage business. You can’t run this framework and also charge the lowest hourly rate in the market. We don’t try. We charge what the framework costs to run, plus a margin that lets us keep doing this for the next decade.
The economic argument is that this is cheaper, not more expensive, for our clients. A failed placement costs the client roughly six months of velocity, a quarter of leadership attention, and the cultural cost of a public failure. Avoiding two of those over the life of a five-year engagement pays for the framework many times over.
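If you want to sanity-check that claim, the back-of-envelope version looks like this. Every number below except the four hours per week is a hypothetical placeholder, not a real rate or cost from our engagements.

```python
# Back-of-envelope break-even. The only figure taken from this post
# is the ~4 hours/week of senior attention for the first 90 days.

FRAMEWORK_HOURS = 4 * 13        # 4 h/week over ~13 weeks
SENIOR_HOURLY_RATE = 250        # USD -- hypothetical placeholder
framework_cost = FRAMEWORK_HOURS * SENIOR_HOURLY_RATE  # ~$13,000

# A failed placement costs roughly six months of one engineer's
# velocity, plus leadership attention and cultural damage. Pricing
# only the velocity, conservatively:
ENGINEER_MONTHLY_COST = 20_000  # USD, fully loaded -- hypothetical
failed_placement_cost = 6 * ENGINEER_MONTHLY_COST       # ~$120,000

print(f"break-even ratio: {failed_placement_cost / framework_cost:.1f}x")
# -> roughly 9x under these placeholder numbers
```

Under those placeholder numbers, avoiding a single failure covers the framework roughly nine times over, which is why two avoided failures over five years is a comfortable margin rather than a close call.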
That’s why the metric we report on is forced replacements. We’ve had zero of them in the last six years. That’s the framework working.
What it doesn’t fix
There are things this process does not solve.
It doesn’t fix a misalignment between the client’s stated needs and their actual needs. If the client said they wanted a senior backend engineer but actually needed a tech lead, the framework will surface the gap, but the gap is theirs to close.
It doesn’t compensate for a fundamentally hostile team culture. We’ve had two cases over the years where a client team was structurally not ready to integrate an outside engineer. The framework surfaced the problem clearly within 60 days. We ended both engagements. Neither was the placed engineer’s fault.
It doesn’t replace the client’s responsibility to lead. The check-ins surface friction; they don’t resolve it. Resolution requires someone on the client side to act on what we surface. Most of the time they do. When they don’t, the framework gives us the data to have an honest conversation early enough to do something about it.
The real lesson
The single most important thing we’ve learned over six years of running this framework is that most placement failures are visible by week 5 and almost always avoidable by week 7, but only if someone is paying attention. Most providers stop paying attention the moment the offer letter is signed. We start paying attention at exactly that moment, and we don’t stop until day 90.
That is, in the end, the whole pitch. The engineers we place are good. So are the engineers a lot of other providers place. The difference isn’t the engineer. It’s whether anyone is in the room when something starts to go wrong at week 8.
We are.
If you’re considering an embedded engineering engagement and you want to talk about how this would work for your team, book a 30-minute discovery call. We’ll tell you honestly whether we can help, and if not, who to talk to instead.