In this article
- Why the onboarding period is where most outsourcing relationships lose time
- Week 1: What the team needs from you before they can build anything
- Week 2: How to run the first working session that sets the rhythm
- Month 1: The milestones that confirm the team is calibrated
- The single biggest onboarding mistake
- What a team that has done this 50 times does in the first week
In Wednesday's experience, engagements that are fully set up by the end of week two ship their first update to real users by week four. Engagements that take six weeks to set up typically ship their first update in month three. That gap of roughly two months is the difference between a vendor relationship that builds confidence and one that breeds doubt before it has a chance to succeed.
You have signed with a new agency. The contract is done. The kick-off call is scheduled. What happens in the next 60 days determines whether the relationship delivers or stalls. Most of the risk lives in decisions made, or not made, in week one.
Key findings
Engagements fully onboarded by end of week two ship their first real update to users by week four. Engagements that take six weeks to onboard typically reach that same milestone in month three.
The four most common causes of slow starts are not vendor capability failures; they are access failures. When system access, design files, decision authority, or a written definition of done is not granted in week one, each missing item adds weeks to the engagement, not days.
The single biggest onboarding mistake is starting feature work before "ready to ship" has been defined in writing. When the vendor interprets "done" differently from the client, the gap only surfaces at the first review — at which point rework has already consumed the margin.
A team that has onboarded 50 times does not wait for the client to hand over access. They send a specific list — by system, by permission level, by person who needs to grant it — in the first 24 hours after the contract is signed.
Why the onboarding period is where most outsourcing relationships lose time
The decisions made in week one compound for months. An outsourced team that has system access, a clear definition of done, and a named point of contact on the client side by day five is a team that can ship in week two. A team missing any of those three things will spend weeks two and three discovering the gap, asking for the things they need, and waiting. That wait feels like a vendor problem. It is almost always a setup problem.
The onboarding period is where trust is established or not established. A client who sees working software in week one has evidence. A client who sees a planning document in week one has a promise. Evidence creates a very different working relationship than a promise does, and the working relationship that forms in the first four weeks rarely changes.
There is also a compounding effect that most CTOs underestimate. Every week of slow start is not a one-week cost. The team that builds momentum early ships faster in month two because they have already solved the integration points, established the feedback rhythm, and learned how your product works. The team that spends three weeks in setup starts building that knowledge in week four — which means month two looks like what month one should have been. You are not just losing the weeks of delay. You are losing the compound output that comes from a team operating at full speed.
Week 1: What the team needs from you before they can build anything
Week one is about access, not planning. The team can plan independently. What they cannot do without you is get into the systems they need to build.
Four categories of access must be in place by day three of the engagement:
System access. Every environment the app runs in, every test account, every third-party service the app touches — the new team needs credentials and permissions before they can evaluate what they are building against or write code that talks to real systems. This is the access most clients underestimate. It is not one set of credentials. It is typically six to ten distinct systems, each with its own admin who needs to grant access. Start the list before the kick-off call. Send it in the first 24 hours.
Design files. The team needs the current designs in whatever tool your organization uses. Not a PDF export. Not screenshots. The live files, with edit or comment access, so they can inspect spacing, see component states, and flag gaps before they start building to incomplete specs. Design gaps discovered during development cost five times what they cost to catch in review.
A decision-maker, named and available. Every outsourced engagement has a client-side dependency list that grows faster than expected. Design questions that need sign-off. Product decisions that require context. Third-party integrations that behave differently than documented. The team needs one person on your side who can answer or escalate within four hours during your business day. Not a committee. One person, one response-time commitment, named before the engagement starts.
The first feature, scoped in writing. Not the full backlog. The first feature the team will build, described in enough detail that a new engineer with no context can understand what it does, what it does not do, and who signs off on it. One page of written scope prevents three weeks of scope interpretation.
Week 2: How to run the first working session that sets the communication rhythm
The communication rhythm for the entire engagement is set in week two. How this first working session runs is how every session will run. Get it right here and you will not have to fix it in month two.
The working session in week two is not a status update. It is the first review of working software. The team should come to it with a build you can open on a device, a list of the decisions they made while building it, and a list of the decisions they need you to make before they can proceed.
Run the session in this order. Open the build first — not a screen recording, a live device. Walk through what the team built against the scope you agreed on in week one. Ask where it matches and where it does not. Ask what the team changed from the original spec and why. This is the moment when you establish whether the team can make good judgment calls or whether they need to be directed on every decision. Both are valid working styles. You need to know which one you have.
After the build review, move to the decision list. The team should have two to four product or design decisions that they cannot resolve without client input. Work through them in the session. Document the decisions and who made them. This documentation habit prevents the most common source of late-stage rework: the client thought the team was building one thing, the team was building another, and neither has a record of when the decision was made.
End the session by agreeing on the format and timing of the weekly update. Written, not just call-based. Delivered by end of business on a day that works for both sides. Covering three things: what shipped, what is blocked or at risk, and what decisions the client needs to make before the next session. Agreeing on this format in week two means you will not have to ask for it in week six.
Month 1: The milestones that confirm the team is calibrated — and what to do if they are not
At day 30, you should be able to answer yes to four specific questions. If you cannot, you have a calibration problem — and month two is the last point where it is easy to fix.
Has the team shipped working software at least three times? Week one, week two, and week three or four. Not three builds of the same screen. Three distinct pieces of work that represent genuine forward progress. A team shipping three times in month one is calibrated to delivery. A team that has shipped once and has been in "active development" since is not.
Has the team surfaced at least one blocker or risk in writing? Real engagements have blockers. A team that has not surfaced any in 30 days is either sailing through unusually smooth conditions or is not communicating what they are actually experiencing. Ask directly. If the answer is that everything has been smooth, ask for examples of the decisions they made independently and why. A team doing real work makes real decisions. Those decisions should be visible to you.
Have you had at least one scope conversation that resulted in a written decision? In a healthy engagement, the scope shifts slightly in month one as the team learns more about the product and the client learns more about the team's working style. That shift should be documented. If you have had no scope conversations in 30 days, either the original scope was unusually precise or the team is not raising the questions it should be.
Does the weekly update format match what you agreed on in week two? If the team agreed to write updates in a specific format and that format has drifted, address it now. The update format is the proxy for the communication standard. If the team is not holding to a format they agreed to, they are not holding to the agreement. Name the drift specifically, in writing, and ask for confirmation that the updates will return to the agreed format next week.
If any of these four milestones is missing at day 30, address it in writing to the delivery lead. Name what was agreed. Name what is missing. Set a specific date by which you expect it to be corrected. A delivery lead who responds with a concrete plan is worth working through the correction. A delivery lead who responds with explanations is telling you something about how the rest of the engagement will go.
The single biggest onboarding mistake
The single biggest onboarding mistake is starting the first feature before "ready to ship" has been defined in writing.
This mistake costs more time than any other in the first 60 days. Here is why it happens. The contract is signed. The kick-off is done. There is energy on both sides to move quickly. The team starts building. The client is pleased that things are moving. Six weeks later, the first feature comes up for review and the client says "this is not what we agreed." The team says "this is exactly what the scope described." Both are right, because the scope described what to build but not what "done" looks like.
The definition of done is not the feature specification. The specification says what the feature does. The definition of done says: what environment does it need to work in, what devices does it need to be tested on, what edge cases are in scope, what does the sign-off process look like, and who has authority to approve it. A feature can fully match its specification and still require three rounds of revision because the definition of done was never established.
Write the definition of done before the team writes a line of code for any feature. It should be one page. It should be agreed on by both sides. It should be specific enough that two different people reading it would arrive at the same answer for whether a given build meets it.
Teams that have onboarded many clients know to ask for the definition of done explicitly. If your new vendor did not ask, write it yourself and send it. It will save weeks.
What a team that has done this 50 times does in the first week that a less experienced team doesn't
An experienced outsourced team does not wait for the client to organize the onboarding. They run it.
In the first 24 hours after the contract is signed, an experienced team sends a structured access request. Not a verbal list on the kick-off call. A written document, organized by system, specifying the exact permission level needed and the name of the person who typically grants it. The client's job is to forward it to the right people, not to figure out what the team needs.
By day three, an experienced team has reviewed whatever documentation exists and returned a list of gaps. Not complaints about the documentation. A specific list: "We found specs for the dashboard and the profile screen. We did not find specs for the notification system or the offline behavior. Before we build those, we need either a spec or a decision session to define the requirements." The client knows exactly what is missing. The team does not slow down waiting to discover gaps mid-build.
By end of week one, an experienced team has shipped a working build of the first screen or flow. It does not cover the full feature set. It covers enough to demonstrate that the team can access the systems, build against the design files, and produce something real. The client can open it on a device. That fact alone changes the working relationship.
The difference between a team that has done this 50 times and one doing it for the third time is not technical skill. It is process fluency. An experienced team knows that most onboarding delays are access delays, that most rework comes from undefined "done," and that the communication rhythm set in week two persists for the full engagement. They structure the first week to prevent those problems before the client even knows to worry about them.
When you are evaluating vendors before signing, ask what their onboarding process looks like in concrete terms. Ask what they will deliver by end of week one. Ask what they send clients in the first 24 hours after the contract is signed. The answers tell you whether you are talking to a team that has built this process or a team that is figuring it out as they go.
Frequently asked questions
How long does it take to onboard an outsourced mobile development team?
A well-run onboarding takes two weeks, not six. By the end of week one, the team should have access to every system they need and a first build in your hands. By the end of week two, the communication rhythm should be established and the team should be shipping independently. Engagements that take four to six weeks to complete onboarding typically ship their first real update in month three rather than month one — the delay compounds every week.
What do I need to give an outsourced mobile team before they can start?
Four categories of access must be granted in the first 48 hours: system access (the environments the app runs in, the test accounts, the design files), decision access (the person who can approve designs and answer product questions without a two-day delay), context (documented requirements for the first feature, not just a backlog of ideas), and one definition of done for that first feature so the team knows what "finished" looks like before they write a line of code. Any one of these missing in week one extends the engagement by weeks, not days.
How do I know if my outsourced mobile development vendor is actually ramping up?
The clearest signal is working software in your hands by the end of week one. Not a status update. Not a design review. A build you can open on a phone and interact with. If a team cannot show you working software in week one, they are still in setup mode — and a team calibrated to planning rarely shifts to shipping without a direct conversation about what you expect. Ask at the start of the engagement: what will I be able to open on my phone by Friday of week one?
What is the most common reason outsourced mobile development engagements start slowly?
The most common reason is that "ready to ship" was never defined before the first feature was started. The vendor interpreted "done" one way, the client interpreted it another, and neither found out until the feature was demonstrated. The definition of done should be written and agreed before any work begins: what does the feature do, what does it not do, what environment does it need to work in, and who has authority to sign off on it. One page of agreement before week one saves a month of rework.
About the author
Mohammed Ali Chherawalla
Co-founder & CRO, Wednesday Solutions
Mac co-founded Wednesday Solutions and has shipped mobile apps used by more than 10 million people, written APIs that take over a billion calls a day, and architected systems that have driven hundreds of millions in revenue across fintech and logistics. He is one of the leading practitioners of on-device AI for enterprise mobile and the creator of Off Grid, one of the top on-device AI applications in the world. He now leads commercial strategy at Wednesday while staying close to architecture, AI enablement, and vendor evaluation for enterprise clients.