In this article
- Why the first 30 days predict the next 12 months
- Week 1: what should be done before the first review call
- Week 2: first working software in your hands
- Weeks 3-4: quality, communication, and the first release
- The 30-day review: what to assess and when to walk away
- What Wednesday delivers in the first 30 days
- Frequently asked questions
A mobile development agency that has not shipped working software to at least an internal test environment by the end of week two is, in Wednesday's experience, already four to eight weeks behind - and the slippage started in week one. That gap compounds. By month three, it looks like a missed roadmap. By month six, it looks like a vendor relationship that needs to end. The setup phase is where most engagements succeed or fail - not because of technical capability, but because of what gets done, documented, and agreed before the first line of new code is written.
Key findings
- Working software in your internal test environment by the end of week two is the single most predictive signal of a healthy engagement. Agencies that miss this threshold rarely recover the lost time.
- The first 30 days surface the real communication pattern, not the sales pitch. If structured weekly updates are not established by week two, they typically never get established at all.
- A named engineer assigned to your account - not a team, a person - is a prerequisite for accountability. Without one, there is no single owner when something goes wrong in month three.
- A 30-day structured review with defined pass and walk-away thresholds is the most effective contract clause most enterprise buyers never include. Add it before you sign.
Why the first 30 days predict the next 12 months
The first 30 days are predictive because they reveal whether the agency is running a managed engagement or a project. A managed engagement has a named delivery lead, a documented plan broken into weeks, and a communication rhythm that holds regardless of whether everything is going well. A project has a deadline and a Slack channel.
Most enterprise mobile development engagements that fail do not fail because the engineers were incapable. They fail because the structure around the engineers was never built. The client assumed the agency was running the engagement. The agency assumed the client was providing direction. Both assumptions held for 60 days. By month three, neither side could explain what the other expected.
The first 30 days are when those assumptions either get replaced with documented agreements or get baked in permanently. An agency that does not establish a weekly update cadence in week one will not establish it in week eight. An agency that does not assign a named delivery lead in the first two weeks will not assign one when you ask for it in month four.
You can read the next 12 months in the first 30 days. The signals are not subtle.
Week 1: what should be done before the first review call
By the end of week one, five things should be done - not in progress, not scheduled, done. This is the minimum for a mobile app development agency operating at a professional level.
Environments are set up and documented. Your internal test environment exists, the engineers have access, and the setup is documented somewhere your team can find it. This is not a technical achievement. It is proof that the agency can execute straightforward tasks without a two-week delay.
The existing app has been reviewed and documented. If the agency is taking over an existing product, they should have reviewed the app by end of week one and produced at least a brief summary of what they found - the architecture, the quality, and the risks they see going in. An agency that has not reviewed the product by end of week one is not managing the engagement; they are waiting to be told what to do.
A named engineer is assigned to your account. Not a team. A person. You should know their name, their role, and how many other accounts they are currently running. If the answer to that last question is more than two, ask the agency to explain how they will maintain responsiveness.
A communication rhythm is agreed and in place. Weekly update, format, and owner are confirmed. The first weekly update is scheduled. This takes thirty minutes to set up. An agency that has not done it by end of week one is not going to do it in week four.
A 30-day plan is shared, broken down by week. Not a general engagement roadmap - a week-by-week breakdown of what the agency expects to deliver in the first 30 days. This document is what you use at the 30-day review to assess whether the engagement is on track.
If any of these five items are missing at end of week one, raise them in writing before week two begins. An agency that responds quickly and closes the gap is recoverable. An agency that explains why they could not complete the setup is telling you something about how the engagement will run.
Week 2: first working software in your hands
Week two ends with working software in your internal test environment. This is the threshold that separates agencies running structured engagements from agencies managing activity.
Working software means something you can open on a device, tap through, and observe. It does not mean a design mockup, a prototype, or a demo recorded against a staging environment. It means the engineers have shipped a build to your internal test environment and you have received it.
The build does not need to be complete or polished. For a new feature, it might be a single screen with real data flowing through it. For a takeover engagement, it might be the existing app running cleanly in your environment with one documented change applied. The specific scope matters less than the fact that something has shipped.
The reason this matters so much is that shipping to an internal test environment requires the agency to have completed every infrastructure task in week one. If environments are not set up, the build cannot ship. If the existing product has not been reviewed, the engineers do not know what they are building against. Week two's deliverable is a forcing function for week one's. An agency that misses week two's threshold almost always has a week one problem underneath it.
What to do when week two's deadline passes without a build: ask the delivery lead, by name, for a specific date when the build will arrive and what is blocking it. A one-week delay is recoverable. A two-week delay rarely is - the lost ground is almost never made up by month three.
Not sure what your current agency should have delivered by now? Wednesday's delivery team can run a 30-minute review of your engagement and tell you where it stands.
Weeks 3-4: quality, communication, and the first release
Weeks three and four are when an engagement either establishes its operating rhythm or reveals that it does not have one. Three things should be completed before the 30-day mark.
Quality checks are completed and documented. The agency should have run at least one structured quality review of what they have shipped - automated tests, a manual review against your acceptance criteria, or both. The output should be a document you can read without a technical background: here is what we tested, here is what passed, here is what did not pass and why. An agency that has no quality documentation at the 30-day mark is not running a quality process; they are waiting for you to find bugs.
The weekly update rhythm is working. By week three, you should have received at least two structured weekly updates. Each should include what shipped, what is next, and anything that is blocked. If the updates are showing up but not covering these three elements, ask the delivery lead to revise the format. If the updates are not showing up, the communication pattern is already broken.
At least one release has reached a real audience. For most engagements, this means an internal test group - field team leads, internal stakeholders, or a handful of power users. It does not mean a public release. But it does mean that the software has been in the hands of someone other than the engineering team and that feedback has been collected. An agency that reaches the 30-day mark without a release to any audience has not completed the first delivery cycle.
One secondary item worth checking in weeks three and four: the agency should have flagged at least one risk or dependency they did not know about at the start of the engagement. Every real engagement surfaces something unexpected. An agency that has not flagged anything unexpected by week four is either working on a trivially simple product or not looking closely enough.
The 30-day review: what to assess and when to walk away
The 30-day review is not a check-in. It is a structured assessment against the plan the agency produced in week one. Run it with your delivery lead and, if possible, one member of your internal team who has been close to the engagement.
What to assess. Compare the week-by-week plan from week one against what actually shipped. Note which items were completed, which were not, and whether the agency communicated the gaps in advance or disclosed them at the review. Assess the quality documentation: is there evidence the agency is testing their own work, or are you expected to find issues? Review the weekly updates from weeks two through four: are they consistent in format, timely, and specific? And confirm that the named engineer assigned in week one is still the person running the engagement.
Thresholds that should trigger a conversation. If working software did not reach your internal test environment until week three or later, ask the delivery lead to explain the delay and what will change. If the weekly updates were inconsistent or arrived after you chased them, ask for a process change in writing. If the week-one plan and the week-four reality have significant gaps that were not flagged until the review, ask the delivery lead to explain how they will catch similar gaps in month two.
Thresholds that should trigger a walk-away. No working software in your internal test environment by the end of week four is a walk-away signal. No named delivery lead who can speak to the specifics of your engagement is a walk-away signal. A pattern of weekly updates that arrive only when chased is a walk-away signal. These are not recoverable with a process change in month two. They are indicators of how the agency operates, and they will repeat.
Build the 30-day review into your contract before you sign. Define the pass criteria, the conversation thresholds, and the walk-away thresholds. An agency that resists contract language specifying a structured 30-day review is telling you something about their confidence in what they deliver.
What Wednesday delivers in the first 30 days
This is the actual checklist Wednesday uses on every engagement. It is not an aspiration. It is the agreement between Wednesday and every client from day one.
By end of day two. Named engineer assigned and introduced by name. Internal test environment set up and documented. Kickoff call completed with a written summary of agreements and open items distributed within 24 hours.
By end of week one. Existing app reviewed and documented - architecture, quality, and risks. Week-by-week 30-day plan shared with the client. Communication rhythm confirmed: weekly update format, day, and owner. Access and permissions completed for all systems the team needs.
By end of week two. Working software shipped to the client's internal test environment. First weekly update delivered on the agreed day and in the agreed format. Any risks or blockers identified in the existing product flagged in writing.
By end of week three. Quality documentation completed for everything shipped in weeks one and two. A second weekly update delivered. At least one piece of client feedback collected and either incorporated or logged with a rationale for why it was not.
By end of week four. First release to an internal audience completed and feedback collected. 30-day review completed against the week-one plan, with a written summary of what shipped, what did not, and the plan for month two. Month-two plan shared before the 30-day review meeting ends.
Every item on this list is observable. You do not need a technical background to assess whether it happened. That is the point. A mobile app development agency's first-30-days performance should be legible to the person who signed the contract, not just the person managing the technical integration.
Wednesday's delivery team has run this 30-day framework across 50+ enterprise mobile engagements. A 30-minute call can help you set the right expectations before your next engagement starts - or diagnose where your current one stands.
Frequently asked questions
About the author
Praveen Kumar
Technical Lead, Wednesday Solutions
Praveen is a Technical Lead at Wednesday Solutions who specialises in React Native and enterprise AI solutions. He has built mobile apps for card network providers, healthcare platforms, and insurance products, and has shipped apps handling millions of transactions.