Writing
Board-Ready Mobile AI Pilot Scoping: The Complete 90-Day Framework for US Enterprise 2026
Week-by-week plan to go from board mandate to App Store in one quarter, with the board presentation structure at the end.
90 days. That is the window between a board AI mandate and the next board presentation where the answer needs to be something in the App Store, not something in development. Wednesday has run this program for US enterprise mobile clients across healthcare, logistics, and retail. The framework below is the working document - week by week, decision by decision, with the board presentation structure at the end.
Key findings
90 days from mandate to App Store is achievable for one well-scoped AI feature with the right vendor and a scope defined within the first two weeks.
The most common reason 90-day pilots fail is scope expansion after week four - projects that add a second AI feature mid-pilot rarely finish either.
App Store review for AI features averages 5-7 days when submission notes address AI review criteria explicitly; submit in week ten to protect the board presentation date.
Below: the complete week-by-week framework.
Why 90 days
The 90-day window is not arbitrary. It aligns with one board meeting cycle. A board that issues an AI mandate in Q2 expects a live result to present at the Q3 board meeting - not a progress report, not a prototype, not a roadmap. A feature in the App Store with users interacting with it.
90 days also aligns with how AI feature risk compounds. A project that cannot define scope in the first two weeks will not be in the App Store at week twelve. A project that discovers at week eight that it made the wrong on-device vs cloud decision has already lost the timeline by the time the mistake surfaces. 90 days works because every stage is designed to close the high-risk decisions early, before they can slip the deadline.
The framework assumes one AI feature. Not three. One. If the board mandate is to "add AI to the app," your job as the CTO or VP Engineering is to translate that into one specific, measurable feature before the project starts. This document helps you make that decision.
Weeks 1-2: Define the one feature
The goal of weeks one and two is a single written decision: which one AI feature will ship in 90 days. The output is a one-page feature brief, not a roadmap.
The feature selection criteria
Evaluate candidate features against four criteria:
User impact. Which AI feature solves the most significant problem for your highest-value user segment? If 60% of your users spend 30 minutes per session on a manual data entry task, document scanning that eliminates that task has higher impact than personalized content recommendations for a feature used by 10% of users.
Technical feasibility in 90 days. Is the feature buildable - from nothing to App Store - in 90 days with a team of three engineers? Features requiring new data pipelines, custom model training, or integrations with external systems that your organization does not control are higher risk. Features using existing structured data, well-established AI models, and infrastructure the app already has are lower risk.
App Store approval risk. Some AI features have higher rejection risk than others. Generative AI with user-visible output requires content filtering. Medical or financial AI requires specific disclaimer language. Document scanning and smart search have well-established App Store precedents and low rejection risk. If this is your first AI feature, start with a lower-risk feature type.
Measurability. Can you define a before-and-after metric? If you cannot articulate how you will measure success before the feature is built, the board presentation will not have a number. Every AI feature proposed for a 90-day pilot must have a pre-defined success metric: task completion time, error rate, user adoption rate, or a specific business outcome.
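The four criteria can be turned into a simple comparative score for the candidate list. A minimal sketch in Python — the candidate features, the 1-5 ratings, and the equal weighting are all illustrative assumptions, not something the framework prescribes:

```python
# Rate each candidate 1-5 against the four selection criteria.
# (For approval risk, a higher score means lower risk.)
CRITERIA = ("user_impact", "feasibility_90d", "approval_risk", "measurability")

def score(feature):
    """Average the four ratings; higher is better."""
    return sum(feature[c] for c in CRITERIA) / len(CRITERIA)

# Hypothetical candidates with hypothetical ratings.
candidates = {
    "document_scanning": {"user_impact": 5, "feasibility_90d": 4,
                          "approval_risk": 5, "measurability": 5},
    "content_recs":      {"user_impact": 2, "feasibility_90d": 3,
                          "approval_risk": 4, "measurability": 3},
}

best = max(candidates, key=lambda name: score(candidates[name]))
print(best)  # under these example ratings, document_scanning wins
```

The point of the exercise is not the arithmetic; it is forcing every candidate onto the same four axes so the week-two decision is defensible in writing.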
The feature brief
By end of week two, produce a one-page document that states: the feature, the user problem it solves, the success metric and its current baseline, and why this feature was selected over the alternatives you considered. This document becomes the scope control anchor for the rest of the project. Any request to add scope is evaluated against this document.
What to do if the board has already picked the feature
Boards sometimes specify the AI feature, not just the mandate. If the board has specified a feature, apply the four criteria above to assess feasibility and risk. If the specified feature fails feasibility or approval risk, document the assessment and take it back to the board before the project starts - not at week eight when the timeline has already slipped.
Wednesday runs 90-day AI pilots for US enterprise mobile apps. 30 minutes covers feature selection, feasibility, and timeline.
Weeks 3-4: Technical decisions
The goal of weeks three and four is making the two technical decisions that determine whether the project stays on timeline: on-device vs cloud AI, and data requirements.
The on-device vs cloud decision
This decision has the highest cost-of-error of any technical decision in the project. A wrong decision discovered at week eight means rebuilding the AI integration from scratch. Make it in week three.
The decision framework:
Use on-device AI when: the feature processes data with regulatory restrictions on cloud transmission (patient data subject to HIPAA, financial data subject to data residency requirements), the user base has a significant offline or low-connectivity segment, or the feature needs sub-500ms response times that cloud round-trip latency cannot reliably achieve.
Use cloud AI when: the feature requires model complexity beyond what current device hardware handles (typically above 1-2GB model size), the training data requires frequent updates that would be impractical to push as app updates, or the feature is used in settings where connectivity is reliable and compliance does not restrict cloud data transmission.
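The decision framework above can be sketched as a single function. The field names and the conflict branch are illustrative assumptions; the thresholds mirror the text (sub-500ms latency, 1-2GB model size):

```python
from dataclasses import dataclass

@dataclass
class FeatureProfile:
    regulated_data: bool      # e.g. HIPAA patient data, data-residency rules
    offline_users: bool       # significant offline/low-connectivity segment
    latency_budget_ms: int    # required response time
    model_size_gb: float      # model footprint
    frequent_model_updates: bool

def choose_runtime(p):
    # Hard constraints that push toward on-device
    if p.regulated_data or p.offline_users or p.latency_budget_ms < 500:
        if p.model_size_gb > 2:
            # On-device guarantees needed, but the model exceeds what
            # device hardware handles: this is a scope problem, not an
            # engineering problem. Surface it in week three.
            return "conflict: needs on-device guarantees but model too large"
        return "on-device"
    # Large or frequently retrained models favor cloud
    return "cloud"

print(choose_runtime(FeatureProfile(True, False, 800, 0.5, False)))  # on-device
```

Note the conflict branch: a feature that needs on-device guarantees but a cloud-sized model is exactly the kind of contradiction that should be caught in week three, not week eight.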
Document the decision and the reasoning. If the data inputs change - if the compliance review in week four reveals a constraint that was not anticipated - revisit the decision explicitly. Do not quietly adjust the implementation without updating the written decision.
Data requirements
By end of week four: confirm that the data the AI feature needs exists, is accessible, and is in a format the model can use. This is the most commonly underestimated requirement in AI feature delivery.
Common data problems discovered too late:
- The data exists but is in a legacy system that requires a new API to access (add four to eight weeks and $40,000-$80,000 if not already scoped).
- The data exists but is inconsistently structured across records (requiring data cleaning work that was not in scope).
- The data exists and is accessible, but has insufficient volume for the AI model to produce reliable outputs.
Confirm data access and quality in week four. If a problem is found, address it as a scope decision: add the data work and extend the timeline, or select a different feature that uses data that is already clean and accessible.
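The three data problems above are all mechanically checkable. A minimal week-four gate, sketched in Python — the record fields, the required-field set, and the 1,000-record volume threshold are assumptions for illustration, not a standard:

```python
def data_readiness(records, required_fields, min_volume=1000):
    """Return a list of problems; an empty list clears the week-four gate."""
    problems = []
    # Insufficient volume for reliable model output
    if len(records) < min_volume:
        problems.append(f"insufficient volume: {len(records)} < {min_volume}")
    # Inconsistent structure across records
    inconsistent = sum(1 for r in records if not required_fields <= r.keys())
    if inconsistent:
        problems.append(f"{inconsistent} records missing required fields")
    return problems

# Example: clean sample passes, sparse sample surfaces both problems.
clean = [{"id": i, "amount": 1.0} for i in range(1000)]
print(data_readiness(clean, {"id", "amount"}))  # []
```

Accessibility (the legacy-API problem) cannot be checked in code, but running even this much against a real data sample in week four surfaces the other two problems while they are still scope decisions rather than schedule slips.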
Compliance check
Week four is also the time to run the compliance check: does this AI feature, using this data, with this technical architecture, meet the regulatory requirements of your industry? For healthcare clients, this means confirming that the on-device vs cloud decision is consistent with HIPAA requirements. For fintech clients, this means confirming data handling with the compliance team before development begins, not after it is complete.
Weeks 5-8: Build phase
The build phase runs from week five to week eight. Four weeks is the target for a single, well-scoped AI feature with a team of three engineers.
What done looks like at the end of week eight
At the end of week eight, the feature is complete and passing internal testing. Not in final review. Not 90% complete. Feature-complete, with the success metric measurable against a test population.
To make this real: define "done" in writing at the start of week five. Done is not "engineers are satisfied." Done is: the feature works for 95% of the test cases defined in week one, the success metric is measurable in a test population, the App Store submission package (screenshots, description copy, privacy manifest entries) is drafted, and there are no open compliance review items.
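That written definition of done can live as a checklist the Friday review evaluates mechanically instead of by feel. A sketch — the status fields and the checklist items paraphrase the definition above, and their names are illustrative:

```python
# Each item maps a human-readable criterion to a check on project status.
DONE_CRITERIA = {
    "95% of week-one test cases pass": lambda s: s["pass_rate"] >= 0.95,
    "success metric measurable on test population": lambda s: s["metric_measured"],
    "App Store submission package drafted": lambda s: s["submission_drafted"],
    "no open compliance review items": lambda s: s["open_compliance_items"] == 0,
}

def is_done(status):
    """Return (done, list of failing criteria)."""
    failing = [name for name, check in DONE_CRITERIA.items()
               if not check(status)]
    return (not failing, failing)

status = {"pass_rate": 0.97, "metric_measured": True,
          "submission_drafted": False, "open_compliance_items": 0}
print(is_done(status))  # not done: submission package still undrafted
```

The value is the failing list: "not done" is never a feeling, it is a named criterion with an owner.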
Tracking against the 90-day window
Review progress against the feature brief every Friday. The questions to answer:
- Is the feature on track to be feature-complete at end of week eight?
- Has any scope been added to the feature since week four? If yes, what has been removed to compensate?
- Are there any technical blockers that were not anticipated in the week three decisions? If yes, what is the resolution path and timeline?
A project that is 80% of the way through week eight but 60% complete on the feature has a problem that needs to be addressed explicitly - not "we will catch up next week."
Scope control
The most common threat to the build phase is scope expansion. A feature that looked simple in the brief acquires edge cases, design improvements, and related functionality requests during development. Each addition is small. Together, they push the feature-complete date past week eight and make App Store submission in week ten impossible.
The scope control rule: any change to the feature after week four requires a written decision that either accepts the change and removes equivalent scope elsewhere, or declines the change and logs it as a next-quarter item. There are no free scope additions in weeks five through eight.
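The scope control rule is easy to state and easy to erode, so it helps to make the log itself the mechanism. A sketch of the rule as a data structure — the class and method names are invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class ScopeLog:
    accepted: list = field(default_factory=list)   # (added, removed) pairs
    deferred: list = field(default_factory=list)   # next-quarter items

    def request_change(self, change, removed_scope=None):
        """A change is accepted only with an equivalent removal;
        otherwise it is logged as a next-quarter item. No third outcome."""
        if removed_scope:
            self.accepted.append((change, removed_scope))
            return "accepted"
        self.deferred.append(change)
        return "deferred"

log = ScopeLog()
log.request_change("barcode mode", removed_scope="drop PDF export")  # accepted
log.request_change("second AI feature")                              # deferred
```

The structure makes the rule's two outcomes the only outcomes: there is no method for a free scope addition.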
Weeks 9-10: App Store submission
App Store review requirements for AI features
Week nine is for App Store submission preparation. Do not wait until week ten to start this work - it is more involved than a standard submission and the consequences of an error are higher.
The App Store review requirements specific to AI features in 2026:
Privacy manifest. Apps that process user data with AI must declare the data types used and the purpose in the app's privacy manifest. Missing or incomplete entries trigger rejection. Complete the privacy manifest in week nine, reviewed against Apple's current required reasons API list.
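The incomplete-entry failure mode is checkable before submission. A hedged sketch that scans a parsed PrivacyInfo.xcprivacy file for collected data types declared without a purpose — the key names follow Apple's published privacy-manifest schema as of this writing, but verify them against Apple's current documentation before relying on this:

```python
import plistlib

def missing_purposes(manifest_bytes):
    """Return collected data types that declare no purpose —
    the kind of incomplete entry the text says triggers rejection."""
    manifest = plistlib.loads(manifest_bytes)
    return [
        entry.get("NSPrivacyCollectedDataType", "<unnamed>")
        for entry in manifest.get("NSPrivacyCollectedDataTypes", [])
        if not entry.get("NSPrivacyCollectedDataTypePurposes")
    ]
```

Run against the real manifest in week nine; an empty result is one fewer way to lose the 5-7 day review window to an avoidable rejection.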
Accuracy claims. Review every piece of UI copy that describes the AI feature's output. Language that implies the AI output is definitive, diagnostic, or advisory triggers App Store guideline 5.1 (safety) for health-adjacent features and 4.8.3 for medical or financial advice. Replace definitive language with analytical language: "suggested" instead of "detected," "based on available data" instead of "accurate to."
Content filters. If the feature generates text, images, or other content, document the content filtering implementation in the submission notes. Apple reviewers look for this documentation on AI feature submissions.
Submission notes. Write explicit submission notes for Apple's review team that address the AI feature directly: what it does, what data it uses, how the privacy manifest is configured, and why the accuracy claim language is appropriate. Reviewers who see a well-documented AI feature submission approve it faster than reviewers who have to investigate.
Submission timeline
Submit in week ten. App Store review for a well-prepared AI feature submission averages 5-7 days. Submitting in week ten leaves roughly two weeks of buffer before the week-twelve board presentation. If the first submission is rejected, the rejection is typically addressable within two to five days, allowing a resubmission and approval before week twelve.
A vendor who has not submitted AI features before will not know these specifics. Verify the vendor's App Store AI submission experience before week nine begins.
Weeks 11-12: Metrics and board presentation
Measuring the success metric
By week eleven, the feature is live and users are interacting with it. Measure the success metric defined in week one against the baseline established before the feature existed. If the metric is task completion time, measure it on the current user cohort and compare to the pre-feature baseline. If the metric is error rate, compare the current error rate to the pre-feature rate.
You need roughly two weeks of post-launch data for the board presentation. That is why the week-ten submission is not optional - weeks eleven and twelve are the data collection window.
The board presentation structure
The board presentation at 90 days is six slides, not thirty.
Slide 1: The AI feature, in one sentence. What it does and who uses it.
Slide 2: Three numbers. Users who have interacted with the feature since launch. Before-and-after on the success metric. Cost per user per month for the AI feature (either marginal cloud inference cost or amortized on-device development cost).
Slide 3: How it was built. One slide on the technical approach (on-device vs cloud, data source, model type) without technical jargon. One paragraph a CFO can read without stopping.
Slide 4: What went as planned and what was adjusted. Boards trust CTO presentations that include one honest problem encountered and how it was resolved. Presentations that claim everything went perfectly are viewed with suspicion.
Slide 5: Next-quarter scope. One paragraph on the single highest-impact next step - either expanding the feature to more user segments or adding the next AI feature using the same technical infrastructure.
Slide 6: Budget summary. What the pilot cost, what the next quarter costs, and what the three-year projected cost-per-user looks like at scale.
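The three Slide 2 numbers reduce to one small calculation. A sketch — every input figure below is invented for illustration, not a benchmark:

```python
def slide_two(users, baseline, current, monthly_ai_cost):
    """The three numbers for Slide 2: adoption, metric delta, unit cost.

    users           -- users who have interacted with the feature
    baseline        -- pre-feature value of the success metric
    current         -- post-launch value of the same metric
    monthly_ai_cost -- marginal cloud inference cost, or amortized
                       on-device development cost, per month
    """
    return {
        "active_users": users,
        "metric_change_pct": round((current - baseline) / baseline * 100, 1),
        "cost_per_user_month": round(monthly_ai_cost / users, 2),
    }

# Hypothetical pilot: task time 30 min -> 18 min, $2,100/month AI spend.
print(slide_two(users=4200, baseline=30.0, current=18.0, monthly_ai_cost=2100))
```

Having the unit-cost number on the same slide as the metric delta is what lets a CFO read Slide 2 without asking a follow-up question.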
Wednesday has run 90-day AI pilots from board mandate to App Store for US enterprise mobile clients. 30 minutes covers your specific feature, timeline, and board presentation structure.
What the board presentation contains
The board wants to see three things: evidence the mandate was executed, a number that shows it worked, and a plan for what comes next. The 90-day framework delivers all three.
Evidence the mandate was executed: the feature is live in the App Store. Users are interacting with it. The slide deck shows screenshots from the live app.
A number that shows it worked: the before-and-after on the success metric defined in week one. Task completion time down 40%. Error rate down 60%. User adoption at 34% in the first 14 days. One number with a before and after is more persuasive than ten numbers without context.
A plan for what comes next: a one-paragraph next-quarter scope. Not a full roadmap - that can be presented separately. The paragraph answers the question the board will ask: given what you learned, what do you do in Q4?
The board that issued the mandate in Q2 and receives this presentation in Q3 has evidence that the mandate produced a result on time, on budget, with a measured outcome. That closes the loop. The conversation shifts from "are you executing the mandate" to "what do we invest next quarter."
Not ready for a conversation yet? The writing archive has cost analyses, vendor comparisons, and decision frameworks for every stage of the buying process.
About the author
Praveen Kumar
Technical Lead, Wednesday Solutions
Praveen leads mobile architecture at Wednesday Solutions and has scoped and delivered AI pilot programs for US enterprise mobile apps across healthcare, logistics, and retail.
Four weeks from this call, a Wednesday squad is shipping your mobile app. 30 minutes confirms the team shape and start date.