How to Turn a Board AI Mandate Into a Deliverable Mobile Project
The board said "add AI to the app." That is not a project brief. Here is how to turn a mandate into something your team can scope, build, and ship.
"Add AI to the app" is not a project. It is a direction. The board has given you a mandate without a scope, a budget without a brief, and a deadline without a deliverable. Turning that into something your team can actually build requires three decisions the board did not make for you.
The good news is that the mandate is real and the timeline is not hypothetical. The board is not asking whether AI should be in the product. That decision has been made. The question on the table is which AI feature, in which part of the app, built by which team, and measurable by which metric.
Key findings
Board AI mandates fail most often not because the technology is hard but because the project was never scoped. A mandate without a defined first deliverable produces exploration that lasts until the board asks for a progress report, at which point there is nothing to show.
The first AI project should be selected for learnability, not ambition. A smaller feature that ships in 12 weeks and produces a measurable result teaches the organization more about AI delivery than a larger feature that ships in 24 weeks with no clear success metric.
Vendor capability is the gating factor. Most mobile vendors cannot deliver AI features - not because they lack ambition but because they lack the specific experience: on-device model integration, privacy review navigation, and AI quality assurance. Selecting a vendor without verifying those capabilities before signing the contract is the most common reason board AI mandates stall.
What the board actually means
When a board says "add AI," they typically mean one of three things - and they are not the same project.
They may mean: reduce our operating costs using AI. This translates to automation features - AI that handles tasks currently done by support agents, operations staff, or manual processes. The app is a delivery mechanism for efficiency gains.
They may mean: make our product more competitive. This translates to user-facing AI features - personalization, intelligent search, AI-powered recommendations. The app needs to feel smarter than the competition.
They may mean: demonstrate to investors and customers that we are an AI company. This translates to a visible, marketable AI feature that can be pointed to in earnings calls and press releases. The feature needs to be real, but it also needs to be legible to a non-technical audience.
Which of the three the board means changes the scope of the project entirely. A 30-minute conversation with the board member who raised the mandate - asking which outcome they are optimizing for - is the most valuable 30 minutes in the scoping process.
The four project types under AI
Once you know which outcome the board wants, the project type becomes clearer. Enterprise mobile AI features fall into four categories.
AI in the user experience. Smart search, personalized feeds, AI-generated content, predictive inputs. These features touch what users see and interact with. They are the most visible and the most scrutinized by compliance.
AI in operations. Document processing, automated data extraction, anomaly detection. These features run behind the user experience and reduce manual work. They are often lower-risk for compliance because they do not surface AI outputs directly to end users.
On-device AI. Features that run entirely on the device without sending data to a server. These are the most complex to build but the strongest answer to privacy and compliance concerns. They are also the most defensible features in regulated industries (a minimal sketch of the pattern follows this list).
AI-powered workflows. AI embedded in the development and release process rather than in the product itself. This reduces cost and increases velocity without requiring a user-facing AI feature. Often the fastest way to demonstrate AI adoption to a board.
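To make "runs entirely on the device" concrete, here is a minimal sketch of the iOS pattern. DocumentClassifier is a hypothetical bundled Core ML model, not a real library class - Xcode generates its Swift class from the .mlmodel file - and nothing in this path touches the network:

```swift
import CoreML
import UIKit
import Vision

/// Classifies a document image entirely on-device.
/// `DocumentClassifier` is a hypothetical bundled Core ML model;
/// Xcode generates this Swift class from the .mlmodel file.
func classifyDocument(_ image: UIImage, completion: @escaping (String?) -> Void) {
    guard let cgImage = image.cgImage,
          let classifier = try? DocumentClassifier(configuration: MLModelConfiguration()),
          let visionModel = try? VNCoreMLModel(for: classifier.model) else {
        completion(nil)
        return
    }

    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        // Top label from local inference; no pixel leaves the device.
        let top = (request.results as? [VNClassificationObservation])?.first
        completion(top?.identifier)
    }

    // Run inference off the main thread; Vision executes the model locally.
    DispatchQueue.global(qos: .userInitiated).async {
        let handler = VNImageRequestHandler(cgImage: cgImage)
        try? handler.perform([request])
    }
}
```

The privacy argument is structural: there is no endpoint for a compliance reviewer to audit because there is no request.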
How to scope the first AI project
The first AI project should meet three criteria: it should be self-contained (not dependent on changes to other parts of the app or the backend), it should be measurable (a specific metric that will tell you whether it worked), and it should be completable in under 16 weeks.
Projects that fail the self-contained test create dependencies that extend timelines. Projects that fail the measurable test produce features with no clear success signal. Projects that fail the 16-week test are usually too ambitious for a first engagement and produce a long development cycle before anything can be evaluated.
For most enterprise mobile apps, the first AI project that meets all three criteria is one of three: an AI search improvement, a document processing feature, or a behavioral notification system. All three are self-contained, measurable in user behavior data, and completable in 12 to 16 weeks.
If you have a board AI mandate and need to turn it into a scoped project, a 30-minute call covers which feature type fits your app and what a realistic timeline looks like.
Book my call →
What your vendor needs to deliver it
An AI feature in a mobile app is a different delivery problem than a standard feature. The vendor needs experience with AI model integration, on-device or API latency management, AI-specific quality assurance, and privacy review navigation.
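Latency management is the easiest of the four to underestimate. A hosted model can take several seconds to respond, and a mobile UI cannot block for that long. A common pattern - sketched below with assumed names, where fetchRemoteSummary stands in for a vendor SDK call and the two-second budget is illustrative - is to race the model call against a fixed budget and fall back to a non-AI default:

```swift
import Foundation

// Hypothetical call to a hosted model; stands in for a vendor SDK.
func fetchRemoteSummary(_ text: String) async throws -> String {
    // Real code would POST to the model endpoint and decode the response.
    try await Task.sleep(nanoseconds: 500_000_000)  // simulated network latency
    return "Summary: " + String(text.prefix(60))
}

/// Returns an AI summary if the remote model answers within the latency
/// budget, otherwise a plain truncation, so the UI never blocks on the model.
func summary(for text: String) async -> String {
    await withTaskGroup(of: String?.self) { group -> String in
        group.addTask {
            try? await fetchRemoteSummary(text)  // the AI path
        }
        group.addTask {
            // Latency budget: give up on the model after two seconds.
            try? await Task.sleep(nanoseconds: 2_000_000_000)
            return nil
        }
        let first = await group.next() ?? nil  // first task to finish wins
        group.cancelAll()                      // cancel the slower task
        return first ?? String(text.prefix(120))  // non-AI fallback
    }
}
```

A vendor who has shipped a production AI feature will have an answer in this shape before you finish asking how they handle a slow or unavailable model.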
Most mobile vendors do not have all four. Ask for a reference for an AI feature they have shipped - not a pilot, but a live feature in production. If they cannot name one, they are not the right vendor for the first AI project.
The vendor selection conversation for an AI mandate is not the same as the conversation for a standard mobile engagement. The questions are different, the evaluation criteria are different, and the risk of selecting the wrong vendor is higher because the timeline pressure from the board is real.
How to report progress to the board
At 90 days, the board expects to see three things: a defined project, evidence that it has started, and a metric that will confirm success.
A defined project means: a named feature, a delivery date, and a vendor with a signed contract. Evidence it has started means: a development update with something built or in progress. A success metric means: a specific number tied to a business outcome.
"We are exploring AI options" is not a 90-day report. "We have scoped an AI-powered document processing feature, development started three weeks ago, and we expect to see a 20 percent reduction in manual review time by Q3" is.
Wednesday has delivered board-mandated AI features for enterprise mobile teams across regulated and non-regulated industries. A 30-minute call covers what the first project should be and what it takes to get it to market.
Book my call →
The writing archive has vendor comparison guides, cost benchmarks, and decision frameworks for every stage of the enterprise mobile buying process.
Read more decision guides →
About the author
Ali Hafizji
LinkedIn →
CEO & Co-founder, Wednesday Solutions
Ali has been building mobile apps for 15 years and is the author of two published iOS development books. He has shipped Flutter, iOS, and Android products across travel, gig economy, and ecommerce, and leads enterprise AI enablement at Wednesday Solutions, where he architects the AI-native engineering workflow the team ships with on every engagement.
Four weeks from this call, a Wednesday squad is shipping your mobile app. 30 minutes confirms the team shape and start date.
Get your start date →
Shipped for enterprise and growth teams across US, Europe, and Asia