
What Your Board Actually Wants When They Say "Add AI to the App"

A board mandate to "add AI" rarely means what engineering teams assume it means. Here is how to translate the mandate into a project your vendor can actually deliver.

Mohammed Ali Chherawalla · Co-founder & CRO, Wednesday Solutions
8 min read · Published Feb 25, 2026 · Updated Feb 25, 2026
4x faster with AI · 2x fewer crashes · 10x more work, same cost · 4.8 on Clutch
Trusted by teams at American Express, Visa, Discover, EY, Smarsh, Kalshi, and BuildOps

When a board says "add AI to the app," the mandate arrives without a specification. It is a direction, not a brief. The specific feature, the timeline, the success metric, and the vendor requirement are all undefined. The CTO — or VP of Product, or whoever receives the mandate — has to translate it into something buildable before any work can start.

Most of that translation happens inside the organisation, often under time pressure and without a clear framework. This article gives you that framework.

Key findings

Board AI mandates typically express one of three underlying goals: reduce operating costs, increase user engagement, or demonstrate competitive parity. Each maps to a different type of feature and a different vendor requirement.

The most common scoping mistake is starting with the technology rather than the user problem. Features built to a technology brief rather than to a user problem rarely reach production.

A vendor that is genuinely AI-native — one that has shipped AI features in production apps, not just used AI in their development process — is a different category from the typical mobile vendor. The evaluation questions are different too.

What the board is actually asking

Board-level AI mandates come from one of three places.

Competitive pressure. A competitor has announced an AI feature. The board has seen the press release, or a customer has asked why your product does not have the equivalent. The underlying ask is: we need to be seen as keeping pace.

Cost reduction. The board has read that AI reduces operational costs — in customer support, in content moderation, in document processing. The underlying ask is: find a way to use AI to remove manual work from the business.

Board meeting visibility. AI appears on the agenda at the board level because investors, analysts, or advisors expect it to. The underlying ask is: we need an AI story we can tell in the next board pack or investor update.

Each of these mandates maps to a different feature type, a different timeline, and a different definition of success. The critical step before talking to any vendor is identifying which of the three is driving the mandate.

Three things it almost never means

It almost never means "rebuild the app with AI." A mandate to add AI is not a mandate to replace what is already working. An AI layer can be added to an existing app without touching the core user experience. A vendor that responds to a mandate to "add AI" by proposing a substantial rebuild is either misreading the brief or upselling work that is not required.

It almost never means "add a chatbot." Chatbots are the default AI feature for vendors that have not thought carefully about the use case. They are visible, easy to demo, and rarely add meaningful value to enterprise mobile apps. A chatbot answer to a board AI mandate typically produces a feature that users ignore and that the board quickly recognises as a surface response rather than a meaningful one.

It almost never means "ship everything at once." The board wants evidence of progress. A single AI feature that is genuinely useful to users, shipped within 90 days, is a stronger response to an AI mandate than a roadmap of five features that take 18 months. The board wants a story they can tell, and one shipped feature is a better story than a delivery plan.

What it usually does mean

After working through this translation with more than a dozen enterprise leadership teams, we have found that a board mandate to add AI most commonly resolves to one of four features:

A smart search or recommendation layer. Users search for something in the app, and the results are more relevant than a keyword match. Or users are shown content, products, or actions based on their history. This is the most frequently requested and the most deliverable in a short timeline. The data usually already exists in the app.

A document or image analysis feature. Users upload a document, a photo, or a form, and the app extracts, classifies, or summarises the relevant information. This is common in insurance, logistics, healthcare, and field service apps. The complexity depends on the document type and the accuracy required.

An automated support or resolution layer. Common queries or actions that currently require a human are handled automatically. This maps to the cost-reduction version of the mandate and tends to have a measurable ROI outcome that satisfies the board's financial framing.

A real-time monitoring or alerting feature. The app detects a change in user behavior, system state, or external data and surfaces an alert or recommendation. Common in fintech, healthcare, and operations tools.

Each of these is a real feature that delivers real value. None of them require rebuilding the app. All of them have a definable success metric that the board can track.

If you have a board AI mandate and need to scope it into a project your vendor can deliver, a 30-minute call covers the translation.


How to scope the response

Before engaging a vendor, answer three questions.

What user problem would an AI feature solve? Not "what AI feature would be impressive" or "what do competitors have." What specific thing does a user currently do manually, slowly, or not at all, that an AI feature could do better? The feature that solves a real user problem gets used. The feature that demonstrates AI capability gets ignored after the demo.

What data does the app already have? Most AI features in mobile apps use data the app already collects — search queries, user history, document uploads, usage patterns. A feature that uses existing data ships faster, costs less, and requires less user behavior change than one that requires new data collection. Audit what the app already knows before deciding what to build.

What is the measurable outcome? The board will ask what happened. "We shipped an AI feature" is not an outcome. "Search success rate increased by 22 percent" is. "Manual claims processing time dropped by 40 percent" is. Define the metric before you scope the feature. This also tells you whether the feature is worth building — if you cannot define a measurable outcome, the feature does not have a clear enough purpose.

The question your vendor cannot skip

Before a vendor can scope an AI feature for your app, they need to answer one question honestly: have they shipped an AI feature to a production mobile app, and what happened after it shipped?

Not "we have AI in our development process." Not "we have experience with machine learning." Shipped. To a production app. With real users. And a track record of what the feature did after it went live.

A vendor that has shipped AI features in production has solved the problems you will face: the App Store disclosure requirements, the latency tradeoffs between on-device and cloud inference, the compliance implications of handling user data through an AI model, the model drift that occurs when user behavior changes over time. These are not theoretical problems. They are practical problems that every AI feature in a production app encounters.

A vendor that has not shipped AI features in production will encounter these problems on your project, for the first time, on your timeline, and on your budget.

The board mandate gives you the authority to make a vendor change if your current vendor cannot answer this question with specifics. Use it before the project starts, not after the pilot stalls.

Wednesday has shipped AI features across healthcare, fintech, edtech, and field operations apps. A 30-minute call covers what is realistic for your mandate, what it costs, and what timeline to tell your board.



About the author

Mohammed Ali Chherawalla

Co-founder & CRO, Wednesday Solutions

Mac co-founded Wednesday Solutions as CTO and has shipped iOS, Android, and React Native apps at scale across fintech and logistics. He is one of the leading practitioners of on-device AI for enterprise mobile, and is the creator of Off Grid, one of the leading on-device AI applications in the world. He now leads commercial strategy while staying close to architecture, AI enablement, and vendor evaluation for enterprise clients.
