How to Know If Your Current Vendor Can Deliver Your Board AI Mandate
Before you hand an AI mandate to your existing mobile vendor, run four checks. The answers tell you whether to proceed, add a specialist, or find a new team entirely.
The board gave you an AI mandate. Your first instinct is to hand it to the team already working on your app. They know the codebase. The relationship is established. The onboarding cost is zero.
Before you do, run four checks. Each one takes under an hour. Together they tell you whether your current vendor can deliver the mandate, needs support to deliver it, or is not the right team for this project.
The cost of skipping the checks is a six-month project that stalls at month three when the capability gap becomes visible. Four hours of assessment now versus three months of delay later is the trade-off.
Key findings
Vendor AI capability is not self-reported accurately. Vendors who cannot deliver AI features consistently describe themselves as AI-capable because they use AI tools in their workflow. The distinction between using AI tools and having delivered AI features in production apps is the one that matters - and it is testable in a single conversation.
The checks are structured to produce a yes, a conditional yes, or a no. A conditional yes - the vendor has partial capability and needs support - is a useful outcome. It tells you exactly what needs to be added rather than leaving the question open.
Running the assessment before committing the budget is the normal practice in well-run technology organizations. If the vendor relationship makes you reluctant to run it, that reluctance is a signal worth examining.
The four checks
The four checks test four distinct capability areas. A vendor can pass three and fail one. The pattern of passes and fails tells you more than the overall score.
Check one tests delivery track record. Check two tests workflow maturity. Check three tests regulated environment experience. Check four tests team composition. A vendor who passes all four is a strong candidate for the AI work. A vendor who passes one or two has a gap that needs to be filled before the project starts.
Check one: shipped AI reference
Ask the vendor to name a production AI feature they have delivered in a mobile app. Not a prototype, not an internal tool - a live feature with real users, currently running in an app on the App Store or Google Play.
The reference should be specific enough that you can download the app and use the feature. If the work was for a client they cannot disclose publicly, ask for a description of the feature and the metrics it produces. A vendor with genuine AI delivery experience can describe the feature, the model it uses, the latency in production, and at least one metric that shows it is performing.
A vendor who cannot name a shipped AI reference has not delivered AI in production. Their next project for you would be their first.
Check two: workflow evidence
Ask the vendor to show you an AI workflow output from a recent project. Not a description of their workflow - an actual output.
Three artifacts work for this test: an AI-generated code review comment from a recent release, a sample of AI-generated release notes from a shipped version, or a screenshot regression comparison showing AI-detected visual differences between builds. Any of the three demonstrates that AI is integrated into the day-to-day delivery process, not just claimed as a capability.
A vendor who can produce one of these artifacts in the conversation is using AI operationally. A vendor who describes their AI tools without being able to show an output is not.
Check three: compliance track record
If your app operates in a regulated industry - financial services, healthcare, edtech with minors' data - ask specifically about their experience navigating AI-related compliance requirements.
The questions: Have you shipped an AI feature in a HIPAA, SOC 2, or FINRA-regulated environment? How did you handle the privacy review for the AI data flows? What did the App Store review process look like for the AI features? Have you worked with a CISO on an AI feature approval?
A vendor with genuine regulated industry AI experience can answer each of these specifically. A vendor without it will answer in general terms about compliance practices. General terms are not wrong - they are insufficient for a regulated environment project.
If you are assessing whether your current vendor can deliver an AI mandate and want a structured way to run the conversation, a 30-minute call covers the framework and what to look for in the answers.
Book my call →
Check four: team composition
Ask who specifically would work on the AI features. Not the team in general - the individual engineers who would own the AI integration, model selection, and AI QA.
Ask what each person has built in the AI space. Ask whether they have worked on on-device AI or only cloud-connected features. Ask whether there is an AI specialist on the team or whether AI work would be distributed across generalist engineers.
A vendor with genuine AI capability can name specific team members with specific AI experience. A vendor who answers this question in the abstract - "our team has strong AI expertise" - has not actually assigned anyone with that expertise to your project.
What the results tell you
Four passes: proceed with the vendor for the AI work. The track record, workflow, compliance experience, and team composition are all in place.
Three passes, one fail: proceed with a structured proof-of-concept before committing the full budget. The gap is specific and testable. If the proof-of-concept meets its success criteria, the fail was a minor capability gap. If it does not, you have six weeks of investment rather than six months.
Two or fewer passes: the vendor is not the right team for the AI mandate. The options are to bring in a specialist vendor to work alongside them, or to evaluate a new vendor for the full engagement. Either path is better than proceeding on the assumption that the gaps will close during the project.
Wednesday steps in when existing mobile vendors reach their AI capability limits. A 30-minute call covers what that transition looks like and how to structure the engagement.
Book my call →
The writing archive has vendor comparison guides, cost benchmarks, and decision frameworks for every stage of the enterprise mobile buying process.
Read more decision guides →
About the author
Ali Hafizji
LinkedIn →
CEO & Co-founder, Wednesday Solutions
Ali has been building mobile apps for 15 years and is the author of two published iOS development books. He has shipped Flutter, iOS, and Android products across travel, gig economy, and ecommerce, leads enterprise AI enablement at Wednesday Solutions, and architects the AI-native engineering workflow the team uses on every engagement.
Four weeks from this call, a Wednesday squad is shipping your mobile app. 30 minutes confirms the team shape and start date.
Get your start date →
Shipped for enterprise and growth teams across US, Europe, and Asia