The Board Wants AI in the App. The CISO Says No. How US Enterprise Teams Resolve This in 2026
This standoff is blocking 61% of enterprise mobile AI projects for 6 months or more. On-device AI resolves the specific objection the CISO is making. Here is how to present it.
The board mandate came down in Q1. "Put AI in the mobile app." The VP Engineering brought a proposal to the CISO. The CISO said no. The project went back to the queue. It has been there for seven months. This pattern is blocking 61% of enterprise mobile AI initiatives. The resolution is not more negotiation. It is an architecture that removes the specific thing the CISO is blocking on.
Key findings
61% of enterprise mobile AI initiatives are delayed 6 months or more due to CISO or legal review. The delay is caused by cloud AI architecture, not AI in general.
On-device AI eliminates the primary CISO objection: data leaving the device. The objection disappears because the data flow that triggered it does not exist.
Wednesday has navigated this conversation for enterprise clients across healthcare, financial services, and logistics. Off Grid is the reference implementation: complete AI suite, no cloud, publicly auditable privacy claims.
Features reviewed and approved before the build starts proceed on normal development timelines. CISO review blocks features that are presented after the architecture is committed, not before.
The specific standoff
The board mandate is real. "Use AI to reduce costs, increase efficiency" or some variation of it came from the board after reading about AI in every publication they consume. The executive team brought it to the product organisation. The VP Engineering or Chief Product Officer assembled a proposal.
The proposal involved a cloud AI API. An integration with GPT-4o, Claude, or Gemini for text features. Or a cloud speech API for voice transcription. Or a cloud image model for visual features. The proposal was technically sound and moved quickly to the CISO's desk.
The CISO said no, or more precisely: "not until we understand the data implications." That review has been running for months. The AI initiative is blocked.
This is not a story about a CISO being obstructionist. It is a story about a technically sound proposal that created the exact problem the CISO is responsible for preventing: user data leaving the organisation's control and traveling to a third-party server on every AI interaction.
The CISO is right to block it. The problem is that the VP Engineering and the board are left interpreting "not until we understand the data implications" as a delay, when it is actually a direction: build it differently.
What the CISO is actually blocking
The CISO's objection is not to AI. It is to uncontrolled data egress.
A cloud AI API integration creates a data flow: user input travels from the device, across the internet, to an AI vendor's server. The vendor runs the model and returns a result. In that process:
- User data is in transit over the public internet (encryption helps but does not eliminate all risk)
- User data is processed on a third-party server under the vendor's data handling practices
- The vendor's terms govern what happens to that data — retention, secondary use, training use
- Changes to those terms are outside the enterprise's control
- In regulated industries, this flow may violate data residency requirements or require specific agreements that are not yet in place
For a CISO in healthcare, every AI query that contains patient information is a potential HIPAA violation without a BAA in place. For a CISO in financial services, customer financial data traveling to an unapproved third party may violate regulatory obligations. For a CISO in legal services, attorney-client communications on an external server are a privilege risk.
The CISO is not blocking AI. The CISO is blocking an architecture that creates compliance exposure they are responsible for managing.
Why on-device AI resolves the objection
On-device AI removes the data flow that the CISO is blocking.
When AI inference runs on the user's device — using an open-source model like Llama 3 or Phi-4 via llama.cpp, or Whisper for voice — the user's input never leaves the device. There is no API call. There is no data in transit. There is no third-party server processing user data. There are no vendor terms to review for the inference step.
The CISO's checklist for the cloud AI proposal:
- Data leaving the device: yes [blocks]
- Third-party vendor agreement required: yes [review pending]
- Data residency compliance: unknown [review pending]
- Vendor security assessment: not completed [review pending]
- Training use prohibition negotiated: not yet [review pending]
The CISO's checklist for the on-device AI proposal:
- Data leaving the device: no [cleared]
- Third-party vendor agreement required: no (open-source model, no vendor) [cleared]
- Data residency compliance: data stays on device [cleared]
- Vendor security assessment: not applicable [cleared]
- Training use prohibition: not applicable [cleared]
Every item that caused the original block is gone. The CISO review that took seven months for the cloud proposal takes one meeting for the on-device proposal, because the objections are answered before the questions are asked.
How to present on-device AI to a CISO
The presentation to the CISO does not start with AI capabilities. It starts with the architecture.
Lead with the data flow. Show a diagram of the on-device AI architecture: user input stays on the device, inference runs locally, no external API call. One slide, one data flow diagram. The CISO will spend 30 seconds on it and either ask follow-up questions or move on. If the diagram is clear, the follow-up questions are manageable.
Follow with the model provenance. "We are using Llama 3, published by Meta under a commercial use license. The model weights are downloaded once and stored on the device. No interaction data is transmitted to Meta or to any inference service." If the CISO is unfamiliar with open-source AI licensing, explain the distinction: using a model's weights is not a service relationship. There is no ongoing data transmission.
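One concrete way to back the provenance claim in a CISO meeting is to verify the downloaded model weights against a checksum published by the model source, so the file on the device is provably the file that was reviewed. A minimal sketch, assuming the publisher provides a SHA-256 checksum for the weights file (function names and paths here are illustrative, not part of any specific product):

```python
import hashlib
from pathlib import Path


def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 so multi-gigabyte weight files
    are hashed in constant memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()


def verify_weights(path: Path, expected_sha256: str) -> bool:
    """Compare the local model file against the checksum published
    alongside the model release."""
    return sha256_of(path) == expected_sha256.lower()
```

Run once after the one-time download; if the hash matches, the reviewed artifact and the deployed artifact are the same bytes, with no ongoing vendor relationship implied.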
Address the questions before they are asked. Use the section below to prepare answers to the five questions every CISO asks about on-device AI.
Close with the reference. Off Grid is a production on-device AI product serving 50,000+ users, with a publicly stated no-cloud-inference architecture. For a CISO who wants proof that this architecture works in production before approving it for enterprise deployment, Off Grid is that proof.
Preparing to present an on-device AI proposal to your CISO? A 30-minute call produces the architecture documentation and answers to the objections before your meeting.
Get my recommendation →

Questions the CISO will ask
These five questions come up in every CISO conversation about on-device AI.
"What model are you using and who published it?" Llama 3 (Meta), Phi-4 (Microsoft), Gemma 2 (Google), or Mistral 7B (Mistral AI). Each has a commercial use license that permits on-device deployment. The publisher does not receive data when the model is used. The model weights run locally.
"Is any data transmitted during inference?" No. Inference is fully local. No network connection is required or used during AI inference. You can verify this with a network monitor running during AI feature use.
"What happens when the model is updated?" Model updates are delivered in app updates, like any other app component. The update process is the same as updating any part of the app binary. No data is transmitted as part of model updates.
"What if an employee uses the AI feature to process sensitive data? Where does it go?" It stays on their device. On-device AI does not create a mechanism for data exfiltration through the AI feature. The AI operates on data that is already on the device and produces results that stay on the device.
"Is this auditable?" Yes. The on-device inference stack (llama.cpp for text, Whisper for voice) is open-source with auditable code. The data flow can be verified with network analysis tools. There are no obfuscated server calls.
The board presentation that satisfies both
After the CISO approves the on-device architecture, the board presentation becomes straightforward.
The board asked for AI in the app. The CISO said the original cloud proposal had data handling problems. The team designed an alternative that keeps data on the device. The CISO reviewed and approved it. The feature is being built.
That is the sequence. It demonstrates that the executive team treated the board mandate not just as an execution race but as a governance question. CISOs who have approved proposals often actively support the board presentation, because the approved proposal reflects their risk management work.
The capability case closes the presentation: what the AI feature actually does, how users interact with it, and what outcome it produces for the business. The CISO's approval is the proof that this capability is being delivered correctly.
Off Grid as the reference implementation
Wednesday built Off Grid — a complete on-device AI suite — to answer the question that enterprise CISOs and procurement teams ask most often: "Has this been done at scale, and is there an independently verifiable record of it?"
Off Grid ships text AI (llama.cpp), voice transcription (Whisper), image generation (MNN/QNN/Core ML), and vision-language features from a single React Native app. 50,000+ users. 1,700+ GitHub stars. Zero paid marketing. Zero server calls for AI inference.
The GitHub page is public. The architecture is documented. A security researcher, a CISO's technical team, or an enterprise procurement team can review the implementation and verify the claims independently.
For an enterprise team presenting on-device AI to a CISO who asks for precedent, Off Grid is the precedent. Not a case study described in a vendor's sales deck — a public product with public code and verifiable user numbers.
How Wednesday navigates this conversation
Wednesday works with enterprise teams on both sides of this conversation: the board-mandate side (what AI capability to build and how fast) and the CISO review side (what architecture satisfies the data requirements).
The pattern Wednesday has seen across healthcare, financial services, and logistics is consistent: cloud AI proposals block at CISO review for months; on-device AI proposals clear CISO review in one or two meetings.
The conversation shifts from "how do we negotiate vendor terms" to "how do we build it." That shift turns a 7-month delay into a 10-week build. The CISO gets the compliance outcome they need. The board gets the AI capability they asked for. The team ships.
Ready to turn your CISO objection into a 10-week build? A 30-minute call maps the specific objections in your organisation to an on-device architecture that resolves them.
Book my 30-min call →

The writing archive covers AI compliance frameworks, CISO review preparation, and on-device AI architecture for enterprise mobile teams.

Read more decision guides →

About the author
Ali Hafizji
CEO, Wednesday Solutions

LinkedIn →
Ali has navigated the board mandate vs CISO objection conversation for enterprise mobile AI projects across healthcare, financial services, and logistics, and built the on-device AI reference implementation in Off Grid.
Four weeks from this call, a Wednesday squad is shipping your mobile app. 30 minutes confirms the team shape and start date.
Get your start date →