Mobile AI Vendor Risk Scorecard: How to Evaluate Data Practices Before You Commit in 2026

Ten questions, scored 0-3. The average cloud AI vendor scores 14-19 out of 30. On-device AI scores 30.

Mohammed Ali Chherawalla · CRO, Wednesday Solutions
9 min read·Published Apr 24, 2026·Updated Apr 24, 2026

The average mobile vendor proposing a cloud AI feature scores 14 to 19 out of 30 on the data risk scorecard below. A score of 14 means material gaps in at least four of the ten risk areas. A score of 30 — achievable only with on-device AI or a comprehensive enterprise cloud agreement — means every risk area is covered.

This scorecard was built from ten due diligence questions that matter most to CISOs, legal teams, and compliance officers evaluating AI data practices. Use it before any AI vendor agreement is signed.

Key findings

The average mobile vendor offering cloud AI scores 14-19 out of 30 on the ten-question risk scorecard below.

A perfect score of 30 is achievable only with on-device AI or with a cloud AI enterprise agreement that explicitly addresses all ten risk factors.

The questions most commonly failed: change-of-control clauses, model training rights opt-out, and third-party model vendor audit trail.

Wednesday scores 30/30 using on-device AI with no data leaving the device — the risk profile is verifiable before any agreement is signed.

Why AI vendor risk is different from standard vendor risk

Standard vendor risk assessments cover data security, breach notification, and service reliability. For a CRM, an HR platform, or a document storage vendor, these three areas cover most of the risk.

AI vendors introduce four risks that standard vendor assessments do not cover.

Model training rights. Cloud AI vendors may use data submitted during inference to train or improve their models. A clause that permits this means your users' queries, documents, or inputs could become training data for the vendor's next model release. For enterprises handling proprietary strategy, patient records, financial data, or client communications, this is a material risk that standard vendor assessments do not flag.

Data residency opacity. AI models are deployed across distributed cloud infrastructure. The vendor may not be able to confirm which data centre jurisdiction processed a specific user's query. For enterprises with data residency requirements (GDPR, US federal contractor requirements, financial services regulation), "we process in US regions" may be insufficient if regional routing cannot be guaranteed.

Third-party model dependencies. Many mobile vendors integrate AI by calling a model API that itself calls another model API. The chain of processors may include vendors that were never disclosed to you. Your enterprise agreement is with the mobile vendor; the mobile vendor's agreement is with the AI API vendor; the AI API vendor's agreement is with their compute provider. Your users' data may flow through all of them.

Change-of-control exposure. AI companies are frequent acquisition targets. A vendor that scores well today may be acquired tomorrow by a company with different data practices. Without a protective change-of-control clause, the acquisition can move your users' data to the acquirer's infrastructure without renegotiation.

The ten-question scorecard

Score each question from 0 (not addressed or unknown) to 3 (fully addressed with contractual guarantees). The maximum score is 30.

Question 1: Model training rights opt-out
Does the vendor agreement explicitly state that data submitted during inference will not be used to train or improve models, and is this guarantee contractual rather than policy-level?

  • 0: No opt-out available; default training rights apply
  • 1: Policy-level opt-out available but not contractual
  • 2: Contractual opt-out available in standard agreement
  • 3: Contractual opt-out with audit rights to verify compliance

Question 2: Data retention policy
How long does the vendor retain data submitted during inference, and is the retention period contractually bounded?

  • 0: Retention period not disclosed or unlimited
  • 1: Retention period disclosed but not contractually bounded
  • 2: Contractually bounded retention with defined maximum period
  • 3: Contractually bounded retention with zero-retention option for enterprise tier

Question 3: Data deletion rights
Can you require deletion of all data associated with your account, including inference logs and derived data?

  • 0: No deletion rights stated
  • 1: Deletion available on request with undefined timeline
  • 2: Deletion available with defined timeline (e.g., 30 days)
  • 3: Deletion available with defined timeline and deletion confirmation audit trail

Question 4: Data residency jurisdiction
Can the vendor confirm the specific jurisdictions in which your data will be processed?

  • 0: Jurisdiction not disclosed or "various regions"
  • 1: General region stated (e.g., "US East") without specific jurisdiction guarantee
  • 2: Specific jurisdiction contractually guaranteed
  • 3: Specific jurisdiction guaranteed with right to restrict to a single region

Question 5: Change-of-control clause
What protection does the agreement provide in the event of vendor acquisition?

  • 0: No change-of-control clause
  • 1: Notification required but no termination rights
  • 2: Termination rights upon acquisition with data deletion guarantee
  • 3: Termination rights plus acquirer must honor agreement terms for a defined period

Question 6: Breach notification timeline
What is the contractually committed timeline for notifying you of a breach affecting your data?

  • 0: No contractual breach notification requirement
  • 1: Notification required but timeline undefined
  • 2: Notification within 72 hours (GDPR equivalent)
  • 3: Notification within 24 hours with root cause analysis within 7 days

Question 7: Audit rights
Can you audit the vendor's data handling practices, or commission an independent audit?

  • 0: No audit rights stated
  • 1: Vendor provides audit reports on request (e.g., SOC 2 report)
  • 2: Third-party audit rights available
  • 3: Full audit rights with on-site access available under enterprise agreement

Question 8: On-device option available
Does the vendor offer an on-device AI option for features where data cannot leave the device?

  • 0: No on-device option; cloud processing only
  • 1: On-device option mentioned but not productised
  • 2: On-device option available for some features
  • 3: Full on-device AI capability with verified production deployment

Question 9: Third-party model vendor audit trail
Can the vendor identify all third-party model or compute providers in the inference chain?

  • 0: Sub-processors not disclosed
  • 1: General sub-processor categories disclosed
  • 2: Specific sub-processors identified and listed in agreement
  • 3: Specific sub-processors with their own data agreements available for review

Question 10: Open-source alternatives offered
Does the vendor offer implementation using open-source models that eliminate the vendor relationship?

  • 0: Only proprietary cloud models offered
  • 1: Open-source models mentioned but not implemented
  • 2: Open-source models implemented in some engagements
  • 3: Open-source on-device models as default recommendation where feasible

A 30-minute call with a Wednesday engineer walks through this scorecard for your specific vendor shortlist and compliance requirements.

Get my recommendation

How to score each question

For each question, the baseline is what the vendor says publicly. The contractual verification is what the agreement actually contains. Vendors score at the lower of the two — a strong public commitment that is not in the agreement scores as if the commitment does not exist.

Ask for the enterprise agreement, the data processing addendum, and the sub-processor list before scoring. These three documents contain the information needed to score all ten questions. Verbal commitments from a sales engineer are not scoreable.

Interpreting your score

Score range and interpretation:

  • 25-30: Low risk. Material data governance concerns are addressed.
  • 18-24: Moderate risk. Specific gaps exist and should be addressed before approval.
  • 12-17: High risk. Multiple material gaps. CISO escalation recommended.
  • Below 12: Very high risk. Fundamental data governance commitments are absent.

For regulated industries (healthcare, financial services, legal), the threshold for low-risk should be 27-30, not 25-30. The cost of a data governance failure in a regulated industry is disproportionate to the cost of requiring a higher baseline from vendors.
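The scoring rule (take the lower of the public commitment and the contractual guarantee) and the interpretation bands above can be sketched as a short script. This is an illustrative sketch, not a published tool: the function names and the sample vendor answers are assumptions, and the treatment of a 25-26 score in a regulated industry as moderate risk is an interpretation of the raised threshold.

```python
def question_score(public: int, contractual: int) -> int:
    """Score one question (each input 0-3) at the lower of the public
    commitment and the contractual guarantee: a public promise that is
    absent from the agreement scores as if it does not exist."""
    return min(public, contractual)

def interpret(total: int, regulated: bool = False) -> str:
    """Map a 0-30 total to the scorecard's risk bands. For regulated
    industries, the low-risk floor rises from 25 to 27."""
    low_risk_floor = 27 if regulated else 25
    if total >= low_risk_floor:
        return "Low risk"
    if total >= 18:
        return "Moderate risk"
    if total >= 12:
        return "High risk"
    return "Very high risk"

# Hypothetical cloud AI vendor: (public, contractual) per question,
# with gaps in change of control, on-device, and open-source options.
answers = [(1, 1), (2, 2), (3, 3), (2, 2), (1, 0),
           (2, 2), (3, 2), (1, 1), (2, 1), (0, 0)]
total = sum(question_score(p, c) for p, c in answers)
print(total, interpret(total))        # 14: High risk, in the typical range
print(interpret(30))                  # on-device AI: Low risk
print(interpret(26, regulated=True))  # regulated industry: Moderate risk
```

Note how the sample vendor loses points on Question 5 and Question 7 specifically because the contractual guarantee lags the public commitment.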

The questions most vendors fail

Three questions in the scorecard have the highest failure rate in vendor evaluations.

Model training rights opt-out (Question 1). Most AI API vendors include model training rights in their standard terms of service. The enterprise agreement carve-out exists but requires active negotiation. Vendors who offer the standard tier as their "enterprise offering" do not include this carve-out. The failure shows up as a score of 1 — opt-out available but not contractual.

Change-of-control clause (Question 5). The majority of cloud AI vendor agreements do not include protective change-of-control language. The vendor is frequently a startup where acquisition is a strategic goal. A vendor who has not thought through their change-of-control obligations to enterprise customers scores 0 here.

Third-party model vendor audit trail (Question 9). Many mobile vendors integrate AI through intermediary APIs that themselves call other model providers. The mobile vendor's enterprise agreement may not list the downstream AI model vendor as a sub-processor. The mobile vendor may not know the full sub-processor chain. This question exposes opacity that is common and material.

Why on-device AI scores 30/30

On-device AI with open-source models achieves a perfect score because the scoring framework is built around data flow risk. No data flows to a vendor; no vendor-related risks exist.

Question 1 (model training rights): scores 3 — there is no vendor to train on the data. Question 2 (data retention): scores 3 — no data is retained anywhere outside the device. Questions 3-10 follow the same logic.

The only categories where on-device AI requires any consideration are Question 8 (on-device option available) and Question 10 (open-source alternatives offered), both of which score 3 by design when on-device is the architecture.

How Wednesday uses this scorecard

Before any AI engagement, Wednesday runs this scorecard against the proposed AI architecture. For on-device engagements, the scorecard takes 20 minutes because the answers to most questions are "not applicable — no external data flow." For cloud AI engagements, the scorecard identifies the specific vendor agreement provisions that need to be addressed before the feature can ship to users.

The scorecard output is included in the engagement scope document given to the client before build starts. The client's legal and compliance teams can review the scorecard alongside the architecture decision and raise any concerns before engineering time is committed.

Wednesday's own AI data practices score 30/30. Off Grid runs on-device with no telemetry, no vendor data relationship, and no cloud fallback. The architecture is the same one Wednesday recommends to every enterprise client for whom on-device AI is technically feasible.

Wednesday runs this scorecard on every AI engagement before build starts. 30 minutes covers your specific vendor shortlist and compliance context.

Book my 30-min call


More vendor evaluation frameworks, compliance guides, and AI architecture decision tools are in the writing archive.

Read more decision guides

About the author

Mohammed Ali Chherawalla


CRO, Wednesday Solutions

Mohammed Ali leads business development at Wednesday Solutions and has run vendor risk assessments across fintech, healthcare, and field service enterprises.

Four weeks from this call, a Wednesday squad is shipping your mobile app. 30 minutes confirms the team shape and start date.

Get your start date

Shipped for enterprise and growth teams across US, Europe, and Asia

American Express
Visa
Discover
EY
Smarsh
Kalshi
BuildOps
Ninjavan
Kotak Securities
Rapido
PharmEasy
PayU
Simpl
Docon
Nymble
SpotAI
Zalora
Velotio
Capital Float
Buildd
Kunai
Kalsi