Writing

How to Prioritize Competing Mobile Projects When Budget Covers Half of Them

You have seven mobile projects on the whiteboard and budget for three. The framework below makes the decision defensible — to your CFO, your board, and yourself.

Shounak Mulay · Technical Lead, Wednesday Solutions
8 min read · Published Feb 22, 2026 · Updated Feb 22, 2026
4x faster with AI · 2x fewer crashes · 10x more work, same cost · 4.8 on Clutch
Trusted by teams at American Express, Visa, Discover, EY, Smarsh, Kalshi, BuildOps

Seven mobile projects have been proposed. Budget and team capacity support three. The question — which three — is harder than it looks because every project has a sponsor, every sponsor believes their project is the most important, and the criteria used to evaluate competing projects are rarely agreed on before the list exists.

The result is a decision made by a combination of stakeholder influence, recency bias, and the default assumption that the most visible projects are the most valuable. That combination produces a selection that is hard to defend six months later when one of the funded projects underdelivers and one of the deferred ones turns out to have been urgent.

The framework below makes the selection defensible. Not by removing judgment, but by making it structured.

Key findings

The most common reason the wrong three projects get funded is that the selection criteria are not defined before the list exists. Once proposals are on the table, sponsors argue for their own projects rather than evaluating against a shared standard.

Five criteria cover the decision for most enterprise mobile portfolios: business impact, cost of deferral, vendor readiness, dependency order, and timeline constraint. Each can be scored from 1 to 3. The top three scorers get funded.

The score does not make the decision — it structures the conversation that leads to it. The output is a prioritisation that all relevant stakeholders have seen and that is documented well enough to survive a quarterly review.

Why the wrong three often get picked

Three dynamics consistently produce poor portfolio decisions.

The loudest sponsor wins. The project with the most senior or most persistent sponsor gets funded regardless of its relative priority. This is not cynical — it is a natural outcome of a process that does not have shared scoring criteria. When there is no agreed standard, influence fills the gap.

The most recently proposed project has an advantage. Recency bias means the project presented in last week's leadership meeting is more vivid than the one proposed three months ago. The older project may be more important. It is less top of mind.

Complexity is underestimated for ambitious projects and overestimated for straightforward ones. A project that sounds impressive in a presentation tends to be scoped optimistically. A project that sounds simple tends to be scoped conservatively. Both estimates are wrong in predictable directions. Without a vendor readiness assessment in the scoring, neither error is visible at the decision point.

Five criteria that make the decision

Criterion 1: Business impact. What is the measurable outcome if the project ships? How directly does that outcome connect to a metric the board tracks — revenue, cost, user retention, compliance? Score 3 for a direct, measurable connection to a board-level metric. Score 2 for an indirect connection. Score 1 for a project whose outcome is internal efficiency without a clear external metric.

Criterion 2: Cost of deferral. What does it cost to not do this project for the next 12 months? For some projects, deferral has no cost — the opportunity is not time-sensitive. For others, deferral means a competitor ships first, a regulatory deadline is missed, or a revenue line that could have been opened in Q2 opens in Q4 instead. Score 3 for a hard external deadline or direct competitive pressure. Score 2 for a soft deadline with a real cost. Score 1 for an internally motivated project with no urgency driver.

Criterion 3: Vendor readiness. Does your current vendor have a demonstrable track record of delivering this specific type of project? A vendor that has shipped a field operations app before will deliver a second one on a more reliable timeline and budget than a vendor attempting it for the first time. Score 3 if the vendor has shipped the same class of project before. Score 2 if the project is adjacent to prior work. Score 1 if it is novel territory for the vendor.

Criterion 4: Dependency order. Does this project need to be done before another project on the list? Some projects are enablers — they create the foundation that makes a subsequent project faster and cheaper. Shipping an offline data layer before building a field reporting app means the reporting app ships on a shorter timeline and with less risk. Score 3 if the project is a prerequisite for one or more other projects on the list. Score 2 if it is complementary. Score 1 if it is independent.

Criterion 5: Timeline constraint. Is there a specific date by which this project must be in production — a product launch, a board presentation, a regulatory deadline? Score 3 for a fixed, non-negotiable date within the next six months. Score 2 for a target date that has meaningful consequences if missed. Score 1 for a project with an open-ended timeline.
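To keep scorers applying the same definitions, the five rubrics can be captured in a small lookup table. This is a minimal sketch: the criterion names and wording below paraphrase the criteria above, and the structure itself is illustrative rather than prescribed by the framework.

```python
# Scoring rubric for the five criteria. Each criterion maps a score (1-3)
# to the condition that earns it. Wording paraphrases the article's
# definitions; the dict structure is an illustrative choice.
RUBRIC = {
    "business_impact": {
        3: "direct, measurable link to a board-level metric",
        2: "indirect link to a board-level metric",
        1: "internal efficiency only, no clear external metric",
    },
    "cost_of_deferral": {
        3: "hard external deadline or direct competitive pressure",
        2: "soft deadline with a real cost",
        1: "internally motivated, no urgency driver",
    },
    "vendor_readiness": {
        3: "vendor has shipped the same class of project",
        2: "project is adjacent to the vendor's prior work",
        1: "novel territory for the vendor",
    },
    "dependency_order": {
        3: "prerequisite for other projects on the list",
        2: "complementary to other projects",
        1: "independent",
    },
    "timeline": {
        3: "fixed, non-negotiable date within six months",
        2: "target date with meaningful consequences if missed",
        1: "open-ended timeline",
    },
}

def describe(criterion: str, score: int) -> str:
    """Explain what a given score means for a criterion."""
    return RUBRIC[criterion][score]

print(describe("vendor_readiness", 2))
# → project is adjacent to the vendor's prior work
```

Publishing the rubric alongside the scores is what lets stakeholders challenge a score against a shared definition rather than against each other.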

How to score the projects

Build a table with each project as a row and each criterion as a column. Score each cell from 1 to 3. Sum the rows. The three projects with the highest scores get funded.

Project     Business impact   Cost of deferral   Vendor readiness   Dependency order   Timeline   Total
Project A          3                  3                  2                  3              2         13
Project B          2                  2                  3                  1              3         11
Project C          3                  1                  2                  2              1          9
Project D          1                  3                  1                  1              3          9
Project E          2                  2                  2                  2              2         10
Project F          3                  2                  3                  2              1         11
Project G          1                  1                  3                  1              1          7

In this example, Projects A, B, and F are funded. C, D, and E are deferred and revisited next quarter. G is deferred without a near-term review.
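The mechanics of the table are simple enough to sketch in a few lines of Python. The scores below reproduce the example table with equal weights across criteria, as in the framework; only the totals and the ranking are computed.

```python
# Score each project 1-3 on five equally weighted criteria, sum the row,
# and fund the top three. Scores reproduce the example table above.
CRITERIA = ["business_impact", "cost_of_deferral", "vendor_readiness",
            "dependency_order", "timeline"]

projects = {
    "A": [3, 3, 2, 3, 2],
    "B": [2, 2, 3, 1, 3],
    "C": [3, 1, 2, 2, 1],
    "D": [1, 3, 1, 1, 3],
    "E": [2, 2, 2, 2, 2],
    "F": [3, 2, 3, 2, 1],
    "G": [1, 1, 3, 1, 1],
}

def rank(projects):
    """Return (project, total) pairs sorted by total score, highest first."""
    totals = {name: sum(scores) for name, scores in projects.items()}
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

funded = [name for name, total in rank(projects)[:3]]
print(funded)  # → ['A', 'B', 'F']
```

Ties (B and F both score 11 here) are exactly the cases the framework hands back to judgment: the score structures the conversation, it does not replace it.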

If you are working through a mobile project portfolio decision and want a second opinion on the scoring, a 30-minute call with a Wednesday engineer covers the assessment.

Book my call

What the score does not tell you

The scoring framework is an input to the decision, not the decision itself. Three things fall outside it.

Interdependencies between funded projects. If two of the three funded projects require the same specialised engineering capability, they cannot run simultaneously on a single vendor team. The scoring does not account for resource contention. After scoring, sequence the funded projects against the available vendor capacity.
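A quick way to surface that contention is to list each funded project's required capabilities and intersect them pairwise. The capability sets below are hypothetical examples, not part of the framework; any overlap flags a pair that must be sequenced rather than run in parallel.

```python
# Flag funded projects that compete for the same specialised capability.
# Capability sets per project are hypothetical, for illustration only.
from itertools import combinations

capabilities = {
    "A": {"flutter", "offline-sync"},
    "B": {"flutter", "payments"},
    "F": {"android", "camera"},
}

# Pairs needing the same capability cannot run simultaneously
# on a single vendor team, whatever their scores were.
conflicts = [
    (p, q, sorted(capabilities[p] & capabilities[q]))
    for p, q in combinations(capabilities, 2)
    if capabilities[p] & capabilities[q]
]
print(conflicts)  # → [('A', 'B', ['flutter'])]
```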

Regulatory risk that is not visible from the proposal. A project that scores well on business impact and timeline may have a compliance implication that was not apparent when the proposal was written. HIPAA, SOC 2, PCI DSS, and App Store requirements for specific feature types add timeline and cost. Include a compliance review step before any project moves from approved to scoped.

The difference between a well-scored project and a well-scoped one. A project can score well on business impact and cost of deferral and still be poorly scoped — with an estimate that does not account for the real complexity. A high score gets a project funded; a thorough scoping process gets it delivered on the timeline and budget the business expects.

Making the decision defensible

The scoring table, filled in before the prioritisation conversation, changes the dynamic in the room. Instead of each sponsor arguing for their project, the conversation becomes: do we agree on these scores? Are there criteria we have missed?

Share the table with the relevant stakeholders before the decision is made. Give them the criteria definitions and ask them to challenge the scores, not the outcome. A score that survives stakeholder challenge is a score the group owns. A decision that the group owns survives the quarterly review without being relitigated.

Document the final table and the three funded projects with a brief note on why the deferred projects were deferred. Not an apology — a record. "Project C was deferred because it has no timeline constraint and low cost of deferral. It will be revisited when vendor capacity opens in Q3." That sentence is enough to make the decision defensible if anyone asks.

The goal is not a perfect prioritisation. It is a defensible one — made against agreed criteria, shared with the relevant stakeholders, and documented well enough to survive scrutiny six months later.

Wednesday helps enterprise teams scope and sequence mobile project portfolios before vendor selection. A 30-minute call covers the framework and applies it to your current project list.

Book my call


The writing archive has vendor comparison guides, cost benchmarks, and decision frameworks for every stage of the enterprise mobile buying process.

Read more decision guides

About the author

Shounak Mulay

Technical Lead, Wednesday Solutions

Shounak is a Technical Lead and mobile strategist at Wednesday Solutions with hands-on depth in Android and Flutter. He has shipped mobile products and enterprise AI solutions across fintech trading, on-demand logistics, and edtech, and brings architectural depth and product strategy to engagements where mobile is central to the business model.

Four weeks from this call, a Wednesday squad is shipping your mobile app. 30 minutes confirms the team shape and start date.

Get your start date

Shipped for enterprise and growth teams across US, Europe, and Asia

American Express
Visa
Discover
EY
Smarsh
Kalshi
BuildOps
Ninjavan
Kotak Securities
Rapido
PharmEasy
PayU
Simpl
Docon
Nymble
SpotAI
Zalora
Velotio
Capital Float
Buildd
Kunai
Kalsi