The ROI Paradox: Two Truths About AI's Progress
Bridging the gap between AI's promise and AI's profit requires redefining ROI, investing strategically and empowering middle management. Success lies in integrating AI into core systems, fostering trust and treating AI as a capital investment, not an experiment.
Across industries, boards and C-level teams, everyone is asking the same question: When will AI start showing up in the numbers?
Two major 2025 studies seem to offer conflicting answers.
Maybe you have seen the data. The Wharton-GBK "Accountable Acceleration" report finds 74 percent of enterprises claiming positive returns from generative AI (GenAI). In sectors such as technology, banking and professional services, that number climbs above 80 percent.
Yet MIT Media Lab's much-publicized Project NANDA found that only 5 percent of organizations have realized measurable, sustained impact on their income statements. Of 300 public deployments studied, the vast majority stalled at promising rather than profitable.
Both are true. They're simply capturing different points on the same climb — from promise to profit.
Wharton-GBK captures momentum — perceived progress. MIT measures conversion — actual financial impact[1]. The challenge is bridging the gap between the two.
Both perspectives matter, but only one changes shareholder value.
Why the numbers diverge
The gap between optimism and outcome isn't a contradiction; it's a map of where enterprises stall. Four forces explain the divide:
- Different definitions of ROI
- Different populations
- Different phases of adoption
- Different cultural lenses
We explore each below, along with WWT's recommendation for each.
Different definitions of ROI
Wharton-GBK includes productivity and enterprise efficiency gains as legitimate returns. If time is saved or quality improves, that's progress.
MIT's bar is higher: Unless results appear in the income statement — reduced cost-to-serve, reduced COGS, higher margins, faster cash flow — it doesn't count.
Recommendation: Define ROI in hard P&L terms.
Different populations
Wharton-GBK surveyed mature, U.S.-based firms already experimenting with GenAI. MIT looked globally, including companies still at the pilot stage. The result: Wharton's "glass half-full" versus MIT's "barely started, glass half-empty."
Recommendation: Know where you are on the maturity curve and sequence investments accordingly.
Different phases of adoption
The Wharton-GBK study sees firms tying spend to KPIs, signaling accountability. Meanwhile, MIT sees a stall between pilot and scale — the value gap where most AI initiatives lose momentum.
The move from pilot to successful production is often described as crossing a chasm. Crossing it requires specialized skills that are in short supply in many enterprises. Today, these critical skills often reside in partner firms, AI development shops or specialized consulting organizations.
WWT provides a range of AI expertise honed over 12 years and hundreds of successful machine learning, computer vision, recommendation engine, synthetic data, digital twin and GenAI deployments.
Recommendation: Invest in capabilities to scale beyond prototypes.
Different cultural lenses
Executives often see acceleration; operators feel friction. Leaders report "significant ROI" much more often than their teams. The difference? Optimism versus operational drag.
For boards, this tension is instructive. Progress feels visible at the top; integration pain is real at the middle. Sustainable ROI requires bridging both worlds.
Culture and trust are essential. We know high-trust organizations consistently report greater enthusiasm and adoption of AI. Employees need to feel safe experimenting with new tools, confident that their contributions matter and supported in continuous learning.
Recommendation: Empower middle managers and build trust.
What the top 5% get right
Among the organizations translating AI into profit, seven traits consistently emerge. These traits also reflect WWT's experience developing hundreds of successful AI solutions since 2014.
They start with a business problem, not a technology demo
Winning enterprises begin with a P&L question (e.g., reducing claims cycle time, lowering churn or automating billing), not by asking "What can this model do?"
Every project has a business owner, a baseline and a measurable KPI.
Example: WWT's AI Studio framework helps clients align strategic use cases directly with business outcomes and quantify value upfront.
Use case
For example, an investment advisory firm with $40B in AUM used AI to automate client meeting and portfolio compliance preparation, reducing prep time by nearly 90 percent. The solution improved marketing agility, enhanced the quality of client-facing materials and strengthened existing relationships.
They integrate, not experiment
The most successful AI efforts don't sit on the sidelines; they're built into the core systems that run the business: ERP, CRM, call centers and more. Anything "adjacent to the workflow" is on borrowed time. In fact, one study found that 80 percent of failed pilots remained exactly there: adjacent to the workflow.
Example: Service design principles and workflow analysis enable organizations to embed AI into operational infrastructure at scale, not just as a prototype. These analyses reduce friction, and with upfront human investment in monitoring, refinement and guardrails, the embedded systems become reliable within 60-90 days.
They buy fast, then build smart
Smaller firms often reach ROI faster because they deploy pre-built tools first, then customize once value is proven. The graveyard of AI initiatives is full of bespoke projects that died waiting for APIs and data harmonization.
Example: The decision to build or buy should align with your organization's strategic goals, core competencies and the specific outcomes you aim to achieve with GenAI. For instance, if you're convinced GenAI success will be central to your competitive advantage, building a custom solution may be more appropriate. Conversely, for non-core applications, buying a standalone GenAI solution is likely the best route.
They empower middle managers
AI adoption lives or dies in the workflows owned by middle management. When governance is overly centralized, innovation suffocates. Leading companies push decision rights down while maintaining clear guardrails.
Improving adoption at this level is accelerated when employees are rewarded for integrating AI into their daily work. Financial incentives — such as bonuses or recognition tied to verified AI-driven improvements — motivate teams to embrace new tools and processes.
Equally important, involving employees directly in the "build" or "buy" decision for AI solutions fosters a sense of ownership. When users participate in selecting or shaping the tools they'll use, adoption becomes a natural outcome. Which makes sense: it's hard to reject a solution you helped approve.
Example: WWT emphasizes cross-functional stakeholder alignment in its AI Studio workshop model, bringing together business, technology and data leadership. Some AI leaders have already operationalized federated AI platforms with approved tools, guardrails, security layers, role-based access control (RBAC) and AI agents available to citizen developers. Those developers optimize tasks, streamline roles, enable greater human achievement, improve work quality and accelerate time-to-decision.
These are examples of steps that move organizations from internal process optimization to acceleration and scale, where significant ROI is achieved.
They measure like finance, not IT
The leaders treat AI like a capital investment rather than an experiment. "Time saved" is translated into fully loaded labor cost; "faster insights" are translated into working capital velocity and cost of capital. Monthly dashboards tie results directly to the CFO, not to the AI lab.
Example: WWT's "Calculating the Financial Potential of AI in the Contact Center" research underscores use-case-specific ROI calculators that map to cost savings and efficiency gains.
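Translating "time saved" into fully loaded labor cost is simple arithmetic, and it is worth making explicit. A minimal sketch in Python, with all figures hypothetical (not from any WWT calculator):

```python
# Hypothetical figures: translate "time saved" into fully loaded labor cost,
# then net out build and run costs to get a P&L-style ROI, the way finance
# would report it to the CFO.

def ai_roi(hours_saved_per_user_per_week: float,
           users: int,
           fully_loaded_hourly_cost: float,
           annual_run_cost: float,
           annual_build_cost: float) -> dict:
    """Annualized ROI of an AI deployment in hard-dollar terms."""
    weeks_worked = 47  # assumption: ~47 productive weeks per year
    gross_benefit = (hours_saved_per_user_per_week * weeks_worked
                     * users * fully_loaded_hourly_cost)
    total_cost = annual_run_cost + annual_build_cost
    net_benefit = gross_benefit - total_cost
    return {
        "gross_benefit": round(gross_benefit),
        "net_benefit": round(net_benefit),
        "roi_pct": round(100 * net_benefit / total_cost, 1),
    }

# Example: 2 hours/week saved for 500 users at $85/hour fully loaded,
# against $900K annual run cost plus $300K amortized build cost.
result = ai_roi(2.0, 500, 85.0, 900_000, 300_000)
print(result)
```

The point of the exercise is the denominator: once benefits and costs are both expressed in dollars, AI spend can be compared against any other capital allocation.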
They sequence use cases strategically
Data-heavy, process-driven sectors see returns first because their work is easier to codify. Boards should follow that logic: Begin with structured workflows (procurement, legal, finance) before venturing into creative or physical domains.
A well-governed roadmap and staged investments in AI development build organizational trust in AI. They also build skills, both among the teams responsible for developing AI applications and among those curating the data, tools and platforms that let line-of-business (LOB) teams develop AI applications safely.
Use case
For example, a private equity firm prioritized a structured, data-driven workflow as an early AI use case. It integrated an AI knowledge assistant that reduced research time from hours to minutes and improved decision quality through faster, contextual insights.
They build trust as a capability
ROI-positive firms invest in training, transparency and "AI citizenship." Train teams, explain models, surface bias. Employees who trust the system use it more; trust drives usage, and usage compounds ROI.
Examples: WWT's AI Studio, AI Foundry and AI Security teams emphasize governance, security and compliance up front as part of the AI roadmap, helping establish the credibility and security needed for scaled deployment.
As of October 2025, more than 3,000 WWT employees have earned their AI Driver's License, a form of adoption training that encourages AI literacy, responsible AI usage, and continuous feedback to improve AI performance.
A board playbook for sustainable AI ROI
Boards don't need to understand every model parameter. But they do need a disciplined governance framework to ensure AI creates lasting financial value.
Require an "ROI Charter" for every AI initiative
One page, four essentials, succinctly stated:
- The business outcome (cost, revenue, quality or risk)
- The accountable owner
- Baseline and target metrics
- Timeline for ROI
The CIO, CAIO (Chief AI Officer) and CFO should all sign off. This merges technology delivery with financial accountability.
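A charter this small can even be captured as structured data, so every initiative reports the same fields and funding is gated on sign-off. A minimal sketch (field names and the sign-off rule are illustrative, not a WWT template):

```python
from dataclasses import dataclass, field

@dataclass
class ROICharter:
    """One-page ROI Charter: four essentials plus required sign-offs."""
    business_outcome: str               # cost, revenue, quality or risk
    accountable_owner: str              # the single business owner
    baseline_metric: float              # where the KPI stands today
    target_metric: float                # where it must land
    roi_timeline_months: int            # deadline for verified ROI
    sign_offs: set = field(default_factory=set)

    def is_approved(self) -> bool:
        # Technology and finance must both sign before funding.
        return {"CIO", "CAIO", "CFO"} <= self.sign_offs

charter = ROICharter(
    business_outcome="Reduce claims cycle time (cost)",
    accountable_owner="VP, Claims Operations",
    baseline_metric=14.0,   # days per claim today
    target_metric=9.0,      # days per claim after deployment
    roi_timeline_months=9,
    sign_offs={"CIO", "CAIO", "CFO"},
)
print(charter.is_approved())  # True
```

Whether the charter lives in a slide, a form or a system of record matters less than the discipline: no sign-offs, no funding.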
Move AI oversight under the CEO, CFO or COO
AI is now a financial transformation tool, not an IT experiment. A C-level chaired steering committee ensures ROI is reviewed with the same rigor as CapEx or M&A.
Standardize ROI reporting
A common reporting template allows comparison across units, capturing spend, realized benefit and verified ROI over time.
Embed AI into risk and audit committees
Model drift, data leakage and ethical bias are not theoretical — they're operational risks. Audit committees should mandate periodic validation, data lineage documentation and explainability checks.
Fund transformation, not tools
Boards should direct 70 percent of AI budgets toward process redesign and workforce enablement. Buying models is easy; integrating them into the enterprise is the work.
Sequence investment to build learning curves
The most mature organizations evolve in three stages:
- Optimize: Pilot measurable back-office processes.
- Accelerate: Scale proven workflows.
- Transform: Integrate autonomous decision-making tied to customer outcomes.
Tie compensation and recognition to verified ROI
Just as metrics for many transformation initiatives have moved from progress reports into bonus plans, AI accountability should follow suit. Link a portion of variable pay to cost-reduction or efficiency gains validated by finance, and recognize the individuals and teams who model these behaviors by using AI to achieve results.
The new leadership equation
AI is not just automation — it's the reorganization of enterprise intelligence.
Wharton-GBK's data shows AI use is nearly universal; MIT reminds us that effectiveness is not. Technology alone doesn't deliver ROI; it just scales whatever is already there, and without structure it amplifies inefficiency.
ROI isn't a technological milestone — it's a test of management discipline.
The shift looks like this:
| Old model | New model |
|---|---|
| AI as experiment | AI as production-grade system |
| IT-led innovation | CEO[2]-, CFO- or COO-governed transformation |
| Productivity anecdotes | Financial dashboards |
| Tech budget | Enterprise capital allocation |
| Model accuracy | Business-outcome accuracy |
| Centralized control | Distributed empowerment |
The future belongs to firms that redesign how human tasks, roles, intelligence and accountability flow through the organization.
The board and leadership AI Scorecard
Boards and executive leadership teams can use a simple scorecard to assess readiness and progress:
| Dimension | Leading indicators | Board action |
|---|---|---|
| Strategy | AI linked to KPIs; ROI Charters in place | Mandate ROI Charter approval per investment |
| Finance | Monthly ROI to CFO | Embed AI in management reporting |
| Governance | Risk frameworks applied to AI | Extend audit scope |
| Operations | AI embedded in workflows | Require integration milestones |
| Culture | Workforce trained; trust measured | Fund enablement and adoption |
| Innovation | Continuous learning loops | Support R&D with ROI discipline |
If it cannot be measured, it cannot be scaled.
From perception to profit
AI's first act was enthusiasm, CapEx investment in AI factories and foundation models, and widespread consumer adoption. Its second must be improved economics at an enterprise scale.
The Wharton-GBK study shows progress; the MIT study shows how far there is still to go. The opportunity lies not in choosing between them, but in connecting them — by turning perception into profit through disciplined governance.
For boards, the mandate is clear:
- Define ROI in P&L terms.
- Demand ROI Charters for every AI dollar.
- Shift governance toward finance.
- Invest in people and process, not just platforms.
- Measure relentlessly.
The numbers start moving when you treat AI like a capital program.
AI's future won't belong to the firms that build the biggest models, but to those that turn intelligence into income.
Thanks to Travis Diepenbrock and Nani McDaniel for their contributions to this WWT Research Note.
References
[1] The MIT Media Lab study analyzed results during a period of six months. For very large enterprises with tens of thousands of employees, six months is a brief period in which to achieve measurable ROI.
[2] WWT's AI transformation is CEO-led and includes weekly updates by its executive leadership team detailing how AI is being used by each business function.
This report may not be copied, reproduced, distributed, republished, downloaded, displayed, posted or transmitted in any form or by any means, including, but not limited to, electronic, mechanical, photocopying, recording, or otherwise, without the prior express written permission of WWT Research.
This report is compiled from surveys WWT Research conducts with clients and internal experts; conversations and engagements with current and prospective clients, partners and original equipment manufacturers (OEMs); and knowledge acquired through lab work in the Advanced Technology Center and real-world client project experience. WWT provides this report "AS-IS" and disclaims all warranties as to the accuracy, completeness or adequacy of the information.