Canada’s economy is being reshaped by technologies that traditional statistics barely register. As Erik Bohlin observed in his opening remarks, relying on legacy measures like GDP to understand an AI-enabled economy is like navigating a modern megacity with a 1930s street map: key roads, neighbourhoods, and connections are missing. Bohlin linked this metaphor to a deeper idea about maps themselves, noting that some older maps were not only outdated but also shaped by what they chose to include or exclude. In the age of AI, he argued, Canada needs new conceptual maps of its economy. Romel Mostafa extended this image by reminding participants that policymakers depend on measurement the way engineers depend on sensors; if those sensors are calibrated for another era, even well-intentioned policy will miss the mark.
These questions framed the December 3, 2025 workshop Measuring the Digital Economy in the Age of AI, held at the Ivey Donald K. Johnson Centre in Toronto and convened by Erik Bohlin, Ivey Chair in Telecommunications Economics, Policy and Regulation, and Romel Mostafa, Director of the Lawrence National Centre for Policy and Management. The workshop explored how Canada can build a more complete measurement ecosystem—one that can capture not only production, but also usage, skills, consumer welfare, infrastructure impacts, and the growing strategic importance of sovereign AI capabilities.
Rethinking the digital divide
The discussion began by reframing the digital divide as a question of how people use technology, not just who is connected. Shane Greenstein drew on Microsoft telemetry from roughly 40 million laptops to show that once people are online, their skills and behaviours diverge in ways that standard access statistics simply miss. His team’s indices of basic and advanced digital skills point to only a modest urban–rural gap, but reveal striking pockets of low engagement inside major cities and unexpected high-use communities outside them.
Equally important, the drivers of different kinds of use are not the same. Income and education are closely tied to basic digital activities, while more advanced content-creation work clusters in urban areas and follows its own dynamics. The key insight is that connectivity is only the starting point; without targeted efforts to build skills, encourage richer forms of engagement, and respond to local patterns of use, infrastructure investments alone will leave major divides intact.
Lessons from China’s digital engines
A complementary perspective came from Yu-li Liu, who examined how China’s vast platform economy and livestreaming sector complicate traditional measurement. She described platforms such as WeChat, Alibaba, and Meituan as an “operating system” for everyday life, handling communication, payments, logistics, and local services at national scale.
Livestreaming commerce sits on top of this infrastructure, turning shopping into real-time performance. Hosts demonstrate products, interact with audiences, and drive immediate purchases, often selling thousands of items in minutes, while simultaneously building community and trust. Liu argued that this dual-engine model (platform infrastructure plus livestreamed engagement) creates powerful feedback loops that drive innovation and consumption, but largely escape production-based measures like GDP.
Much of the value is generated through free services, recommendation algorithms, and behavioural data rather than priced output. That reality underscores the need for measurement frameworks that can account for data, algorithms, trust, and ecosystem effects, as well as the growing importance of privacy and data integrity in all markets, including Canada’s.
Measuring consumer benefits from digital tools and AI
Avinash Collis zeroed in on a profound measurement paradox: the digital revolution’s biggest value, consumer welfare from free tools, remains almost entirely invisible to policymakers. Echoing Robert Solow’s classic line that computers changed everything except the productivity numbers, Collis showed how GDP captures production well but systematically misses the massive benefits people extract from zero-price services like search, messaging, and now generative AI.
His method, large-scale choice experiments, asks people how much money they would need to be paid to give up tools like Google Search or ChatGPT for a month. These responses let researchers reverse-engineer demand curves and estimate “consumer surplus” even where no market price exists. The numbers are staggering: digital platforms delivered roughly $1.3 trillion in U.S. consumer benefits and $125 billion in Canada in 2022 alone. Early estimates for generative AI suggest users gained far more value ($97 billion) than firms captured in revenue ($7 billion), a lopsided consumer–producer split that mirrors Bill Nordhaus’s historical finding that innovators capture only a small fraction of the value of major innovations. It suggests that citizens, not corporations, reap most of the gains from digital transformation.
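The arithmetic behind this approach can be illustrated with a minimal sketch. Everything below is hypothetical (the WTA figures and user base are invented for illustration, not drawn from Collis’s studies): each respondent states the smallest monthly payment they would accept to give up a free tool, that willingness-to-accept (WTA) proxies their consumer surplus, and aggregating across users yields a welfare estimate of the kind cited above.

```python
import statistics

# Hypothetical WTA responses in dollars per month (illustrative values only):
# the smallest payment each respondent would accept to forgo the tool.
wta_responses = [5, 12, 20, 20, 35, 50, 80, 120, 150, 300]

def share_declining(offer, responses):
    """Share of users who refuse a cash offer and keep the tool.

    Respondents whose WTA exceeds the offer decline it; varying the offer
    traces out a demand curve for a zero-price good.
    """
    return sum(1 for w in responses if w > offer) / len(responses)

# Mean WTA approximates average per-user surplus; scaling by a
# (hypothetical) user base gives an aggregate annual welfare estimate.
mean_surplus = statistics.mean(wta_responses)   # dollars per user per month
users = 1_000_000                               # hypothetical user base
aggregate = mean_surplus * users * 12           # dollars per year

print(f"Mean WTA: ${mean_surplus:.2f}/month")
print(f"Share keeping the tool at a $25 offer: {share_declining(25, wta_responses):.0%}")
print(f"Aggregate annual surplus: ${aggregate:,.0f}")
```

In practice these studies use incentivised choices and representative samples rather than simple averages, but the core logic is the same: stated trade-offs stand in for the prices that free services never generate.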
The deeper insight goes to the heart of policy: GDP isn’t broken, but over-relying on it blinds regulators to where value actually flows. Collis’s “dashboard” prescription (pairing production statistics with direct welfare and well-being measures) equips governments to tackle AI’s real questions: Does it genuinely improve lives? How does it reshape inequality? And are the productivity gains worth potential cognitive or social costs? Without this balanced view, nations risk regulating shadows while the real economic action happens off the balance sheet.
Measuring AI in a complex ecosystem
If consumer benefits are hard to see, the impact of AI on markets and institutions is even harder. Drawing on work at the Weizenbaum Institute, Volker Stocker described a “paradox of knowledge”: policy actors often do not know what they need to measure, because many AI-driven dynamics are opaque by design or deeply embedded in digital infrastructure.
He highlighted three layers of complexity. First, AI systems built on deep neural networks are inherently hard to interpret, limiting transparency even when companies are willing to share information. Second, AI literacy has not kept pace with usability; natural language interfaces make these systems easy to adopt but difficult to use wisely. Third, as AI-generated content becomes harder to distinguish from human output, it becomes difficult to assign responsibility or trace causal chains in areas like information integrity and political discourse.
Stocker also drew attention to the structure of AI ecosystems. Large cloud and platform providers now occupy central positions in the “substrate” layer (data centres, networks, and core models), on top of which many other firms build applications. These arrangements, often involving overlapping partnerships and forms of co-opetition, blur familiar boundaries between infrastructure and service, between domestic and foreign assets, and between private and quasi-public functions.
In such a setting, traditional metrics, such as market shares, sectoral output, or advertising impressions, capture only part of the story. Stocker’s own work on sycophancy in large language models illustrates another challenge: researchers can use the same term, measure different things, and arrive at incompatible conclusions, making it difficult for regulators to compare evidence or design interventions. He argued that responding to AI’s measurement challenges will require a multi-stakeholder ecosystem, combining access to data, methodological expertise, and regulatory authority, rather than relying on any single institution.
Where digital metrics stand today
The panel moderated by Jennifer Withington highlighted a tension at the heart of Canada’s AI statistics: official numbers understate how deeply AI is already woven into everyday business tools. Statistics Canada’s new TechStat program shows relatively few firms reporting AI use, yet practitioners pointed out that many of those same firms are using AI inside platforms like CRM systems and e-commerce tools without recognising or reporting it as “AI.” The emerging insight is that Canada is not just facing an adoption gap, but a recognition gap, and measurement systems that rely on self-report risk missing much of what is happening under the hood.
Panellists also noted that Canada looks stronger on capability than on diffusion: it performs well on indicators such as AI-related patents, but lags on broad, economy-wide adoption. That suggests the real policy challenge is turning concentrated strengths into widespread productivity gains. To do that, metrics must move beyond a simple yes/no on AI use and begin to capture intensity, use cases, and impacts on jobs and firm performance. At the same time, the group stressed that definitions and indicators need to be stable enough to track change over time; if measures are constantly redesigned to chase the next AI buzzword, Canada will lose the ability to see whether policies are actually closing gaps or simply renaming them.
Reimagining Canada’s digital economy
A second panel cut to the strategic heart of Canada’s digital opportunity: leveraging AI not just as a tool, but as a foundation for competitive advantage in global value chains. Representatives from Cisco, TELUS, and Bell described AI’s practical integration into networks (self-healing systems, automated configuration, and intelligent customer support), as well as emerging “AI fabrics” and sovereign data centres that keep compute power and data under domestic control. The insight here is sharp: Canada’s true edge lies in bundling clean energy, elite technical talent, and geopolitical trust into a package that attracts AI infrastructure investment, turning natural advantages into exportable “green compute” and trustworthy services.
Yet sovereignty emerges less as technological autarky than as a deliberate design choice: prioritising privacy-by-design, domestic partnerships, and control over where critical assets are located and operated, even while sourcing components globally. This reframing challenges traditional measurement paradigms: policymakers must now track not only adoption rates, but also ownership concentration, supply-chain vulnerabilities, jurisdictional risks, and carbon footprints, ensuring statistics reveal who really controls the digital economy’s plumbing and how sustainably it flows.
From connectivity to effective participation
In his closing remarks, Ian Scott cut through a persistent policy misconception: access is not the same as participation. Canada has achieved impressive broadband coverage, with roughly 96 per cent of households able to reach high-speed service, but true digital inclusion hinges on affordability, device ownership, skills, and trust, which remain unevenly distributed across communities. The insight is stark: recent gains in rural and Indigenous connectivity are real, yet without granular data on who stays offline, why, and at what cost to opportunity and equity, governments risk celebrating infrastructure wins while missing deeper human barriers.
Scott’s deeper challenge was regulatory. Just as GDP is rightly being augmented with consumer welfare and well-being measures, telecom rules forged in an analog era cannot govern AI-driven ecosystems. The path forward involves parallel evolution: new data transparency mandates paired with consumer safeguards, shifting focus from rate regulation to ecosystem health, ensuring platforms, skills pipelines, and trust mechanisms support broad participation rather than concentrating gains.
The workshop converged on a precise agenda: Canada needs measurement systems that illuminate usage patterns, risk distributions, and infrastructure control, backed by cross-sector data-sharing and durable indicators that outlast technology fads. The ultimate insight is strategic restraint: don’t scrap the old maps in haste, but redraw them incrementally to reveal the digital economy’s true contours, guiding policies that build the inclusive, innovative Canada now within reach.
This article draws on perspectives compiled by Noor Us Sahar, MSc '25, from the December 3, 2025 workshop Measuring the Digital Economy in the Age of AI, the fifth in Ivey’s telecommunications policy dialogue series, held at the Ivey Donald K. Johnson Centre in Toronto and convened by Erik Bohlin, Ivey Chair in Telecommunications Economics, Policy and Regulation, and Romel Mostafa, Director of the Lawrence National Centre for Policy and Management. The workshop brought together more than 65 participants, including international scholars, senior statisticians, regulators, and industry leaders, and builds on a multi-year series of conversations on the future of telecommunications policy, infrastructure, and digital transformation.
Previous workshops in this series include:
– May 2025 | Innovation and Telecommunications Policy: Shaping Tech, Markets & Networks
– October 2024 | New Frontiers for Broadband and Resilience in Telecommunications: Satellites and Beyond
– May 2024 | Building Resilience in Telecommunications – In Canada and Beyond
– October 2023 | Comparative Perspectives on Broadband Regulation and Access
The Ivey Telecom Workshops are co-funded by the Lawrence National Centre for Policy and Management and the Ivey Chair in Telecommunications Economics, Policy and Regulation. The Centre gratefully acknowledges the generous support of the Power Corporation of Canada, the Jack Lawrence Family, and the Mitchell and Kathryn Baran Family Foundation. The Chair is funded by Ivey Business School as well as by support from Bell Canada and TELUS to Western University.