One of the central areas of discussion at this year’s World Economic Forum in Davos was the one topic that has captured international media over the past four years: artificial intelligence. As the more than 1,060,000,000 Google search results for an “artificial intelligence” query attest, it has firmly established itself as a central element of the global digital imagination, hailed by some as a world-changing innovation and flagged by others as a source of growing concern for policymakers and regulators.
However, the intensity of this worldwide fascination is hardly driven by altruistic ambitions such as simplifying workers’ lives or freeing up people’s time. At Davos, we caught a revealing glimpse of where the real interests of heads of state and AI corporations converge, set against a backdrop of hype, incomplete information, and subtle promotional content that makes it increasingly difficult for users to grasp the full picture in a single sitting, or even a single article.
At Davos 2026, the World Economic Forum (WEF) aimed, among other goals, to position itself as the stage where this noise could be clarified and experts could offer insight into the financial and industrial future of the technology in all its varied applications. Instead, it exposed a deep fault line between AI developers and leading companies on one side, and ever more impatient shareholders and investors eager for a return on their investments on the other, exemplified by events like the resignation of Zoë Hitzig, a former OpenAI researcher, over ethical concerns following the forum.
Davos brimmed with AI enthusiasm from experts and company moguls. In the “Day After AGI” panel (WEF Annual Meeting), Anthropic CEO Dario Amodei projected superhuman systems within one to two years, while Google DeepMind CEO Demis Hassabis estimated a 50% chance by decade’s end; both identified “AI systems building AI systems” as the key signal. Nvidia CEO Jensen Huang described AI as a five‑layer system (from energy and chips to applications) that is triggering “the largest infrastructure build-out in human history”, with real‑time unstructured data processing boosting productivity in radiology and nursing.
Nonetheless, the most recent data paints a more nuanced picture. WEF figures show $1.5 trillion in AI investment, with nearly 60% of companies planning to scale in 2025, yet moving beyond pilots remains the major hurdle. On top of this, Deloitte-linked findings show that over 50% of organisations are currently giving their workforce pilot access to AI tools, but only 25% to 30% end up scaling them to enterprise impact. This article traces that shift across sectors, from agentic workflows to white-collar automation, exploring AI tensions and dynamics as technological deployment advances and institutional and investment frameworks evolve.
At the same time, investors are showing signs of caution, as repeated over‑promising by AI leaders has tempered market enthusiasm. Real economic gains appear highly asymmetrical – concentrated in sectors such as supply chain management, logistics, and manufacturing, while others, particularly in services like consulting or creative work, experience more limited or speculative benefits. The uneven distribution of returns raises deeper questions: why aren’t companies deploying scalable AI tools more broadly? Are limitations technical, managerial, or ethical? Recent government measures restricting AI‑related mass layoffs and public statements from financial leaders, such as the JP Morgan CEO’s remarks on responsible automation, indicate that the answer may lie at the intersection of policy uncertainty, organisational resistance, and evolving boundaries over what kinds of work AI should replace.
Trade & Financial Sectors: From Experimentation to Enterprise Impact
Artificial intelligence reached a clear pivot in 2026: AI enthusiasm saturates forums, yet the path from pilots to impact remains uneven for most organisations. AI companies remain confident, and at times overly eager, in projecting revolutionary gains for businesses. Hewlett Packard Enterprise (HPE) Chief Financial Officer Marie Myers illustrated this optimism, declaring that “AI isn’t on the horizon; it’s here,” and predicting that in 2026 it will evolve from experimentation to a “core enabler” of finance operations, driving real-time insights and decisions. The data, however, tells a more reserved story. According to Deloitte’s 2026 State of AI report, while workforce access to AI tools has increased by 50%, only about a quarter of organisations have successfully scaled more than 40% of their pilots into production, suggesting that widespread transformation remains more aspiration than reality.
On the same stage as finance leaders at Davos 2026, industry experts showcased how they successfully shifted their operations from developing AI tools to deploying fully fledged AI products, showing other companies what they believe is the path to higher profits. “Frontier Firms” such as Fujitsu and Lenovo reported dramatic gains from early, large‑scale AI deployment, with Fujitsu’s supply‑chain agents cutting warehousing costs by USD 15 million in a single year, while slower competitors only began piloting comparable tools in late 2025.

Credit: Evangeline Shaw via Unsplash
With AI expanding into materials, industrials, energy, and consumer staples throughout 2026, the WEF framed AI as the future core economic engine that will determine companies’ success in boosting productivity, improving labor dynamics, and structuring more efficient leadership models. However, AI companies disagree on when exactly these supposed transformations will manifest: pioneers like Anthropic CEO Dario Amodei offer more optimistic predictions (one to two years), in contrast to others, like Google DeepMind CEO Demis Hassabis, who predicts a 50% chance by decade’s end, fuelling uncertainty among investors.
Understanding where the potential gains from deploying AI lie becomes even more challenging once sectoral obstacles are factored in. One area that came under intense focus during the forum was how ready and open industries are to make AI an integral part of their business, moving from relatively low-cost pilots to high-stakes investments. AI offers overburdened sectors like healthcare and manufacturing a path to efficiency through real-time diagnostics and smart factories, yet its use in these sectors poses important legal and ethical challenges, such as data silos and privacy regulations.
As time goes by, the data control gap widens across industries. Manufacturing demands “converging digital and physical systems where safety and reliability are non-negotiable,” forcing custom AI beyond generic models. Davos also stressed workforce reinvention as the linchpin: 39% of skills obsolete by 2030, 63% of employers stalled by talent shortages, and 50% of workers needing reskilling soon. Entry-level jobs face 30% automation while healthcare confronts a 10 million worker shortfall by 2030, linking back to IMF warnings about uneven access and inequality.
These pressures mirror the timeline split between Amodei and Hassabis. Advanced firms pursuing visions like Nvidia’s infrastructure build-out pull ahead on productivity, while others risk polarisation without re-skilling or governance catch-up. On this note, the WEF’s Global Risks Report 2026 ranked AI-driven economic divides high among global risks, tying sectoral hurdles to broader unpreparedness for the “day after Artificial General Intelligence (AGI)”, while decision-makers across the world remain vigilant about how these sectoral plays branch out into serious governance gaps.
Policymaking and User Concerns: The Governance Gap and the Uncertainty Loop
Aside from expert claims and sectoral debate, the discussions at the 2026 World Economic Forum indicated a marked divergence between the pace of technological capability development in artificial intelligence and the speed of institutional and regulatory adaptation. This divergence raises the possibility that increasingly capable AI systems, including those with transformative or systemic impact, may emerge within governance frameworks that are only partially developed or unevenly enforced. In this sense, the AGI timeline debate at Davos functioned not only as a technical question, but also as an indicator of how compressed the window is for regulatory design and implementation.
The most evident concerns have been voiced about how AI is governed, meaning how decisions are made and monitored in AI-led systems and at what stage (if ever) a human supervisor is involved. The European Union has consistently pioneered tech regulation in recent years, with its AI Act entering enforcement for high-risk systems on 2 August 2026, imposing risk categorisation, transparency, and rights-impact assessments. The United States still relies on executive orders and voluntary commitments under a market-oriented model, prioritising “minimal burdens” in an attempt to bolster sectoral growth. Middle-ground approaches have arisen in countries like India, which adopts a collaborative regulatory approach via the IndiaAI Mission, focusing on domestic infrastructure and data governance with corporations.
On a similar note, misinformation and disinformation fuelled by AI tools has taken centre stage in social discourse and political campaigns alike, amplifying concerns that dominated Davos 2026 policy sessions. Generative models now produce hyper-personalised deepfakes, synthetic audio clips of candidates saying things they never uttered, and targeted narratives that spread faster than human fact-checkers can rebut them, often tailored to micro-audiences via social media algorithms.

Illustration by Hartono Creative Studio on Unsplash
This escalates longstanding tensions between information integrity and free expression, urging regulators to face a dilemma: overly broad rules risk chilling legitimate speech, while narrow ones fail against AI’s speed and scale. The 2024 U.S. elections already saw AI-generated robocalls mimicking President Biden’s voice to suppress turnout, prompting emergency FCC fines; similar tactics hit India’s 2024 polls with fake videos of candidates.
Nevertheless, strong regulation has been penned to prevent and deter these issues. In the United States, more than 40 states, including California, Texas, and New York, now prohibit the distribution of “materially deceptive” AI‑generated media of political candidates in the weeks before an election, typically within a 30–90 day window. In the European Union, the EU AI Act, which becomes fully applicable in August 2026, requires that any deepfake content be clearly and visibly labeled, and failing to disclose AI‑generated content can trigger penalties of up to €35 million or 7% of a company’s global annual revenue.
Trust in core institutions has been pressuring policymakers toward hybrid approaches (digital watermarks, real-time detection APIs, electoral blackouts on synthetic media), but none fully resolves the consent paradox being flagged: users interacting with large language models like Grok or ChatGPT unwittingly fuel the training data for tomorrow’s fakes. This closes the consumer uncertainty loop, where daily AI use undermines the very discourse needed for informed governance.
Nonetheless, stark regulatory divergence across the globe creates another increasingly palpable customer-borne cost: the inability to take calculated risks. Whether investing in AI-forward enterprises to keep up with the markets or using the technology itself across features and servers, AI now seems ingrained in every interaction of our daily lives (from corporate AI assistants to AI-powered washing machines). Be it consciously or inadvertently, users are entering into one-click agreements that establish a “customer-supplier” relationship between them and software providers.
With AI-human interactions embedded in a global data-driven economy, where transactions in people’s personal data and behavioural patterns are auctioned across continents, legal certainty seems like a Sisyphean feat: while one regulator clamps down on data-selling, another loosens limitations on big tech, leaving customers unable to make fully informed choices about their own consumption and its long-term implications. Who owns the data people share with AI tools, and how is it safeguarded and encrypted? What barriers have been put in place to ensure global companies comply with local regulation? All questions that feed user fatigue and uncertainty.
Places like California and the European Union are taking a stance in favour of reinforced consumer privacy, giving people more control over their data and how it is processed and profited from. But traceability, legitimate use, and responsibility remain murky, shaped by countless daily interactions with the likes of ChatGPT, Copilot, Grok, Claude, Perplexity, or even Siri (which Apple is trying to overhaul into a fully-fledged large language model), each with diverging data processing and storage policies that leave people wondering what exactly they are signing up to.

Image by Conny Schneider on Unsplash
AI’s trajectory at the 2026 World Economic Forum reveals a tripartite tension: positioned as friend through transformative productivity gains in finance, healthcare, and manufacturing; as fad amid hype-driven investment cycles that outpace proven enterprise returns; and as foe through governance inadequacy and distributional inequalities risking economic polarisation.
These governmental efforts highlight persistent challenges: first, regulation of AI-mediated misinformation, particularly in electoral contexts, balancing information integrity against expression protections; second, competition scrutiny of concentrated compute and data power; third, accountability for agentic systems acting across networks; fourth, compliance regimes for unpredictable frontier models via stress-testing and safety evaluations; and fifth, transparency mandates like explainability reports and bias audits, still under jurisdictional debate.
It could be argued, as experts did at Davos 2026, that technological capability has outpaced institutional adaptation across sectors, from pilot purgatory to regulatory divergence, positioning AI tensions as a systemic global dynamic. The “day after AGI” could arrive before governance, workforce, and distributional frameworks mature, leaving legacy structures to manage fast-evolving systems.
Suggested further reading:
‘The Companies Making the Most Money from AI’ by The Verge (2025)
‘The State of AI in the Enterprise’ by Deloitte (2026)
image sources
- zach-m-hao2eQnBG4w-unsplash: Unsplash | CC0 1.0 Universal
- evangeline-shaw-MXJ9oRlevtw-unsplash: Unsplash | CC0 1.0 Universal
- hartono-creative-studio-ahUbbd-b5_s-unsplash: Unsplash | CC0 1.0 Universal
- conny-schneider-3hkKv6WzjcE-unsplash: Unsplash | CC0 1.0 Universal



