There is a striking divide between countries that love AI and countries that fear it — with many nuances in between, of course, but the contrast is fascinating.
Recent data from the Stanford AI Index and global AI adoption surveys reveal how differently countries perceive the impact of artificial intelligence (AI), with reactions ranging from extremely optimistic to deeply anxious.
And here’s the most interesting part: these differences are not strictly tied to technical leadership or levels of investment. Instead, they reflect how citizens interpret the societal consequences of AI in relation to government policy, media framing, and historical context.
Let’s look at some examples:
Optimism toward AI is remarkably high in countries like China (83%), Indonesia (80%), and Thailand (77%), where the majority believe AI products and services are more beneficial than harmful.
By contrast, countries such as Canada (40%), the United States (39%), and the Netherlands (36%) show far less enthusiasm.
But these attitudes are not set in stone. Since 2022, optimism has grown significantly in several previously skeptical nations, including Germany (+10%), France (+10%), Canada (+8%), Great Britain (+8%), and the United States (+4%).
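The figures above can be kept straight with a quick back-of-the-envelope sketch. The country names and percentages below are simply the survey numbers quoted in this article; the "implied 2022" values are my own subtraction, not reported figures:

```python
# Share (%) saying AI products and services are more beneficial than harmful,
# as cited above from the Stanford AI Index and global adoption surveys.
current = {
    "China": 83, "Indonesia": 80, "Thailand": 77,
    "Canada": 40, "United States": 39, "Netherlands": 36,
}

# Reported growth in optimism since 2022 (percentage points).
change_since_2022 = {
    "Germany": 10, "France": 10, "Canada": 8,
    "Great Britain": 8, "United States": 4,
}

# Implied 2022 level for countries where both figures are available:
# simple subtraction, e.g. Canada 40 - 8 = 32, United States 39 - 4 = 35.
implied_2022 = {
    country: current[country] - change_since_2022[country]
    for country in current.keys() & change_since_2022.keys()
}
print(implied_2022)
```

Even in the most skeptical countries, the implied 2022 baselines sat in the low-to-mid 30s, which makes the recent upward drift all the more notable.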
People’s attitudes toward AI are shaped by more than just exposure to the technology — they’re also influenced by national governance, historical memory, and how AI is publicly discussed. Let’s take a closer look.
The “Very Optimistic” (e.g., China, Indonesia, Thailand, Saudi Arabia)
Countries with the most optimistic views toward AI share a few traits: strong central governments, top-down technology strategies, and nationalistic narratives around innovation.
For instance, China’s state-led AI strategy — backed by a planned $150 billion investment through 2030 — has been aggressively promoted as a path to economic modernization and global leadership. This top-down approach reduces uncertainty for citizens by framing AI as a guaranteed success story, managed and controlled by the state.
In these nations, AI is frequently presented through government-run media and education campaigns as a utopian solution — one that will enhance efficiency, eliminate inequality, and create wealth. The government’s involvement in both promoting and regulating AI often gives citizens a sense that ethical issues and job displacement will be handled responsibly.
As a result, fears about AI replacing human labor or reinforcing bias appear less pronounced.
Is this optimism purely organic, though? In countries with tight media controls and limited public debate, citizens have fewer opportunities to engage critically with AI’s risks. Moreover, the close ties between government and major AI firms often raise concerns of collusion, with AI advancement serving political as much as technological goals.
The “Cautiously Optimistic” (e.g., India, Singapore, South Korea, Brazil, Mexico)
In these countries, public sentiment is generally positive but more balanced.
Their governments actively promote AI adoption but also acknowledge its risks. Media portrayals tend to be more nuanced, presenting both the potential benefits (economic growth, service efficiency, global competitiveness) and the possible drawbacks (job automation, inequality, surveillance).
For instance, Singapore has integrated AI into 90% of its public services, while also leading discussions on AI ethics and transparency.
South Korea, India, and Brazil have launched major public-private initiatives to foster AI development while engaging civil society in discussions about fairness, bias, and job security.
The result is a cautiously optimistic population that sees AI as a tool for progress but remains attentive to its societal implications.
Relatively open media and participatory political environments help citizens critically assess new technologies. Yet the general willingness to adopt AI also reflects a developmental mindset — where disruptive technologies are seen as opportunities to “leapfrog” into the next stage of modernization.
The “Pessimistic” (e.g., United States, United Kingdom, Canada, France, Japan)
These countries are home to many of the world’s leading AI firms and researchers — yet their publics are among the most skeptical.
Countries like the United States, United Kingdom, Canada, France, and Japan all display deep-rooted concerns about automation, inequality, privacy, and regulatory failure.
In these nations, governments often frame AI not as a national opportunity but as a potential threat. Media coverage further amplifies public anxiety, often highlighting job loss, algorithmic bias, surveillance, and misuse in warfare or disinformation.
The United Kingdom’s establishment of the world’s first AI Safety Institute, Canada’s strong ethical frameworks, and Japan’s Hiroshima AI Process all reflect a precautionary approach. These efforts focus on slowing adoption until robust ethical, legal, and safety frameworks are in place.
In Japan, public skepticism is especially high among younger generations, who see AI as a threat to job prospects and personal agency. Older generations tend to be more accepting — perhaps due to greater trust in institutions or less exposure to today’s labor market.
Why Technical Leaders Are Often AI Skeptics
So, why are nations that lead in AI development often the most pessimistic?
A key reason lies in their historical experiences with past technological disruptions. For example, the U.S. still bears the psychological legacy of Cold War-era nuclear tensions, where advanced technology was both a source of pride and a threat. Japan’s skepticism partly stems from the Fukushima disaster, which shook public trust in government-led tech.
Such experiences contribute to what some sociologists call “technological ambivalence” — the idea that technological progress is viewed as both powerful and dangerous.
This ambivalence fosters stronger calls for regulation, especially in liberal democracies where freedom of expression and civil society activism are robust. Governments are less able to shape sentiment through propaganda. Citizens are exposed to competing narratives and often resist top-down optimism, especially if it serves corporate interests over public good.
Authoritarian Optimism vs. Democratic Skepticism
In more centralized regimes, AI is framed as an extension of national power. These states coordinate with domestic tech firms, present AI in idealized terms, and suppress dissenting voices. Whether the resulting sentiment is sincere or manufactured, it skews toward optimism.
In contrast, democracies often maintain adversarial relationships with emerging technologies. Regulatory agencies, watchdogs, and media scrutinize development. Public sentiment is less shaped by state narratives, and more by real-world cases of AI misuse — from biased algorithms to job automation.
In middle-income democracies like Brazil, India, and Mexico, the picture is mixed. Governments champion AI for development, but citizens remain wary due to inequality and weak regulation.
Toward a Collaborative AI Future: A New Materialist Perspective
So, what could shift public attitudes?
New materialist thinkers argue that AI shouldn’t be seen as a savior or a threat. Instead, they ask how humans and machines can co-evolve — not by replacing one another, but by reshaping intelligence, ethics, and agency together.
Just as cars and computers redefined life — not by displacing humans but by changing how we live — AI is part of an evolving sociotechnical ecosystem.
Governments can do more than regulate AI in isolation. They can design policies that support human-AI collaboration: lifelong learning, inclusive design, and governance frameworks rooted in both innovation and public trust.
That’s how you build a healthy, forward-looking relationship between society and its machines.