
How is AI being shaped for a digital cold war?
By Philip L | Published: 2025-01-12 16:00:00 | Source: The Future – Big Think
We are heading toward a world where artificial intelligence is built to operate under national flags – as pawns (or perhaps chess engines) in support of competing national goals.
Should we be angry about this trend, or be grim realists? Resigned or rebellious? Either way, this forced recruitment of AI should be more widely known, whether or not it can be opposed.
However, few will want to discuss governance or politics unless something genuinely urgent is at stake (which, in the case of machine intelligence, there is), so let’s first talk about timelines for an AI worthy of the word “intelligence”.
Consider those who lead today’s best AI labs. A model “smarter than a Nobel Prize winner in most relevant fields — biology, programming, mathematics, engineering…” is likely due by “2026 or 2027,” according to Anthropic CEO Dario Amodei. That is 12 to 36 months from the time of writing.
A broader 2024 survey of more than 2,000 AI researchers put the chance of human-level machine intelligence arriving by 2027 at a mere 10% (!), though even that was a significant update toward these earlier dates compared with previous iterations of the same survey.
Note that these more general AIs (or Nobel Prize-worthy architectures) are distinct from so-called “superintelligence”—an action-taking agent more capable and intelligent than entire organizations, or, in some definitions, the sum of humanity.
When will such superintelligence emerge? According to OpenAI’s Sam Altman, anywhere from four to 15 years from now (2028 to 2039). Nobel Prize winner Demis Hassabis, CEO of Google DeepMind, whose stated goal is to “solve intelligence first, and then use it to solve everything else,” believes that mission will be completed “within a decade.”
This could all be an elaborate sales pitch, or groupthink. Or the dates may be off by a few years. But the prospect of automating sophisticated human intelligence, of putting genius-level IQ on a flash drive (or in a rocket), isn’t something that can be completely ruled out anymore.
In another media landscape, news that a model like OpenAI’s o1 “exceeds human PhD-level accuracy on a benchmark of physics, biology, and chemistry problems” would dominate the headlines.
Well, LLMs and AI more broadly are becoming impressive and improving rapidly (e.g., by verifying their own answers and reflecting on them). But why think these developments are being nationalized?
After all, AlphaFold 3, which can “[predict] the structure and interactions of all of life’s molecules,” was just made freely available by Demis Hassabis’s DeepMind. Almost anyone can access versions of ChatGPT for free, and Meta’s AI is open source. So shouldn’t we expect AI to be less bound by borders?
Apparently not. Anthropic’s Amodei, whose company is behind one of the most advanced language models, Claude 3.5 Sonnet, argued in October 2024 that “if we want AI to favor democracy and individual rights, we are going to have to fight for that outcome” and that “it seems very important that democracies have the upper hand on the world stage when powerful AI is created. AI-powered authoritarianism seems too terrible to contemplate, so democracies need to be able to set the terms under which powerful AI is brought into the world, both to avoid being overpowered by authoritarians and to prevent human rights abuses within authoritarian countries.”
It’s not entirely clear what this entails, but it is as if AI has been conscripted into the service of winning Cold War II.
Not excited about this prospect? We have no choice, according to Sam Altman. In a July op-ed, he wrote that “the urgent question of our time” is “who will control the future of AI,” and argued that the answer should be America: “The United States currently has a lead in AI development, but continued leadership is far from guaranteed. Authoritarian governments the world over are willing to spend enormous amounts of money to catch up and ultimately overtake us.”
In fact, “if we want a more democratic world, history tells us that our only choice is to develop an AI strategy that will help create that world, and that the nations and technologists who have the lead have a responsibility to make that choice — now.”
Altman proposed some banal policies: investment in cybersecurity and infrastructure, as well as clear rules for international investment and exports. But the basic argument is that “democratic AI” must be protected, supported, and regulated in order to stay ahead of “authoritarian AI.”
This view of the conflict has practical implications. The New York Times recently reported that Meta now allows its models to be used by the US military, “in a reversal of its policy prohibiting the use of its technology in such efforts.” OpenAI quietly revised its policy in a similar direction in January. Microsoft, Amazon, and even Anthropic now work with US defense and intelligence agencies.
Demis Hassabis, a Briton, has been more circumspect about casting advanced AI in national security terms. The White House has not.
A recent memorandum from the Biden administration boldly states in its title the goal of “harnessing artificial intelligence to fulfill national security objectives.” (And who expects the incoming Trump administration to be any less interested in putting America first?)
According to the White House, the race with China, among others, is on: “Although the United States has benefited from a head start in AI, competitors are working hard to catch up… and may soon devote resources to research and development that US AI developers cannot match without appropriately supportive government policies and actions. It is therefore the policy of the US government to foster innovation… by strengthening key drivers of AI progress, such as technical talent and computational power.”
It goes without saying that more primitive “artificial intelligence” has already been used on battlefields for years, from the semi-autonomous drone swarms of the Russia–Ukraine war to the Israeli army’s reported use of an “AI targeting system with little human oversight and a permissive policy for casualties,” according to +972, a magazine founded in Tel Aviv.
Technology has always been co-opted for war, but truly intelligent, let alone superintelligent, AI is a whole different beast, and one we would be wise not to unleash on the battlefield.
Is it too naïve to take a moment and imagine a world in which cooperating nations pool their best talent and proportionate resources to gradually develop and understand powerful AI? A world where artificial intelligence isn’t used to advance one worldview or win a war? Don’t we owe it to future generations, and to ourselves, to at least attempt a CERN for artificial intelligence?
Even if you disagree, the ongoing “nationalization” of AI is a rapidly evolving story that deserves more attention.
This article was originally published by our sister site, Freethink.





