Russian President Vladimir Putin believes that in the future, the
country that leads in artificial intelligence (AI) could
dominate the world.
According to a report by Russian state-funded organization RT (which
we first saw via The Verge), Putin told students that
“artificial intelligence is the future, not only for Russia, but
for all of humankind.”
“It comes with colossal opportunities, but also threats that are
difficult to predict,” he said.
“Whoever becomes the leader in this sphere will become the ruler
of the world.”
That said, Putin added that Russia would share its expertise in
artificial intelligence with other nations. “It would be strongly
undesirable if someone wins a monopolist position,” he said.
Currently, AI is being used by companies like Google, Facebook,
Microsoft, and Apple to power some of their cutting-edge software
and services. But technological advancements on the military
front mean that AI-powered weapons could be the next step in the
evolution of warfare.
The Russian President believes that,
as CNBC first reported, drones will be at the forefront of
the battlefields of the future. “When one party’s drones are
destroyed by drones of another,” he said, “it will have no other
choice but to surrender.”
China, Russia, soon all countries w strong computer science. Competition for AI superiority at national level most likely cause of WW3 imo.
— Elon Musk (@elonmusk) September 4, 2017
Putin’s claims also prompted Tesla CEO Elon Musk to warn
about the risks of “competition for AI superiority,” which he
believes could “most likely cause” a third world war.
More than countries’ leaders, Musk worries that AIs could
trigger a war by themselves. That, in his
opinion, could occur “if [an AI] decides that a preemptive
strike is [the] most probable path to victory.”
Musk is a believer in AI, but he has warned about its potential
dangers in the past, even picturing doomsday scenarios with
robots “going down the street killing people.” More than
anything else, he advocates for regulation. “AI is a rare case
where I think we need to be proactive in regulation instead of
reactive,” he said in
a recent interview.
“I think by the time we are reactive in AI regulation, it’s too
late.”