Mankind had better pay attention to the massive and sudden growth of AI (Artificial Intelligence).
Check out this disturbing article from BUSINESS INSIDER on this subject (formatted to fit the blog):
“An AI weapons race may create a world where everyone stays inside out of fear of being chased down by swarms of slaughterbots, warns founding Skype engineer”
Short introduction to this rather long but interesting article:
· Jaan Tallinn helped build Skype and is the founder of the Future of Life Institute.
· He recently warned of the risks of an AI arms race, describing theoretical anonymous “slaughterbots.”
· This year hundreds in the AI space signed an open letter calling for a pause on AI development.
Those words of warning came from Tallinn, a founding engineer of Skype, in a recent video interview with Al Jazeera, in which he said: “We might just be creating a world where it's no longer safe to be outside because you might be chased down by swarms of slaughterbots.”
Tallinn is an Estonian billionaire computer programmer and a founder of Cambridge's Centre for the Study of Existential Risk and the Future of Life Institute, two organizations dedicated to the study and mitigation of existential risks, particularly risks arising from the development of advanced AI technologies.
Tallinn's reference to killer robots draws from the 2017 short film Slaughterbots, which was released by the Future of Life Institute as part of a campaign warning about the dangers of weaponized artificial intelligence.
The film depicts a dystopian future in which the world has been overtaken by militarized killer drones powered by AI.
We could be reaching an “Oppenheimer moment,” in other words, as researchers question their responsibility for developing technology that might have unintended consequences, just as the first A-bomb did.
Tesla's Elon Musk; Apple's co-founder Steve Wozniak; Stability AI CEO Emad Mostaque; researchers at Alphabet's AI lab DeepMind; and notable AI professors signed an open letter issued by the Future of Life Institute calling for a six-month pause on advanced AI development, which said in part: “Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one — not even their creators — can understand, predict, or reliably control.”
Side reference on this same subject here from SCIENTIFIC AMERICAN.
My 2 Cents: I've posted about this AI topic for a while now at these four sites with various kinds of information: here; here; here; and here FYI.
This is a hot topic not apt to go away anytime soon.
Thanks for stopping by.