Thursday, July 20, 2023

AI Exposed: AI's Existential Risks Now Widely Recognized and Very Worrisome

The door to mankind's future is wide open
(AI processors & programmers hold the keys) 

TIME has an excellent AI story here that relates to all my previous posts (all linked below). This is a truly scary issue. The headline:

“An AI Pause is Humanity's Best Bet for Preventing Extinction”

The introduction:

The existential risks posed by artificial intelligence (AI) are now widely recognized.

The U.N. Secretary-General recently echoed these concerns after hundreds of industry and science leaders warned that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

The UK Prime Minister has echoed them as well, and he is investing 100 million pounds (about $129 million US) in AI safety research aimed largely at preventing existential risk.

Other world leaders are likely to follow in recognizing AI’s ultimate threat.

In the scientific field of existential risk, which studies the most likely causes of human extinction, AI ranks at the top of the list.

In his book “The Precipice,” Oxford existential risk researcher Toby Ord sets out to quantify human extinction risks. In his estimate, the likelihood of AI leading to human extinction exceeds that of the other risks he examines – (1) climate change, (2) pandemics, (3) asteroid strikes, (4) super-volcanoes, and (5) nuclear war – all combined.

One would expect that even for severe global problems, the risk that they lead to full human extinction is relatively small, and this is indeed true for most of the above risks. 

AI, however, may cause human extinction if only a few conditions are met. Among them is human-level AI, defined as an AI that can perform a broad range of cognitive tasks at least as well as humans can.

These ideas were outlined in earlier studies, but new AI breakthroughs have underlined their urgency: AI may already be getting close to human level.

Recursive self-improvement is one of the reasons why existential-risk academics think human-level AI is so dangerous. Because human-level AI could perform almost all tasks at our level, and AI research is one of those tasks, advanced AI should be able to improve the state of AI itself.

Constantly improving AI would create a positive feedback loop with no scientifically established limits: an “intelligence explosion.”

The endpoint of such an “intelligence explosion” could be a superintelligence: a God-like AI that outsmarts us the way humans often outsmart insects. We would be no match for it.

My earlier posts on this hot topic are listed below:

My 2 Cents: My earlier posts on this subject, in order from earliest to latest and with pertinent links within each post, are here; here; here; and here, FYI.

A critical topic for sure … enjoy. More later as updates are published. 

Stay tuned, which I hope you will.

Thanks for stopping by.

