Sunday, July 2, 2023

AI Can Fake Elections: Spotting Disinformation Down from 99% to 3% — Bad for 2024

 

Hello AI: Now good riddance to bad rubbish

This is an update of a more worrisome AI-related article and headline from BUSINESS INSIDER:

“The most reliable AI image detectors ‘can be tricked by simply adding grain to an image.’ A worrying find as AI disinformation plagues the internet and threatens political campaigns”

· Adding grain (a simple texture) to an AI-generated image drops the likelihood of detection from 99% to 3.3%, making it much harder to identify as fake.

· The same drop, from 99% to 3.3%, occurs when “pixelated” noise is added to the images.

· These new findings come as users in the U.S. and abroad begin to use AI images to influence election campaigns.

· From falsified campaign ads to stolen artwork, AI-generated images have been responsible for a wave of disinformation.

The New York Times reports that AI detection software — one of the frontline defenses against the spread of AI-generated disinformation — can be easily fooled by simply adding grain (simple texture) to AI-generated images.

The Times' analysis shows that when an editor adds grain to an AI-generated photo, the likelihood of software identifying the image as AI-generated goes down from 99% to just 3.3%.
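To make that concrete, below is a minimal sketch of what “adding grain” can mean in code. It assumes Python with the Pillow and NumPy libraries; the function, file names, and noise strength are hypothetical illustrations, not the actual procedure used in the Times' test:

```python
# Minimal sketch: overlaying random "grain" (Gaussian noise) on an image.
# Assumes Pillow (PIL) and NumPy; file names and strength are hypothetical.
import numpy as np
from PIL import Image

def add_grain(path_in: str, path_out: str, strength: float = 12.0) -> None:
    """Add zero-mean Gaussian noise to every pixel and save the result."""
    img = np.asarray(Image.open(path_in).convert("RGB"), dtype=np.float32)
    noise = np.random.normal(loc=0.0, scale=strength, size=img.shape)
    grainy = np.clip(img + noise, 0, 255).astype(np.uint8)
    Image.fromarray(grainy).save(path_out)

add_grain("ai_generated.png", "ai_generated_grainy.png")
```

The takeaway is how trivial the edit is: a few lines of random noise, and a detector's reported confidence can fall from 99% to 3.3%.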

Watch this NY TIMES video – a real eye-opener.

Even the software Hive — which showed one of the best success rates in the Times' report — could no longer correctly identify an AI-generated photo after editors made it more pixelated.

You've probably already experienced calling or chatting with a company's customer service, and having a robot answer. ChatGPT and related technologies could continue this trend. 

Related to the above is this fine extract from a CNET article on this growing, and somewhat more worrisome, AI industry:

AI could lead to a bad ending for humanity – or not. In March, prominent AI researchers and tech executives, including Apple co-founder Steve Wozniak and Twitter owner Elon Musk, signed an open letter asking for a six-month pause on the development of AI to give the industry time to set safety standards around the design and training of these powerful and potentially harmful systems.

AI pioneer Yoshua Bengio, director of the University of Montreal's Montreal Institute for Learning Algorithms, told The Wall Street Journal in an interview at the time: “We've reached the point where these systems are smart enough that they can be used in ways that are dangerous for society. And we don't yet understand.”

In the past two months, we saw dueling posts about the potential threats and joys of AI. In a stark, one-sentence open letter signed by notables including OpenAI CEO Sam Altman and Geoffrey Hinton, who's known as the godfather of AI, experts warned that AI could pose a risk of extinction on a par with pandemics and nuclear war.

In contrast, venture capitalist and Internet pioneer Marc Andreessen, whose company has backed numerous AI startups, penned a nearly 7,000-word post titled “Why AI Will Save the World.”

Original post continues: A 2022 study from the tech research company Gartner predicted that chatbots will be the main customer service channel for roughly 25% of companies by 2027. Meanwhile, experts warn that detection software should not be the only line of defense for companies trying to combat misinformation and prevent the distribution of these images.

Cynthia Rudin, a computer science and engineering professor at Duke University, told the NY Times: “Every time somebody builds a better generator, people build better discriminators, and then people use the better discriminator to build a better generator.”
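Rudin is describing an adversarial feedback loop, much like the one behind generative adversarial networks (GANs). As a purely conceptual sketch – the skill scores and improvement factor below are invented for illustration, not taken from any real system – the cycle looks like this:

```python
# Toy model of the generator/detector arms race Rudin describes.
# Skill numbers and the 10% improvement factor are invented for illustration.
generator_skill = 1.0
detector_skill = 1.0

for round_num in range(1, 6):
    # The generator is tuned until it fools the current detector...
    generator_skill = detector_skill * 1.10
    # ...then the detector is retrained against the improved generator.
    detector_skill = generator_skill * 1.10
    print(f"Round {round_num}: generator {generator_skill:.2f}, detector {detector_skill:.2f}")
```

Each round leaves both sides stronger, which is why a detection tool that scores 99% today can be beaten tomorrow.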

That NY Times' analysis comes at a time when users are increasingly deploying AI-generated misinformation online to influence political campaigns (Insider report).

For example, Gov. Ron DeSantis' (R-FL) presidential campaign distributed fake images of Donald Trump and Anthony Fauci earlier in June.

My 2 Cents: I posted on these AI issues here (June 3) – FYI.

The info above and these cases of using AI for mis- and disinformation are truly worrisome, even more so now on top of the 2020 mess that Trump’s “Big Lie” generated – a lie that is still very active across MAGA and GOP La-La land.

Using it to totally disrupt 2024 is not only possible but very damn likely, based on GOP moves to make sure they never lose another election.

This AI movement to disrupt and control the outcome, along with states’ voting rule changes, is truly a huge threat to our voting rights and, more so, to our entire democratic way of life. That, as I’ve said lately, is NOT hyperbole – it’s the truth, and the facts show the threat is growing.

Recall that Steve Bannon once said that he and Trump were going to “deconstruct America – our entire system” (CNN report).

Or when DeSantis said in a recent speech: “We need to fundamentally re-constitutionalize the government.”

That is scary talk, folks, and we had better wake up to it and to the “new” MAGA-GOP under Trump, who see this as an advantage to them and their goals – not the nation's.

More on this hot topic later I am sure. 

Meanwhile, CONGRESS must move and act fast to head off the AI threats described above.

All that is really bad news. 

With the data and opinions from AI experts above, I am led to think, and I wonder: who is behind generating and entering the data, code, etc., that make AI machines “think” and work?

Surely, they can’t or don’t think on their own, so who enters the data for them to do so? 

Thanks for stopping by.
