Monday, May 29, 2023

AI Hot Topic: Some Say AI May Lead to "Terminator 7" and Human Extinction

AI Simple to Define With Complex Repercussions

Stephen Hawking stated reasons to fear AI

MAJOR UPDATE: I stated below in my original post on this subject that a lot of people (AI experts and others in high-tech fields) are worried about uncontrolled AI. This update from ROLL CALL points out more in their fine article with this headline:

“Experts say AI poses ‘extinction’ level of risk, as Congress seeks legislative response”

Mitigating the risk of extinction from AI should be a global priority.

Lawmakers and regulators gearing up to address risks from artificial intelligence technology got another boost this week from experts warning of potential “extinction” and calling on governments to step up regulations.

Senate Majority Leader Charles E. Schumer (D-NY) said he and his staff have met with more than 100 CEOs, scientists and other experts to figure out how to draw up legislation. 

Schumer appears to have heard the message, saying on the Senate floor on May 18: “We can’t move so fast that we do flawed legislation, but there’s no time for waste or delay or sitting back. We’ve got to move fast.”

The National Telecommunications and Information Administration (NTIA) is gathering comments from industry groups and tech experts on how to design audits that can examine AI systems and ensure they’re safe for public use.

Former FTC officials are urging the agency to use its authority over antitrust and consumer protection to regulate the sector. More than 350 researchers, executives, and engineers working on AI systems added to the urgency in a statement released by the Center for AI Safety. 

Among those are: Geoffrey Hinton, a top Google AI scientist until he recently resigned to warn about risks of the technology; Sam Altman, CEO of OpenAI, the company that has developed ChatGPT; and Dario Amodei, the CEO of Anthropic, a company that focuses on AI safety. 

The group said collectively: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

The experts listed eight broad categories of risk posed by AI systems that digest vast quantities of information. 

Those systems can create text, images, and video that are difficult to distinguish from human-created content. 

The Center for AI Safety says AI systems can help criminals and malicious actors create chemical weapons and spread misinformation, perpetuate inequalities by helping small groups of people gain a lot of power, deceive human overseers, and seek power for themselves. 

The full ROLL CALL article continues here.

My 2 Cents: I never want to be a worrywart and such, but this AI business poses a threat to us all if not carefully regulated and, most of all, controlled for good reasons, not devious or nefarious ones.

Stay tuned for updates and thanks for stopping by.

MY ORIGINAL POST FOLLOWS:

AI (Artificial Intelligence) and the danger of “deepfakes” are covered in this excellent article from ROLL CALL. It is a good source of important voter information, with this headline:

“AI could sway the 2024 elections, campaign pros say — but not like you think”

“Deepfake AI” is one type of artificial intelligence used to create convincing images, audio, and video for hoax posts, which can be both good and bad depending on who is posting them and, even more so, who is viewing them. 

AI can easily transform existing source content into something bogus, or it can swap one person’s face and bio for another in a way that is damaging or false – using your face on a different body, for example, in either a negative or positive story or campaign ad that could be (and usually is) damaging.

Artificial Intelligence (AI) could transform politics as profoundly as television or radio did, providing the early masters of the nascent technology a sizable — and perhaps decisive — advantage in any and all upcoming elections. 

But even as campaign professionals embrace AI, they worry that the newfound ability to quickly and cheaply generate convincingly deceptive audio and video has troubling implications for a political system already beset by misinformation.

Two key questions:

(1) How can voters hold politicians accountable for their failings if they believe those failings are fake?

(2) How will campaign professionals respond when their candidates are smeared with fabricated recordings?

Despite the widespread anxiety over “Deepfakes” and their effect on democracy, political consultants say they are more excited about generative AI’s potential to tackle boring grunt work and expand their ability to deploy big-race tactics in down-ballot contests.

Tom Newhouse, vice president of digital marketing at “Converge Media,” a Republican advertising and consulting firm, said: “AI’s real impact on campaigning will be behind the scenes. It’s going to be improving fundraising capabilities by better targeting, whether that is location targeting, income, hobbies, or even habits, providing campaigns with up-to-date voter data, more personalized advertising, and/or messaging.”  

Newhouse then concluded: “Campaigns that can innovate and lean into these tactics are going to have a strategic advantage.”

Larry Huynh, a partner at “Trilogy Interactive,” a Democratic digital marketing firm, said: “There are many small campaigns that I think can potentially leverage the tools to [not just] save time, but to create content that may not have been possible otherwise.” Campaign professionals across the country are now racing to see how they can use these new machine-learning tools to supercharge their work in advance of their first big test: the 2024 presidential elections. 

Huynh, who is also the incoming president of the American Association of Political Consultants, added: “Anyone who wants to do their job better here — and in any industry, let alone politics — is trying to see how the tool can be beneficial to their work.” 

The election pros that CQ Roll Call spoke to all expect AI to give some tech-savvy candidates a big leg up on their opponents.  

Regulation to the rescue? Given the prevalence of dark money groups, the First Amendment’s broad protections of political speech, and the fact that defamation lawsuits take years to be resolved, schemers behind a libelous deepfake could potentially avoid any liability by hiding behind an organization’s corporate veil. 

Think about our adversaries who have been tampering with our elections for the past few cycles; this is another tool they can access, knowing that we’re vulnerable in an open society to misinformation and disinformation. AI’s ability to generate election-swaying deceptions is far more obvious than social media’s influence on votes.

Deepfakes’ ability to fool voters is readily apparent to lawmakers, as Sen. Richard Blumenthal (D-CT) demonstrated in his remarks at a recent Senate Judiciary subcommittee hearing by playing AI-generated audio of his own voice reading an AI-generated script.

At that hearing, Republicans sounded receptive to OpenAI CEO Sam Altman’s calls for a new federal agency to regulate AI and support for disclosure rules. But even if Congress does act quickly, there are questions of how much it’ll help.

Here is a follow-up on that last point, in part, from this article from TPM (Talking Points Memo) with their headline – key points follow below:

“When it Comes to AI in Elections, We’re Unprepared for What’s Coming”

AI is advancing at a ferocious speed. Experts warn that lawmakers are not treating this issue with the seriousness they should, given the role the unprecedented technology could play as soon as the 2024 election. 

As with all aspects of society that may be impacted by AI, the precise role it may play in elections is hard to game out. Rep. Yvette Clarke’s legislation focuses in particular on her concerns about AI-generated content supercharging the spread of misinformation around the upcoming elections. The need to create transparency for the American people about what is real and what is not is more urgent than ever, Clarke told TPM, in part because the technology is so cheap and easy for anyone to use.

Experts respond to TPM’s article and this issue:

Darrell West, senior fellow at the Center for Technology Innovation at the Brookings Institution, said: “AI puts very powerful creation and dissemination tools in the hands of ordinary people. And in a high stakes and highly polarized election, people are going to have incentives to do whatever it takes to win. That can and may include lying about the opposition, suppressing minority voter turnout, and using very extreme rhetoric in order to sway the electorate. This is not really a partisan issue. People on every side of the political spectrum should worry that this stuff might be used against them.”

Some Republicans have expressed concern about the technology, but have not yet signed on to legislation.

Rep. Yvette Clarke (D-NY) is one of the handful of Democrats in the House who have been trying to get ahead of the possible threats — some that may seriously disrupt elections and threaten democracy — posed by this ever-more-rapidly evolving AI technology. 

Earlier this month, Clarke introduced the REAL Political Ads Act, legislation that would expand the current disclosure requirements, mandating that AI-generated content be identified in political ads. Clarke said she is happy to see that the interest to implement guardrails is there, but she is worried that it might be too little, too late.

Clarke also said: “Experts have been warning members of Congress about this and we’ve seen the rapid adoption of the use of the technology. I don’t think we’ve acted quickly enough. We want to get stakeholders on board. We want to make sure that the industry is to a certain extent cooperative, if not neutral, in all of this so we’re not fighting an uphill battle with respect to erecting these guardrails and protective measures. But when you keep seeing signs of the usage of deceptive video and how rapidly it can be circulated online, that should make everyone uneasy and willing to do the work to erect guardrails.”

My 2 Cents: AI (Artificial Intelligence) to me is simple: artificial means NOT REAL, like artificial sugar, sunlight, chocolate, etc. 

In a word, I don’t like it unless it is carefully controlled for the good it can do but NOT for the bad or harm it can inflict. 

Besides, people have to develop it, so we have to wonder what their goals and motivations truly are. 

Do the research – I believe it is not good or beneficial for society as a whole, especially if it runs amok as explained in the examples above. 

AI comes across as “free speech” just like most social media posts today – and look how bad a lot of that is without any control standards. 

Gullible people will simply lap it up as true when it is not, and that truly does put 2024 in grave jeopardy – worse than 2020, some experts say. AI could make social media posts and such far worse because it seems real and thus believable and, in the end, more harmful. 

We need action now to prevent another 2020 election mess, or a repeat of what followed with the January 6 riot – content that could be AI-generated, peddling another “Big (or Bigger) Lie” after 2024 if that outcome does not go a certain way, as in 2020, or if AI “proves” the outcome one candidate or party wants over the actual winner and results. We simply do not need that again. 

Thanks for stopping by.
