Michigan tries to curb political ‘deepfakes,’ artificial intelligence ads – Bridge Michigan

Under legislation approved by the committee Tuesday and now headed to the House floor for further consideration, failure to include disclaimers could lead to criminal penalties for anyone, whether they’re campaign operatives or average citizens trying to support their candidate. 

Penalties would be stiffer for political professionals, who could face up to 93 days in jail for a first offense and longer for subsequent offenses. Average citizens would initially be subject to a $250 fine, which Tsernoglou called a warning to deter additional violations that could lead to jail time.

Any individual who repeatedly produces deepfake videos or other intentionally deceptive media within 90 days of an election could eventually face up to five years in prison. 

Four states have already adopted similar legislation — Texas, Washington, California and Minnesota — and the Federal Election Commission is considering regulations to crack down on “deliberately deceptive artificial intelligence campaign ads.”

While still rare, artificial intelligence has already been used in political content, including a social media video distributed by presidential candidate Ron DeSantis’ campaign in June that falsely depicted former President Donald Trump hugging Dr. Anthony Fauci.

In another instance, a DeSantis super PAC ran a TV ad that used an AI-generated audio version of Trump attacking Republican Gov. Kim Reynolds of Iowa, a state that will hold the nation’s first caucus and play an important role in deciding next year’s Republican presidential nomination. 

“Although deepfakes are not yet prevalent in American politics, they have been used, and I think it’s a near certainty that they will become prevalent absent intervention from legislators in Michigan and around the country,” said Robert Weissman of Public Citizen, a national democratic advocacy group that supports the new regulations. 

The Michigan bills would require political deepfakes or other “materially deceptive media” created, published or distributed in the state within 90 days of an election — including on social media — to include a disclaimer that they contain images, audio or video that “has been manipulated by technical means and depicts speech or conduct that did not occur.” 

Television, radio or print political ads paid for by campaigns or ballot committees would have to include similar disclaimers if they are created in whole or part using artificial intelligence, which would be defined as “a machine-based system that can make predictions, recommendations, or decisions influencing real or virtual environments for a given set of human-defined objectives.” 

While the package is co-sponsored by GOP Rep. Matthew Bierlein of Vassar, the two Republicans on the House Elections Committee “passed” on each bill Tuesday rather than vote on them and questioned the rush. 

The bills were introduced last week and “I just think that there is a lot of things that could be fleshed out still,” said Rep. Rachelle Smit, R-Martin. “AI is here, but we definitely need to take some time to work on the measures and maybe hear a little bit more from the experts.”

Democrats on the committee supported the bills despite some concerns, particularly over potential criminal penalties.

Rep. Jamie Churches, a Wyandotte Democrat and former teacher, said she feared young people “may not understand the gravity of this” and could be so focused on going “viral” that they won’t consider the political or criminal implications. 
