DOJ says it will seek tougher penalties for election interference, threats

U.S. Attorney General Merrick Garland and Deputy Attorney General Lisa Monaco promised Monday they will seek tougher sentences for those who interfere with elections. File photo by Bonnie Cash/UPI

May 13 (UPI) — The federal government will seek tougher sentences for those who threaten or intimidate election workers, or who use artificial intelligence to manipulate or influence voters, Deputy Attorney General Lisa Monaco said Monday.

In a speech, Monaco noted “a particularly disturbing trend” in the “way perpetrators use new technologies to mask their identities and communicate their threats.”

In a separate speech, Attorney General Merrick Garland outlined several recent cases of election workers receiving threats and reiterated the department’s commitment to prosecuting those cases.

“Each of these cases should serve as a warning,” he said ahead of Monday’s meeting of DOJ’s Election Threats Task Force.

“If you threaten to harm or kill an election worker, volunteer, or official, the Justice Department will find you. And we will hold you accountable,” he said.

But election offices across the United States “continue to deal with threats and harassment for doing their jobs, and in many places, this behavior has been nearly nonstop since mid-2020,” Amy Cohen, the executive director of the National Association of State Election Directors, told CNN.

The Justice Department has been working to combat AI-generated efforts to influence voters ahead of this year’s November presidential election.

In February, the DOJ appointed Jonathan Mayer as its first chief artificial intelligence officer amid rising concerns about the ethics of AI, a move intended to help the Justice Department “keep pace with rapidly evolving scientific and technological developments.”

“Because as criminal tools get more sophisticated, so do our investigations,” Monaco said after Garland spoke.

“Over the past several years, our democratic process and the public servants who protect it have been under attack like never before, as threats evolve and spread,” she added.

Also in February, 20 of the world’s leading technology companies, including Adobe, Amazon, Google, IBM, LinkedIn, McAfee, Meta, Microsoft, OpenAI, Snap, TikTok and X, announced a joint effort to fight “deep fake” artificial intelligence misinformation during the 2024 election year.

In April, Microsoft warned in a report that China and North Korea pose artificial intelligence threats aimed at influencing U.S., South Korean and Indian elections this year with AI-generated false content.

In February, the New Hampshire Attorney General’s Office identified an operation in Texas as the source of illegal artificial intelligence robocalls that imitated the voice of President Joe Biden to dissuade Democrats from voting in the state’s Jan. 23 presidential primary election.

Meta, Facebook’s parent company, said in February it had been working with numerous countries in Europe to root out disinformation ahead of this year’s EU parliamentary elections, particularly given the rise of artificial intelligence.

“Election workers are on the front lines of this threat-accelerated landscape,” Monaco said as she concluded her remarks, adding that the Election Threats Task Force “will continue to pursue and hold accountable those who threaten these public servants, their families and the functioning of our democratic process.”
