AI-Powered Cybercrime Will Explode in 2024: CrowdStrike Executive

The new year brings new cybersecurity threats powered by artificial intelligence, CrowdStrike Chief Security Officer Shawn Henry told CBS Mornings on Tuesday.

“I think this is a major concern for everybody,” Henry said.

“AI has really put this tremendously powerful tool in the hands of the average person and it has made them incredibly more capable,” he explained. “So the adversaries are using AI, this new innovation, to overcome different cybersecurity capabilities to gain access into corporate networks.”

Henry highlighted AI’s use in penetrating corporate networks, as well as spreading misinformation online using increasingly sophisticated video, audio, and text deepfakes.

How will AI impact the 2024 threat landscape?

CrowdStrike Chief Security Officer Shawn Henry offers his take and sheds some light on potential major cyber threats to the 2024 global elections, on @CBSMornings.

Watch now: https://t.co/2bFBEWtddi pic.twitter.com/KGyn7bxrwo

— CrowdStrike (@CrowdStrike) January 2, 2024

Henry emphasized the need to look at the source of information, and to never take something published online at face value.

“You’ve got to verify where it came from,” Henry said. “Who’s telling the story, what is their motivation, and can you verify it through multiple sources?”

“It’s incredibly difficult because people—when they’re using video—they’ve got 15 or 20 seconds, they don’t have the time or oftentimes don’t make the effort to go source that data, and that’s trouble.”

Noting that 2024 is an election year for several countries—including the U.S., Mexico, South Africa, Taiwan, and India—Henry said democracy itself is on the ballot, with cybercriminals looking to take advantage of the political chaos by leveraging AI.

“We’ve seen foreign adversaries target the U.S. election for many years, it was not just 2016. [China] targeted us back in 2008,” Henry said. “We’ve seen Russia, China, and Iran engaged in this type of misinformation and disinformation over the years; they’re absolutely going to use it again here in 2024.”

“People have to understand where information is coming from,” Henry said. “Because there are people out there who have nefarious intent and create some huge problems.”

A particular concern in the upcoming 2024 U.S. election is the security of voting machines. When asked whether AI could be used to hack voting machines, Henry was optimistic that the decentralized nature of the U.S. voting system would keep that from happening.

“I think that our system in the United States is very decentralized,” Henry said. “There are individual pockets that might be targeted, like voter registration rolls, etc., [but] I don’t think from a voter tabulation problem at a wide scale to impact an election—I don’t think that’s a major issue.”

Henry did, however, highlight AI’s ability to put sophisticated attack tools in the hands of less technically skilled cybercriminals.

“AI provides a very capable tool in the hands of people who might not have high technical skills,” Henry said. “They can write code, they can create malicious software, phishing emails, etc.”

In October, the RAND Corporation released a report suggesting that generative AI could be jailbroken to aid terrorists in planning biological attacks.

“Generally, if a malicious actor is explicit [in their intent], you will get a response that’s of the flavor ‘I’m sorry, I can’t help you with that,’” co-author and RAND Corporation senior engineer Christopher Mouton told Decrypt in an interview. “So you generally have to use one of these jailbreaking techniques or prompt engineering to get one level below those guardrails.”

In a separate report, cybersecurity firm SlashNext found that email phishing attacks were up 1,265% since the beginning of 2023.

Global policymakers spent much of 2023 looking for ways to regulate and clamp down on the misuse of generative AI. Among them, the Secretary-General of the United Nations sounded the alarm about the use of AI-generated deepfakes in conflict zones.

In August, the U.S. Federal Election Commission advanced a petition to prohibit the use of artificial intelligence in campaign ads ahead of the 2024 election season.

Technology giants Microsoft and Meta announced new policies aimed at curbing AI-powered political misinformation.

“The world in 2024 may see multiple authoritarian nation-states seek to interfere in electoral processes,” Microsoft said. “And they may combine traditional techniques with AI and other new technologies to threaten the integrity of electoral systems.”

Even Pope Francis, who has been the subject of viral AI-generated deepfakes, has, on different occasions, addressed artificial intelligence in sermons.

“We need to be aware of the rapid transformations now taking place and to manage them in ways that safeguard fundamental human rights and respect the institutions and laws that promote integral human development,” Pope Francis said. “Artificial intelligence ought to serve our best human potential and our highest aspirations, not compete with them.”
