AI threat landscape could include automated propaganda bots, sophisticated email attacks: Security experts

Security and intelligence experts say that artificial intelligence could create new cyberthreats, but stressed the importance of international collaboration.

Artificial intelligence (AI) will become a "fundamental game changer" throughout the world, enabling scalable disinformation campaigns and online scams, but global cyber-cooperation and traditional security hygiene should provide significant protection for companies and individuals, according to experts.

Center for a New American Security CEO Richard Fontaine told Fox News Digital that until now, humans have primarily created disinformation. While it may have been propagated through digital means, it was not made through digital means. But these new AI applications could now allow a government to propagate and originate disinformation at scale. On the defensive side of the equation, AI could provide an additive benefit to existing scanning systems, helping to detect anomalies, find vulnerabilities and recommend ways to fix them.

"There's a lot that's happening very quickly with that aspect of the threat landscape," Fontaine said.

WHAT ARE THE DANGERS OF AI? FIND OUT WHY PEOPLE ARE AFRAID OF ARTIFICIAL INTELLIGENCE

From a dictatorship perspective, Fontaine highlighted the kind of possibilities that generative AI enables, such as scalable automated bots that put forth propaganda, disinformation and misinformation. This information, generated by AI, could be easily translated into a multitude of languages to help better spread that message, with the potential to drown out dissident voices and flood the social media ecosystem.

Speaking to Russian government attempts to gain access to a Florida-based voting software supplier and the private accounts of DNC election officials leading up to the 2016 presidential election, Fontaine highlighted how phishing email scams were used as an attack vector. Often, these phishing scams contain misspellings, grammatical errors and questionable punctuation. Many of these errors arise from the language barrier between the foreign attacker and the American target.

However, ChatGPT and other large language models (LLMs) are highly adept at English. As a result, a bad actor could prompt such a system to produce a more sophisticated scam message that is harder to spot.

"That kind of thing would allow potentially the attacker, whether it's a nation-state or anyone else, more ways in than they would have otherwise," Fontaine added.

Deepfakes are also a prime concern regarding fabricated audio and video artifacts. Fontaine said that there would be various forms of deepfake detection that would emerge. Still, it will always be a cat-and-mouse game where technology constantly tries to circumvent emerging detection methods. To combat these new falsified pieces of data, Fontaine acknowledged that an educational process that teaches people how to deal with online information would be necessary.

Jeff Greene, the senior director for cybersecurity programs at Aspen Digital, agreed that with AI, phishing and other attack emails will begin to arrive with much better grammar, tailoring to an individual's interests and even more believable imitations of messaging from the U.S. government.

CHINA XI JINPING TELLS NATIONAL SECURITY TEAM TO PREPARE FOR ‘WORST-CASE SCENARIO’ AS LEADERS WARN OF AI RISKS

He also concurred with the belief of a continuing technological cat-and-mouse game that includes tools that can scan those emails to detect if an LLM has written them.

"I believe that AI is going to be a fundamental game changer in how things operate, but I think it's going to move incrementally. The cat-and-mouse game won't change with AI is my current belief, but I'm willing to be proven wrong," Greene said. "I do think, though, that every time the criminals make an advance, the defenders will respond, and vice versa."

Zohar Palti, an international fellow at the Washington Institute and the former leader of the Mossad Intelligence Directorate, told Fox News Digital that countries have to be a bit more modest and understand they are limited at this time when it comes to AI detection.

"I think it's a lot of education, how we teach our children between right and wrong, true and false moral standards and things like that and to give credit to the young generation that eventually, they're intelligent enough and bright enough to understand whenever it's fake news or not. It's another pillar that the education system, whether it's over here or another democratic country like Israel or Europe, will have to deal with that," Palti said. "But no doubt, it's a huge challenge. And I'm not sure that we have a good answer for it right now."

Palti also noted that current advancements in the cybermarket, mainly AI, are "positive."

"It can create lots of good things from sharing data regarding disease in a tiny village in South America that in a minute it can be, you know, all over the world and to solve and save lives," he said.

CRYPTO CRIMINALS BEWARE: AI IS AFTER YOU

While Palti acknowledged that not everybody has the same vision for AI and may disagree on the moral conundrums of how technology may interfere with our lives, he believes that the process cannot be stopped and people will find the correct regulations or laws to deal with such problems.

Regarding misinformation, the integrity of technology, supply chains or cybersecurity itself, Fontaine added that greater cooperation, certainly at the government level, is needed as AI rapidly progresses. Fontaine and a colleague have proposed the idea of a G7-style summit that would bring together "techno-democracies," a term describing advanced technology economies that are also liberal democracies. That cooperation, Fontaine said, is essential given that other actors, such as China, Russia, North Korea and Iran, are furthering their own goals.

"If we do nothing or if we do things in such a fragmented way without cooperation, then we're going to be subject to what they would like to see obtained in the world rather than what we would like to see," Fontaine added.

Greene said that a reasonably broad deployment of zero-trust architecture across nations, wherein organizations assume bad actors are present and compromise has already occurred, would help them adapt and maneuver within our current "bifurcated digital world" without compromising intellectual property and technology.

YES, AI IS A CYBERSECURITY 'NUCLEAR' THREAT. THAT'S WHY COMPANIES HAVE TO DARE TO DO THIS

"I think the kinds of shifts you're likely to see with changing alliances are not going to be as dramatic as, you know, the Cold War era, when you had an Iranian regime pre-1980 using all American weapons and, oh my gosh, what do we do after the revolution when we have an enemy flying F-14s?" Greene said. "I think digital is different enough and things evolve fast enough that we're not likely to have that type of problem."

Greene also echoed the words of Jigsaw CEO Yasmin Green, who made the point that alliances are often based on common ideals, alluding to a distinction in cooperation motivations among global powers.

"When you look at China, Russia, Iran, there aren't really any common ideals. It is common deals. That is the common theme you have across there, which I would suggest is probably less stable long term than having a consistent approach to democracy, human rights, the belief in the dignity of the individual, which you see across the Western nations," Greene said.

Homing in on the impact of cyber warfare, Greene noted there is not a clean line between the activities of criminal hackers and nation-state attackers because it is ultimately "a game of psychology" that has existed for several decades.

"We will be using different words to have the same type of cybersecurity conversation, is my gut right now," he added.
