The swift development of artificial intelligence tools over the past year or so has opened a whole world of digital possibilities for us all – from massive strides across the entire spectrum of technological R&D to simply creating entertaining images for our families and friends.
Unfortunately, those same tools are now also available to threat actors – those who would use them to attack individuals and institutions for criminal, financial or political gain.
Exploring the darkest reaches of the internet, Israeli security company Cybersixgill exposes and combats AI dangers posed by such threat actors. It recently published a comprehensive paper titled “Cybersecurity in 2024: Predicting the next generation of threats and strategies,” which details the biggest dangers posed by the new range of artificial intelligence tools and how to counter them.
The document outlines five key future safety issues for AI: data protection; attack threats; regulation; proactive cybersecurity; and geopolitical concerns.
“As cybercriminals aggressively employ AI, they gain more efficiency and accuracy than ever, making new types of cyber attacks a dynamic challenge that calls for proactive and adaptive cybersecurity strategies,” Cybersixgill CEO Sharon Wagner warns in his introduction to the paper.
Wagner tells NoCamels that the AI threat actors – those who use the technology for any kind of malicious activity – run the gamut from small-time criminals to states such as North Korea and Iran.
“They all have new tools that they can use in order to attack and they are using them,” he cautions.
“They’re taking advantage of them very quickly; it’s up to us as the security community to develop these tools as fast as possible so we can protect ourselves against them.”
And Wagner says that while this may all sound “a bit alarming,” the introduction of any new technology requires a thorough understanding of the threats it brings in order to counter them successfully.
“All of a sudden OpenAI [which created ChatGPT] and the other companies that have been developing this technology for a long time commercialize their product and now we have new technology. We can use the new technology in order to attack, but it’s always been like that,” he says.
He draws a comparison between cloud technology, whose arrival about a decade ago led to the development of new measures to protect it, and the recent arrival of generative AI.
“People developed the tools to protect the cloud,” he says. “So I don’t think it should be alarming.”
Furthermore, Wagner explains, the tools used by security companies are becoming “more and more sophisticated” at identifying threats as technology advances.
He defines the battle between attackers and defenders as a game of cat and mouse, where each side is trying to get the upper hand.
“There’s always someone who’s leading in terms of technology and the other one follows,” Wagner explains.
But while the number of threat actors has increased, so have the tools to stop them. And it is this preventative action that Cybersixgill specializes in – trawling the dark web to root out potential AI threats.
In fact, company lore has it that the name derives from the sixgill sharks that hide in the deepest parts of the ocean.
Wagner explains that everyone who uses even the most basic internet-connected technology – be it a cell phone or just a chip in their pet – has what is termed an “attack surface” that is vulnerable to hackers.
“Through these interfaces, a threat actor can find the breach and sneak into your network,” he says. The bigger our attack surface, and the more internet-connected devices we have, the more vulnerable we are.
Cybersixgill, Wagner says, detects and eliminates potential AI threats before they happen or while they are still emerging, and also creates prevention measures that stop developed threats from penetrating a company’s attack surface.
And it is not just companies like Tel Aviv-based Cybersixgill that are working to thwart AI threat actors, Wagner says. Countries also have their own security agencies acting to counter these threats and are in the process of legislating new regulations both domestically and internationally.
“There must be protocols and standards in place… global standardization of protocols for protection,” he says. “It’s going to take some time. As we said, when new technology comes first, it takes time for the regulation and for the security protocols to come: typically a few quarters and in some cases a few years.”
The US is leading the way in this effort, he says, through institutions such as the Cybersecurity and Infrastructure Security Agency (CISA) and the National Institute of Standards and Technology (NIST).
In Israel, he says, such issues are handled by the National Cyber Directorate.
“People in some cases tend to underestimate these organizations,” he says. “But these organizations know their work very, very well – they see all the threats coming from all different types of verticals.”
AI, he says, actually is a crucial tool for detecting threats given its ability to sort through billions of pieces of data to generate conclusions or insights.
One of the biggest cybersecurity challenges today, he explains, is the “maturity level” of those engaged in protecting our networks, and this is where AI is of enormous assistance.
He gives the example of young security professionals at a bank, fresh out of school, trying to deter threat actors with many years of experience hacking into systems.
“It can help them look at the data much more focused, much more prioritized, much clearer and help them increase their maturity levels so they can protect their assets faster,” he explains. “AI can definitely help me increase my maturity level.”
Nonetheless, he warns, that data could be open to manipulation by threat actors – primarily on a state level – and ultimately, there has to be human interpretation of the information.
“You cannot only rely on AI that is based on statistical models,” he says.
“AI can help you better understand the data, AI can help you better mine the data, bubble up potential threats, prioritize them for you. But eventually the decision requires human intervention.”