What Is the ‘Most Pressing Concern’ for Cyber Professionals?


Generative AI was top of mind at the ISC2 Security Congress conference in Las Vegas in October 2024. How much will generative AI change what attackers — and defenders — can do?

Alex Stamos, CISO at SentinelOne and professor of computer science at Stanford University, sat down with TechRepublic to discuss today’s most pressing cybersecurity concerns and how AI can both help and thwart attackers. Plus, learn how to take full advantage of Cybersecurity Awareness Month.

This interview has been edited for length and clarity.

When small or medium businesses face large attackers

TechRepublic: What is the most pressing concern for cybersecurity professionals today?

Stamos: I’d say the vast majority of organizations are just not equipped to deal with whatever level of adversary they’re facing. If you’re a small to medium business, you’re facing a financially motivated adversary that has learned from attacking large enterprises. They practice breaking into companies every single day, and they have gotten quite good at it.

So, by the time they break into your 200-person architecture firm or your small regional hospital, they’re extremely good. And in the security industry, we have not done a good job of building security products that can be deployed by small regional hospitals.

That mismatch between the skill sets you can hire and build and the adversaries you’re facing exists at almost every level, including the large enterprise. You can build good teams, but building them at the scale necessary to defend against really high-end adversaries such as the Russian SVR [Foreign Intelligence Service] or the Chinese PLA [People’s Liberation Army] and MSS [Ministry of State Security], the kinds of adversaries you’re facing if you’re dealing with a geopolitical threat, is extremely hard. And so at every level you’ve got some kind of mismatch.

Defenders have the advantage in terms of generative AI use

TechRepublic: Is generative AI a game changer in terms of empowering adversaries?

Stamos: Right now, AI has been a net positive for defenders because defenders have spent the money to do the R&D. One of the founding ideas of SentinelOne was to use machine learning, what we used to call AI, to do detection instead of signature-based [detection]. We use generative AI to create efficiencies within SOCs [security operations centers], so you don’t have to be highly trained in using our console to ask basic questions like “show me all the computers that downloaded a new piece of software in the last 24 hours.” Instead of having to come up with a complex query, you can ask that in English. So defenders are seeing the advantages first.
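The natural-language-to-query workflow Stamos describes is easy to picture. Below is a minimal sketch of the idea, assuming an OpenAI-style chat API; the model name, system prompt, and toy query fields are illustrative assumptions, not SentinelOne’s actual product interface.

```python
# Sketch: translate an analyst's plain-English question into a structured
# hunting query with an LLM. The query "language" here is a made-up
# illustration, not any real EDR console's syntax.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SYSTEM_PROMPT = (
    "Translate the analyst's question into an EDR hunting query. "
    "Use only the fields event_type, process_name, and timestamp, "
    "and respond with the query alone."
)

def english_to_query(question: str) -> str:
    """Return a hunting query for a natural-language question."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content.strip()

# english_to_query("show me all the computers that downloaded a new
# piece of software in the last 24 hours") might return something like:
#   event_type = "file_download" AND timestamp >= NOW() - 24h
```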

The attackers are starting to adopt it but have not yet gained all the advantages, which is, I think, the scarier part. So far, most of the outputs of GenAI are meant for human beings to read. The trick with GenAI is that for large language models, or for diffusion models that generate images, the space of outputs you will accept as legitimate English text is effectively infinite. The space of exploits that a CPU will actually execute is extremely constrained.

SEE: IT managers in the UK are looking for professionals with AI skills.

One of the things that GenAI struggles with is structured output. That said, structured inputs and outputs are one of the most intense areas of research focus. There are all kinds of legitimate, good purposes AI could be used for if better constraints were placed on its outputs and if it were better at structured inputs and outputs.
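As a concrete illustration of what “constraints on the outputs” can mean in practice, here is a minimal, hypothetical sketch: a wrapper that accepts a model’s answer only if it parses as JSON and matches an expected shape, retrying otherwise. The generate callable stands in for any LLM call, and the schema is invented for the example.

```python
# Sketch of the "constrain the output" idea: reject anything that is not
# valid JSON with exactly the expected keys. Purely illustrative.
import json

EXPECTED_KEYS = {"severity", "action", "target"}  # invented schema

def constrained_generate(generate, prompt: str, max_tries: int = 3) -> dict:
    """Retry until the model emits JSON matching the expected key set."""
    for _ in range(max_tries):
        raw = generate(prompt)  # generate() is a stand-in for any LLM call
        try:
            parsed = json.loads(raw)
        except json.JSONDecodeError:
            continue  # not valid JSON at all; ask again
        if isinstance(parsed, dict) and set(parsed) == EXPECTED_KEYS:
            return parsed  # structurally valid output
    raise ValueError("model never produced schema-conforming output")
```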

Right now, GenAI is really just used for phishing lures, or for making negotiations easier in languages that ransomware actors don’t speak … I think the real concern is when AI gets really good at writing exploit code: when you can drop a new bug into an AI system and it writes exploit code that works on fully patched Windows 11 24H2.

The skills necessary to write that code right now only belong to a couple hundred human beings. If you could encode that into a GenAI model and that could be used by 10,000 or 50,000 offensive security engineers, that is a huge step change in offensive capabilities.

TechRepublic: What kind of risks can be introduced from using generative AI in cybersecurity? How could those risks be mitigated or minimized?

Stamos: Where you’re going to have to be careful is in hyper automation and orchestration. Using [AI] in situations where it’s still supervised by humans is not that risky. If I’m using AI to create a query for myself and then I look at the output of that query, that’s no big deal. If I’m asking AI to “go find all of the machines that meet these criteria and then isolate them,” that starts to be scarier, because you can create situations where it makes mistakes, and if it has the power to make decisions autonomously, that can get very risky. But I think people are well aware of that. Human SOC analysts make mistakes, too.
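The supervision boundary Stamos describes can be made explicit in code. Below is a minimal sketch of a human-in-the-loop gate: the AI proposes isolations, but nothing happens without an analyst’s explicit approval. The isolate_host function is hypothetical, standing in for whatever isolation API an EDR platform exposes.

```python
# Sketch: AI-proposed containment actions run only after human approval.
def isolate_host(hostname: str) -> None:
    """Hypothetical stand-in for an EDR platform's isolation API."""
    print(f"isolating {hostname} from the network")

def apply_with_approval(proposed_hosts: list[str]) -> None:
    """Show the AI's proposed isolations; act only on explicit consent."""
    print("AI proposes isolating:", ", ".join(proposed_hosts))
    if input("Approve? [y/N] ").strip().lower() == "y":
        for host in proposed_hosts:
            isolate_host(host)
    else:
        print("No action taken; leaving the decision with the analyst.")
```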

How to make cybersecurity awareness fun

TechRepublic: With October being Cybersecurity Awareness Month, do you have any suggestions for how to create awareness activities that really work to change employees’ behavior?

Stamos: Cybersecurity Awareness Month is one of the only times you should do phishing exercises. Teams that run phishing exercises all year build a negative relationship between the security team and everyone else. What I like to do during Cybersecurity Awareness Month is make it fun, gamify it, and have prizes at the end.

I think we actually did a really good job of this at Facebook; we called it Hacktober. We had prizes, games, and T-shirts. We had two leaderboards, a tech one and a non-tech one. You could expect the tech folks to go find bugs, but everybody could participate on the non-tech side.

If you caught our phishing emails or completed our quizzes, you could participate and win prizes.

So, one: gamify it a bit and make it fun, because a lot of this stuff ends up feeling punitive and tricky, and that’s just not a good place for security teams to be.

Second, I think security teams just need to be honest with people about the threat we’re facing and that we’re all in this together.

Disclaimer: ISC2 paid for my airfare, accommodations, and some meals for the ISC2 Security Congress event held Oct. 13–16 in Las Vegas.



