With all the recent buzz around Gen AI, we see AI everywhere. Every company is trying to bring Gen AI to the table, even in the infrastructure scene, in the creation of apps or software. Companies are looking for ways to help their employees become more efficient. And today, the same is happening in the domain of cybersecurity.
The human factor: a bottleneck in security
A persistent challenge in the domain of security has always been the human factor: a shortage of personnel to fill vital security roles. This issue has persisted over the years, dating back to the emergence of cybersecurity. By introducing Generative AI, the idea is to let experts focus on critical responsibilities while AI streamlines routine tasks, reducing the manpower required for those simpler tasks. In this way, AI serves as a valuable addition that can greatly assist numerous companies in addressing their talent shortages.
Google’s efforts to integrate AI into its security tools
The question is: how is Google introducing AI into its security stack? For that, it has created an LLM called SecPaLM. SecPaLM will integrate seamlessly with the security stack and spark a significant transformation across its various tools.
The integration of this model is now being extended to Google’s security tools, including Chronicle, Mandiant, VirusTotal, and Security Command Center. This development will create a more streamlined experience for security analysts and engineers, who will find their tasks simplified. For example, understanding ongoing security events will be easier, as AI will swiftly create summaries for quick issue identification, and it will aid in issue resolution by pinpointing and addressing problems. Chronicle, in particular, will enable users to ask about events, their occurrence, their impact on users, and whether they indicate an issue. Notably, this innovation extends further by assisting analysts in rule-writing, a skill that typically requires time and mastery of a query language. With this enhancement, the learning curve is far less steep.
Furthermore, cloud engineers without a background in security stand to benefit as well. Those not specialising in security will be empowered to conduct investigations and formulate rules, even without an in-depth understanding of security intricacies. Whether someone works with data or uses tools like Security Command Center, Chronicle, or Mandiant, this advancement democratises access to security knowledge and capabilities for all.
So how can you use these Gen AI integrations? The best part is that you won’t need to take any extra steps; it will happen effortlessly. When it comes to personalised integrations using your own data, there will most probably be an integration through APIs (to be confirmed).
Here’s an overview of Google’s main security stack and how they’ll use Gen AI:
Security Command Center
The tool helps you monitor and investigate your Google Cloud Platform (GCP) environment. It enables you to discover any security flaws.
This is how it will use Gen AI: Generation of findings summaries: In cases where a security finding is identified, such as an open firewall, Gen AI will automatically generate a concise summary detailing the potential impact on your infrastructure, the impacted resources, and suggested remedial actions.
Graphical representation of attack paths: Additionally, a complementary feature provided by Gen AI depicts potential attack paths in a visual format. This is particularly valuable for people new to the Google Cloud environment. The graphical representation vividly illustrates the plausible outcomes of a potential attack, even if the entry point was just an open firewall configuration.
Chronicle
This tool is a repository of security telemetry that helps you correlate data between your different security tools and create use cases and alerts to monitor and detect threats across your whole environment.
This is how it will use Gen AI: Your investigations will go significantly faster through the use of Gen AI. This involves integrating a large language model (LLM) directly into Chronicle, with an interface similar to ChatGPT or Bard. You will engage in interactive conversations with the tool to request information. This isn’t a one-time exchange: you can maintain ongoing dialogues, enabling deeper insights into issues. Behind the scenes, Chronicle will autonomously generate the queries on your behalf, so there will be no requirement for in-depth knowledge of the Chronicle product.

Another capability is rule creation. In Chronicle, as in any similar tool, you need to write detection rules. With Gen AI, you describe the rule you want in plain words, and Chronicle translates it into the right detection language (YARA-L). This turns hours or days of rule-writing into seconds, saves a great deal of time, and opens up security work to everyone: even without deep security knowledge, you can simply type what you have in mind and the rule is generated for you.
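To make this concrete: a plain-language request such as “alert me when PowerShell is used to download something” would need to become a YARA-L rule. The sketch below is hand-written for illustration, not actual Gen AI output, and the rule name, severity, and regex patterns are our own assumptions; the field names follow Chronicle’s Unified Data Model.

```
rule powershell_download_sketch {
  meta:
    // Illustrative metadata; values are assumptions for this example
    description = "Sketch: flag PowerShell processes that download content"
    severity = "Medium"

  events:
    // Match process-launch events whose command line invokes PowerShell
    // together with a common download primitive
    $proc.metadata.event_type = "PROCESS_LAUNCH"
    $proc.principal.process.command_line = /powershell/ nocase
    $proc.principal.process.command_line = /downloadstring|invoke-webrequest/ nocase

  condition:
    $proc
}
```

Writing even a small rule like this by hand requires knowing the YARA-L syntax and the Unified Data Model field paths, which is exactly the learning curve the Gen AI translation is meant to remove.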
Chronicle SOAR
Chronicle SOAR automates how you react to security problems. Work together with your team to come up with fixes and other solutions, as well as keep an eye on your system to spot any issues.
This is how it will use Gen AI: Just like in Security Command Center, you’ll get an AI-made summary that explains the entire incident. Chronicle alerts are grouped into cases, which represent potential security issues. These summaries will give you more details about what happened and how the various alerts connect. Armed with this information, you can craft more effective responses and remediate issues faster.
Mandiant
Mandiant is a leading platform in threat response and a major player in threat intelligence.
This is how it will use Gen AI: Using Gen AI capabilities, Mandiant has introduced a new feature that connects with Chronicle. This feature automatically alerts you if there’s an ongoing security breach. The tool provides you with a summary and helps you understand the context of the situation.
VirusTotal
VirusTotal analyses suspicious files, domains, IP addresses and URLs to detect malware and other breaches, and automatically shares the results with the security team.
This is how it will use Gen AI: Introducing a new feature called Code Insight. In the past, when using VirusTotal, you would check whether an IP address or file was harmful. Now it goes further: you can input a section of code, and it will explain how that code affects security. This is very helpful if you’re trying to understand what various parts of code do and whether they would be malicious to your environment or not.
Why will Google Cloud be a leading player in this space?
Many major security companies are incorporating Gen AI into their tools. However, the issue they face is that they lack the extensive data that Google possesses.
It’s important to highlight Google’s potential impact and its significant role in the field. As you are aware, Google stands as the largest search engine worldwide. This position allows it to accumulate vast amounts of information, both benign and malicious. From an intelligence perspective, Google possesses enormous volumes of data relating to malware, threat actors, and more. This data is readily available without any specific effort, thanks to the widespread use of the Google search engine. This means you can tap into the wealth of information from Google’s searches and overall internet activity.
By integrating VirusTotal, the largest repository of threats globally, and partnering with Mandiant, a prominent player in the field, you gain access to this collection of intelligence. This collaboration holds immense significance when considering the question of who holds the leading position in the emerging AI security and threat intelligence landscape.
Data Privacy and Gen AI
Another significant aspect to consider is people’s persistent concerns about safeguarding their personal data for privacy reasons. Within the domain of security tooling and the AI workbench, the same privacy principles that apply to the Google Cloud Platform (GCP) will also be upheld. This means that you will be able to have your own SecPaLM, allowing you to use your data within this framework for a personalised user experience.
When it comes to managing data, envision a scenario where you are a major financial institution. You might have various tailored tools storing sensitive details such as credit card information and personal data. With SecPaLM, you’ll have the capability to use this information within your own secure environment, without sharing it with others. Naturally, keeping confidential information confidential is crucial.
Conclusion: Gen AI will enable security teams to work efficiently, with deeper and faster insights
Incorporating Gen AI into cybersecurity, driven by Google’s SecPaLM model, will definitely make an impact on security teams by enhancing efficiency, enabling deeper – and faster – insights, and addressing talent shortages.
From streamlined investigations to intuitive rule creation, Gen AI will empower both specialists and non-security experts. Google’s extensive data resources and continued focus on data privacy position it to ensure you can leverage your data securely for a personalised user experience in this transformative era of AI-enhanced protection.