Google announces Google Cloud Security AI Workbench

Google Cloud has announced the Google Cloud Security AI Workbench, which uses generative AI models to gain deeper insight into the threat landscape, following in Microsoft's footsteps.

The Security AI Workbench is powered by Sec-PaLM, a large language model (LLM) fine-tuned specifically for cybersecurity use cases.

The goal is to apply the latest advances in AI to augment point-in-time incident analysis, threat detection, and analytics, helping to counter and prevent new infections by delivering trustworthy, relevant, and actionable intelligence.

To that end, the Security AI Workbench includes a slew of new AI-powered tools, such as VirusTotal Code Insight and Mandiant Breach Analytics for Chronicle, which respectively analyse potentially malicious scripts and alert customers to active breaches in their environments. Mandiant Threat Intelligence AI, built on top of Mandiant's huge threat graph, will also use Sec-PaLM to swiftly identify, summarise, and act on threats relevant to each organisation.

This announcement comes just days after Google announced the formation of a new unit dubbed Google DeepMind, which brings together DeepMind’s AI research groups with Google Research’s Brain team to “build more capable systems more safely and responsibly.”

In a press interview, Eric Doerr, VP of Engineering at Google Cloud, said, “Imagine a world where you know, as you’re generating your infrastructure, there’s an auto-generated security policy, security control, or security config that goes along with that. That’s one example of something we’re working on that we think will be transformative in the world of security operations and security administration.”
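
To make that idea concrete, below is a minimal sketch of what a security policy generated alongside an infrastructure declaration could look like. The resource and policy fields, and the generate_policy helper, are invented for illustration; this is a conceptual example, not Google Cloud's actual tooling.

```python
# Illustrative only: a hypothetical generator that pairs a declared piece of
# infrastructure with a least-privilege access policy. The resource and policy
# shapes are invented for this sketch and do not correspond to any real
# Google Cloud API.

from typing import Any, Dict


def generate_policy(resource: Dict[str, Any]) -> Dict[str, Any]:
    """Derive a restrictive default policy from an infrastructure declaration."""
    policy = {
        "resource": resource["name"],
        "public_access": False,                             # deny public access unless declared
        "encryption": "managed-key",                        # encrypt at rest by default
        "allowed_principals": resource.get("owners", []),   # owners only
    }
    # Tighten further for anything flagged as holding sensitive data.
    if resource.get("data_classification") == "sensitive":
        policy["logging"] = "data-access"                   # audit every read/write
        policy["allowed_principals"] = resource.get("owners", [])[:1]
    return policy


if __name__ == "__main__":
    bucket = {
        "name": "billing-exports",
        "type": "object-storage",
        "owners": ["finance-admins@example.com", "sre@example.com"],
        "data_classification": "sensitive",
    }
    print(generate_policy(bucket))
```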

Google is also making use of machine-learning models 

VirusTotal’s Code Insight tool, meanwhile, is designed to produce natural-language summaries of code snippets to help identify and mitigate potential risks. It can also be used to reduce false negatives and false positives.

Another major feature is Security Command Center AI, which uses Sec-PaLM to give operators “near-instant analysis of findings and possible attack paths,” along with the impacted assets and recommended mitigations.
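
Conceptually, attack-path findings of this kind can be modelled as paths through a graph of assets and reachability edges. The sketch below uses an invented five-node graph and a simple breadth-first search purely to illustrate the idea; Security Command Center's actual analysis is far more sophisticated.

```python
# Conceptual illustration only: an "attack path" as a route through a graph of
# assets and attacker-relevant reachability edges. The graph is invented.

from collections import deque

edges = {
    "internet": ["load-balancer"],
    "load-balancer": ["web-vm"],
    "web-vm": ["app-service-account"],
    "app-service-account": ["customer-db"],
}


def attack_paths(graph, source, target):
    """Enumerate simple paths from source to target with breadth-first search."""
    paths, queue = [], deque([[source]])
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == target:
            paths.append(path)
            continue
        for nxt in graph.get(node, []):
            if nxt not in path:  # avoid revisiting assets (no cycles)
                queue.append(path + [nxt])
    return paths


for p in attack_paths(edges, "internet", "customer-db"):
    print(" -> ".join(p))
```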

Google is also using machine learning models to detect and respond to API abuse and business logic attacks, in which an adversary weaponizes a legitimate capability to achieve a malicious goal without triggering a security alert.
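
As a simplified illustration of the underlying idea (and not Google's production models), even a baseline-and-threshold check on a client's API call rate can flag abuse of a perfectly legitimate endpoint:

```python
# Simplified illustration: flag a client whose call rate to a legitimate API
# endpoint drifts far above its historical baseline, the kind of signal used
# to catch API abuse and business logic attacks.

from statistics import mean, stdev


def is_anomalous(history, current, threshold=3.0):
    """Return True if `current` sits more than `threshold` standard deviations above the baseline."""
    if len(history) < 2:
        return False  # not enough data to establish a baseline
    baseline, spread = mean(history), stdev(history)
    if spread == 0:
        return current > baseline
    return (current - baseline) / spread > threshold


if __name__ == "__main__":
    # Hourly request counts for one client against a coupon-redemption endpoint.
    normal_hours = [42, 38, 51, 47, 40, 44, 39, 50]
    print(is_anomalous(normal_hours, 45))    # False: within normal variation
    print(is_anomalous(normal_hours, 900))   # True: legitimate feature, abusive volume
```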

“With our AI advances, we can now combine world-class threat intelligence with point-in-time incident analysis and novel AI-based detections and analytics to help prevent new infections,” Sunil Potti, vice president and general manager of Google Cloud Security, said. “These advances are critical in helping to counter a potential surge in adversarial attacks that use machine learning and generative AI systems.”

Generative AI is gaining traction among security firms

The announcement also comes at a time when more firms are beginning to explore defensive generative AI use cases as part of a market that MarketsandMarkets projects will be worth $51.8 billion by 2028.

Microsoft is a major competitor exploring defensive generative AI use cases with Security Copilot, an AI assistant that analyses threat signals and generates a written summary of potential breach activity using GPT-4 and Microsoft’s proprietary data.

Other firms, such as cloud security provider Orca Security and Kubernetes security supplier ARMO, have been experimenting with integrations that leverage generative AI to automate SOC processes.

SentinelOne, one such company, also introduced an LLM-based security solution at RSAC today that uses models such as GPT-4 to accelerate human-led threat-hunting investigations and orchestrate automated responses.

Doerr, however, claims that Google Cloud’s data sets it apart from other generative AI-powered security solutions.

“I believe we have an unprecedented amount of security data to train the model to speak security very well,” Doerr said, referring to data collected throughout the Google product ecosystem via Mandiant threat intelligence, Chrome, Gmail, and YouTube.

Furthermore, Doerr emphasizes that Google Cloud clients will be able to use the LLM out of the box or plug in their own data to refine the model.
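
What "plugging in your own data" typically looks like in LLM tuning workflows is a file of prompt/completion pairs built from internal records such as incident tickets. The JSONL format and field names below are common-practice assumptions for illustration only, not a documented Sec-PaLM or Security AI Workbench interface.

```python
# A hedged sketch of preparing customer data for model refinement: many LLM
# tuning workflows accept a JSONL file of prompt/completion pairs. The fields
# and the idea of sourcing them from incident tickets are assumptions for
# illustration, not a documented Sec-PaLM interface.

import json

incident_tickets = [
    {
        "summary": "Phishing email with credential-harvesting link reported by finance team",
        "resolution": "Blocked sender domain, reset affected credentials, notified users",
    },
    {
        "summary": "Service account key committed to a public repository",
        "resolution": "Revoked key, rotated secrets, enabled secret scanning on the org",
    },
]

with open("tuning_data.jsonl", "w", encoding="utf-8") as f:
    for ticket in incident_tickets:
        record = {
            "prompt": f"Recommend a response for this incident: {ticket['summary']}",
            "completion": ticket["resolution"],
        }
        f.write(json.dumps(record) + "\n")
```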