Dwayne McDaniel for GitGuardian

Posted on • Originally published at blog.gitguardian.com

RSA Conference 2024: AI and the Future Of Security

The first week of May saw security practitioners from all over the globe come to the city by the bay to participate in RSA. In 1991, just a handful of security researchers got together for a single panel discussion about DES versus DSS. From those humble beginnings, the event has grown to over 41,000 participants, more than 500 sessions, panels, and trainings, and over a dozen co-located events, each with their own schedules and sessions. It would be impossible to summarize the entire event, from the opening keynotes with leaders like US Deputy Secretary of State Antony J. Blinken and Kevin Mandia to the unforgettable closing celebration with Alicia Keys.

https://www.linkedin.com/posts/ayacodyestrin_rsac-activity-7195930599239335936-GyJE

In the following three sections, I will try to encapsulate a small part of the sessions, the RSA Sandbox experience, and the view from the massive expo floor that makes up the event. 

The Sessions at RSAC

A focus on AI from a governance and tooling perspective

Day one of RSA once again brought the co-located event from Techstrong: DevOps Connect. This year's theme was "Security in an AI Universe." Throughout the day, several speakers from companies such as AWS, Google, and Anthropic highlighted tools that have recently evolved to help make deployment more secure using AI. Other speakers spoke at a higher level about regulatory compliance and where we, as an industry, are headed. Starting things out, though, was a look at how we can shape the future of AI and LLMs from futurist and Hugo and Nebula-winning author David Brin.

The keynote, "Anticipation, Resilience and Reliability: Three Ways AI Will Change DevSecOps ... If we Do it Right," walked us through the dual nature of AI, its potential to elevate humans, and the dystopian consequences if mishandled. Fortunately, we can learn from nature and the evolution of human society to find solutions to keep AI in check, even while AI is very rapidly becoming more sophisticated.

He showed how we have long faced hyper-smart, predatory entities with uncanny language-manipulation skills. We call them lawyers. We protect ourselves from lawyers by hiring our own lawyers to hold the other side accountable. The key to this system is proper credentials that identify each party as a real lawyer. Similarly, with proper human-sponsored IDs, we can let AIs evolve within practical bounds, as humans will still be responsible for the real-world resources they need to consume, like electricity.

Author and Futurist David Brin

How OpenAI is using LLMs

In his session "OpenAI: Scaling Security Programs Using LLMs," Matt Knight, Head of Security at OpenAI, showed us how large language models (LLMs) are rapidly evolving security operations inside his company. He began by looking at how they use AI to summarize complex data and automate mundane tasks. One use case was to help route inbound helpdesk requests, allowing users to basically write "I don't know where to route this, but..." and describe their issue, saving a lot of time over manually performing triage.

Some more advanced cyber defense use cases included using an LLM to decode Russian cybercrime jargon and analyze complex IaC and IAM configurations. It is not all good news, though, as hallucinations still present some real hurdles. There is also the issue of limited prompt length, leading to context limitations and less-than-optimal results. He concluded by noting that OpenAI is continually working on these limitations.
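To make the helpdesk-routing idea concrete, here is a minimal sketch of what such a triage flow might look like. The team names, prompt wording, and the `fake_llm` stand-in are all illustrative assumptions, not OpenAI's actual implementation; in production, the prompt would be sent to a real chat-completion API instead.

```python
# Hypothetical sketch of LLM-based helpdesk triage. All names and the
# fake_llm stand-in are assumptions for illustration only.

TEAMS = ["IT", "Security", "HR", "Facilities"]

def build_routing_prompt(message: str) -> str:
    """Construct a classification prompt asking the model to pick one team."""
    return (
        "You are a helpdesk triage assistant. Classify the request below "
        f"into exactly one of these teams: {', '.join(TEAMS)}.\n\n"
        f"Request: {message}\n\nTeam:"
    )

def fake_llm(prompt: str) -> str:
    """Deterministic stand-in for a real LLM call, so the sketch is runnable."""
    text = prompt.lower()
    if "phishing" in text or "password" in text:
        return "Security"
    return "IT"

def route_request(message: str) -> str:
    """Route a free-form request, falling back to IT on unparseable output."""
    team = fake_llm(build_routing_prompt(message)).strip()
    return team if team in TEAMS else "IT"

print(route_request("I don't know where to route this, but I got a phishing email"))
```

The key design point is the fallback: because LLM output is non-deterministic, the router validates the answer against the known team list before acting on it.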

OpenAI: Scaling Security Programs Using LLMs by Matt Knight

Google's Approach to Securing AI: SAIF Framework

The bad news is that there is little agreement on good LLM governance models across the companies Google surveyed. The good news is that Google has been working on this issue for a while now. In their talk "SAIF from Day One: Google's Approach for Securing AI," Dr. Anton Chuvakin and Taylor Lehmann from Google introduced the Secure AI Framework (SAIF), which emphasizes a proactive approach to AI security.

No matter what your LLM is doing, robust encryption is essential, as inference attacks and model corruption are happening daily. We need to employ monitoring everywhere, as soon as possible, to protect against model theft and other threats. Given the nature of LLMs, you cannot predict the output an AI produces; it is non-deterministic. This makes observability and governance all the more important, as we need to ensure we are not causing harm to users.

SAIF from Day One: Google's Approach for Securing AI

10 steps to better cloud security

The session from Shai Morag, SVP, General Manager Cloud Security at Tenable, "Cloud Security Novice to Native in 10 Steps: A CNAPP Approach," provided exactly what that title promised and more. The reality is that we are trying to secure new attack vectors while at the same time dealing with staffing and skill shortages. 

If we take an end-to-end view, understanding that context is king, a common pattern emerges that shows how we can secure any cloud-based infrastructure. Shai broke the pattern down into three stages: Discover, Manage, and Scale. Without going into detail per item, here are the ten steps to CNAPP Shai shared:

1. Discover: All assets
2. Discover: All relationships
3. Discover: All access to resources
4. Manage Risk: Understand your compliance and enterprise policy needs
5. Manage Risk: List all risks and compliance violations
6. Manage Risk: Prioritize based on full-stack context
7. Manage Risk: Remediate and drive least privilege/Zero Trust
8. Manage Risk: Visualize and deep-dive into findings
9. Scale: Integrate findings with your CI/CD pipeline
10. Scale: Automate findings in DevSecOps workflows
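The "prioritize based on full-stack context" step above is the heart of the CNAPP pattern, so here is a minimal sketch of the idea. The field names and weights are illustrative assumptions, not Tenable's actual scoring model: raw severity gets boosted by contextual signals discovered in the earlier steps, such as internet exposure and data sensitivity.

```python
# Hypothetical sketch of context-aware risk prioritization (step 6 above).
# Field names and weights are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class Finding:
    asset: str
    severity: int           # raw severity, 1 (low) .. 10 (critical)
    internet_exposed: bool  # context from discovering relationships
    has_sensitive_data: bool

def priority(f: Finding) -> int:
    """Boost raw severity with full-stack context signals."""
    score = f.severity
    if f.internet_exposed:
        score += 5
    if f.has_sensitive_data:
        score += 3
    return score

findings = [
    Finding("internal-wiki", severity=8, internet_exposed=False, has_sensitive_data=False),
    Finding("payments-db", severity=6, internet_exposed=True, has_sensitive_data=True),
]

# The lower-severity but exposed, sensitive asset rises to the top of the queue.
for f in sorted(findings, key=priority, reverse=True):
    print(f.asset, priority(f))
```

Note how context reorders the work: a medium-severity finding on an exposed database outranks a high-severity finding on an internal wiki.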

Cloud Security Novice to Native in 10 Steps: A CNAPP Approach from Shai Morag

The state of the CISO in 2024

Nicholas Kakolowski, Senior Research Director at IANS, and Steve Martano, Partner at Artico Search, explored the shifting expectations of the CISO role in their session, "State of the CISO 2024: Doing More With Less." Their survey-based research shows that the scope of responsibility continues to widen, now including third-party risk management, business continuity, and IAM ownership in many orgs. CISOs are increasingly seen as business risk leaders rather than just technical experts. 

If there is a silver lining in their research, it is that security budgets are not actually being cut. Instead, the annual increase slowed from around 12% to roughly 6% year-over-year in 2023. They said companies should do their best to cross-train and hire from within to make the best use of the budget and fill skill gaps. The more you invest in your team and make them feel there is a career path available, the more likely they are to stick around, which is good for everyone.

State of the CISO 2024: Doing More With Less

Tales from the RSA Sandbox

While sessions at RSAC are a great way to learn, many attendees want a more hands-on experience. That is where the Sandbox comes in. Much like the villages at DEF CON, the sandbox features a variety of stages and pods where attendees can interact with various areas of cybersecurity. This year's areas of focus included aerospace, IoT, AI, and AppSec. GitGuardian is proud to be a sponsor of AppSec Village, which is the organization that set up three areas where various groups could run exercises, practical demos, and even a capture-the-flag (CTF).  

Manual code review vs. using the right tool

We were thrilled this year to run a tabletop card-based exercise called "Spot the Secrets: Finding The Valid Secrets Throughout Your Environments." Over the course of four sessions, 64 attendees got to be the first people to experience our simulation of what it is like to solve the issue of secrets sprawl using manual code review. Based on our research, 27% of IT decision-makers say they rely on exactly this approach, and many of the security practitioners who sat down with us at RSA admitted they had never tried it themselves.

Aside from looking through code commits, participants also dug through stacks of Jira tickets, Slack messages, and log files, all while timing themselves to feel what the review is like under time pressure. When they were done, we introduced a tool that showed exactly where secrets existed in the cards and which were valid, meaning usable by an attacker. Every player had at least one false positive or false negative, often more. Given the generous feedback we received, stay tuned for more news about this exercise at future AppSec Villages.
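The gap the exercise exposes, between spotting a secret-shaped string and knowing whether it is actually usable, can be sketched in a few lines. This is a toy illustration, not GitGuardian's detection engine: the regex covers one simplified pattern, and the validity checker is a placeholder for actually probing the provider.

```python
# Hypothetical sketch of pattern-based secret detection plus a validity check.
# The regex and the stubbed validity checker are illustrative assumptions.
import re

# Simplified AWS-style access key pattern; real detectors use hundreds of patterns.
AWS_KEY = re.compile(r"AKIA[0-9A-Z]{16}")

def find_candidate_secrets(text: str) -> list[str]:
    """Surface everything that merely looks like a secret (false positives included)."""
    return AWS_KEY.findall(text)

def is_valid(secret: str) -> bool:
    """Placeholder for checking with the provider whether the credential is live."""
    return secret.endswith("Z")  # toy logic, for the sketch only

commit_diff = "aws_access_key_id = AKIAABCDEFGHIJKLMNOZ"
for s in find_candidate_secrets(commit_diff):
    print(s, "valid" if is_valid(s) else "revoked or false positive")
```

Manual reviewers are effectively running both functions in their heads across commits, tickets, and logs, which is exactly where the false positives and false negatives creep in.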

Spot the secret at AppSec Village at RSAC

The largest security expo on earth

When most people think of RSA, they immediately think of the massive expo floor where vendors set up their booths. Some of these booths feature multi-story construction, actual race cars, and screens that could take up a whole room in most other venues. Others are much more humble and provide just enough space for a few representatives to give quick demos of their tech. And let's not forget about the swag; some attendees come with an extra suitcase just to haul it back home.

While it can be bewildering to navigate the 600 exhibitors present on the main floor, it is one of the fastest ways to get up to date on the trends in the market and the latest products from industry leaders. Very importantly, it is a place where customers can meet and interact with the teams behind their chosen tech, putting a face to a brand and getting connected with the folks you might never otherwise get to meet in person. Feedback is heard loud and clear without a screen to get in the way.

The buzz at GitGuardian's booth

Of course, GitGuardian was one of the vendors on the expo floor. We met hundreds of folks over the course of the conference and got to catch up with so many familiar faces, sometimes for the first time in person. A good number of folks were drawn in by our eye-catching stickers and superhero-themed shirts, but many people came to the booth with questions in hand, and the team was glad to answer them all.

This was the first time we were able to talk about our latest offering, GitGuardian Software Composition Analysis (SCA). People were impressed by how efficiently the tool creates SBOMs and by how effortlessly it exposes all the licenses across repos, with the ability to filter by license type, such as copyleft. Many were also blown away by the developer experience on the command line and the speed of the solution. We are very proud to have had the chance to show it off along with our other advancements since the last RSA Conference.

The GitGuardian Team at the booth 

RSA is all about the people

There is so much else that could be said about RSA, like the amazing parties and social events, learning labs, or the "Birds of a Feather" sessions. Beyond all of those parts, RSA Conference 2024 was about coming together as humans to try to get a handle on security. Securing our organizations and ourselves for life in an AI-driven world is going to take a group effort, where we can learn from each other and share ideas. While we might not ever get to the goal of perfectly safe environments and tools, we can take some comfort in knowing that so many people are also on this same journey. We can't wait until next year to see many of the new friends we made along the way.
