How evolving AI regulations impact cybersecurity

By Ram Movva and Aviral Verma

While their business and tech colleagues are busy experimenting and developing new applications, cybersecurity leaders are looking for ways to anticipate and counter new, AI-driven threats.

It’s always been clear that AI impacts cybersecurity, but it’s a two-way street. Where AI is increasingly being used to predict and mitigate attacks, these applications are themselves vulnerable. The same automation, scale, and speed everyone’s excited about are also available to cybercriminals and threat actors. Although far from mainstream yet, malicious use of AI has been growing. From generative adversarial networks to massive botnets and automated DDoS attacks, the potential is there for a new breed of cyberattack that can adapt and learn to evade detection and mitigation.

In this environment, how can we defend AI systems from attack? What forms will offensive AI take? What will the threat actors’ AI models look like? Can we pentest AI—when should we start and why? As businesses and governments expand their AI pipelines, how will we protect the massive volumes of data they depend on?

It’s questions like these that have seen both the US government and the European Union placing cybersecurity front and center as each seeks to develop guidance, rules, and regulations to identify and mitigate a new risk landscape. Not for the first time, there’s a marked difference in approach, but that’s not to say there isn’t overlap.

Let’s take a brief look at what’s involved, before moving on to consider what it all means for cybersecurity leaders and CISOs.

US AI regulatory approach – an overview

Executive Order aside, the United States’ de-centralized approach to AI regulation is underlined by states like California developing their own legal guidelines. As the home of Silicon Valley, California’s decisions are likely to heavily influence how tech companies develop and implement AI, all the way to the data sets used to train applications. While this will absolutely influence everyone involved in developing new technologies and applications, from a purely CISO or cybersecurity leader perspective, it’s important to note that, while the US landscape emphasizes innovation and self-regulation, the overarching approach is risk-based.

The United States’ regulatory landscape emphasizes innovation while also addressing potential risks associated with AI technologies. Regulations focus on promoting responsible AI development and deployment, with an emphasis on industry self-regulation and voluntary compliance.

For CISOs and other cybersecurity leaders, it’s important to note that the Executive Order instructs the National Institute of Standards and Technology (NIST) to develop standards for red team testing of AI systems. There’s also a call for “the most powerful AI systems” to be obliged to undergo penetration testing and share the results with government.

The EU’s AI Act – an overview

The European Union’s more precautionary approach bakes cybersecurity and data privacy in from the get-go, with mandated standards and enforcement mechanisms. Like other EU laws, the AI Act is principle-based: The onus is on organizations to prove compliance through documentation supporting their practices.

For CISOs and other cybersecurity leaders, Article 9.1 has garnered a lot of attention. It states that

High-risk AI systems shall be designed and developed following the principle of security by design and by default. In light of their intended purpose, they should achieve an appropriate level of accuracy, robustness, safety, and cybersecurity, and perform consistently in those respects throughout their life cycle. Compliance with these requirements shall include implementation of state-of-the-art measures, according to the specific market segment or scope of application.

At the most fundamental level, Article 9.1 means that cybersecurity leaders at critical infrastructure and other high-risk organizations will need to conduct AI risk assessments and adhere to cybersecurity standards. Article 15 of the Act covers cybersecurity measures that could be taken to protect, mitigate, and control attacks, including ones that attempt to manipulate training data sets (“data poisoning”) or models. For CISOs, cybersecurity leaders, and AI developers alike, this means that anyone building a high-risk system will have to take cybersecurity implications into account from day one.

EU AI Act vs. US AI regulatory approach – key differences

FeatureEU AI ActUS approachOverall philosophyPrecautionary, risk-basedMarket-driven, innovation-focusedRegulationsSpecific rules for ‘high-risk’ AI, including cybersecurity aspectsBroad principles, sectoral guidelines, focus on self-regulationData privacyGDPR applies, strict user rights and transparencyNo comprehensive federal law, patchwork of state regulationsCybersecurity standardsMandatory technical standards for high-risk AIVoluntary best practices, industry standards encouragedEnforcementFines, bans, and other sanctions for non-complianceAgency investigations, potential trade restrictionsTransparencyExplainability requirements for high-risk AILimited requirements, focus on consumer protectionAccountabilityClear liability framework for harm caused by AIUnclear liability, often falls on users or developers

What AI regulations mean for CISOs and other cybersecurity leaders

Despite the contrasting approaches, both the EU and US advocate for a risk-based approach. And, as we’ve seen with GDPR, there is plenty of scope for alignment as we edge towards collaboration and consensus on global standards.

From a cybersecurity leader’s perspective, it’s clear that regulations and standards for AI are in the early levels of maturity and will almost certainly evolve as we learn more about the technologies and applications. As both the US and EU regulatory approaches underline, cybersecurity and governance regulations are far more mature, not least because the cybersecurity community has already put considerable resources, expertise, and effort into building awareness and knowledge.

The overlap and interdependency between AI and cybersecurity have meant that cybersecurity leaders have been more keenly aware of emerging consequences. After all, many have been using AI and machine learning for malware detection and mitigation, malicious IP blocking, and threat classification. For now, CISOs will be tasked with developing comprehensive AI strategies to ensure privacy, security, and compliance across the business, including steps such as:

Keeping pace with the AI threat landscape

As AI regulations continue to evolve, the only real certainty for now is that both the US and EU will hold pivotal positions in setting the standards. The fast pace of change means we’re certain to see changes to the regulations, principles, and guidelines. Whether its autonomous weapons or self-driving vehicles, cybersecurity will play a central role in how these challenges are addressed.

Both the pace and complexity make it likely that we’ll evolve away from country-specific rules, towards a more global consensus around key challenges and threats. Looking at the US-EU work to date, there is already clear common ground to work from. GDPR (General Data Protection Regulation) showed how the EU’s approach ultimately had a significant influence on laws in other jurisdictions. Alignment of some kind seems inevitable, not least because of the gravity of the challenge.

As with GDPR, it’s more a question of time and collaboration. Again, GDPR proves a useful case history. In that case, cybersecurity was elevated from technical provision to requirement. Security will be an integral requirement in AI applications. In situations where developers or businesses can be held accountable for their products, it is vital that cybersecurity leaders stay up to speed on the architectures and technologies being used in their organizations.

Over the coming months, we’ll see how EU and US regulations impact organizations that are building AI applications and products, and how the emerging AI threat landscape evolves.

Ram Movva is the chairman and chief executive officer of Securin Inc. Aviral Verma leads the Research and Threat Intelligence team at Securin.

Generative AI Insights provides a venue for technology leaders—including vendors and other outside contributors—to explore and discuss the challenges and opportunities of generative artificial intelligence. The selection is wide-ranging, from technology deep dives to case studies to expert opinion, but also subjective, based on our judgment of which topics and treatments will best serve InfoWorld’s technically sophisticated audience. InfoWorld does not accept marketing collateral for publication and reserves the right to edit all contributed content. Contact doug_dineley@foundryco.com.

© Info World