What The US’s Foggy AI Regulations Mean For Today’s Cyber Compliance

Brought to you by Renelis Mulyandari    

Tech leaders are uneasy that the US government, with its laissez-faire approach to regulation, is failing to take the initiative in the artificial intelligence arms race.

Whereas jurisdictions like the EU and China are busy introducing more robust rules for AI development and stiff penalties for those who breach them, the US seems more inclined to let AI developers do their own thing, and it’s a growing concern for the vast majority of businesses.

Evidence of this comes from a recent Harris Poll survey in collaboration with Collibra, which shows an alarming lack of trust in the US government’s attitude towards AI regulation. A staggering 99% of the data management, privacy and AI specialists surveyed in the poll said they’re concerned about potential threats arising from AI that necessitate regulation.

“Without regulations, the US will lose the AI race long term,” Collibra’s co-founder and Chief Executive Felix Van de Maele said. “While AI innovation continues to advance rapidly, the lack of a regulatory framework puts content owners at risk, and ultimately will hinder the adoption of AI.”

According to the study, 84% of respondents would like to see the US government update its copyright laws to protect content creators from having their work stolen by AI, while 81% want to see laws in place that force AI companies to compensate individuals for using their data to train their AI algorithms.

But it’s not just data privacy and copyright protection at issue here, with 64% of survey respondents also citing the need for AI regulation to prevent security risks and increase safety. For instance, AI can be used to create and manage massive botnets that carry out automated distributed denial-of-service attacks or sophisticated fraud campaigns. AI could also usher in a new breed of malware and ransomware able to evolve on the fly to evade detection and mitigation. There have also been reported incidents of AI chatbots leaking sensitive data.

Too Much Emphasis On Innovation

To date, the US government’s response to the demand for AI regulation has been less than reassuring, reflecting the country’s long-held emphasis on innovation, which traditionally comes at the expense of rigid rules and frameworks.

For one thing, the regulatory requirements today are confusing, with various competing initiatives announced, including President Joe Biden’s Executive Order on the “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” the Office of Science and Technology Policy’s “Blueprint for an AI Bill of Rights,” and the National Institute of Standards and Technology’s “Artificial Intelligence Risk Management Framework.” While these initiatives present different types of guidelines, one common theme is that they’re all focused on the responsible development of AI, emphasizing self-regulation and voluntary compliance.

Arik Solomon, co-founder and CEO of the cyber risk and compliance automation company Cypago, argues that the US needs to strike a balance: giving companies enough room to innovate while putting concrete rules in place so that everyone is protected and knows what controls are needed to remain compliant over time.

“Regulating AI is both necessary and inevitable to ensure ethical and responsible use,” Solomon told MinuteHack. “While this may introduce complexities, it need not hinder innovation. By integrating compliance into their internal frameworks and developing policies and processes aligned with regulatory principles, companies in regulated industries can continue to grow and innovate effectively.”

But the US has so far struggled to strike this tricky balance. In practice, it is devolving the issue to individual states, which only adds further confusion, as evidenced by California’s own AI bill, which was the subject of intense debate. The bill initially proposed a heavier-handed approach to AI regulation, but it faced fierce opposition from big companies like Meta Platforms and Google, which denounced it for “stifling innovation,” and was eventually vetoed by Governor Gavin Newsom in September.

The US approach is in stark contrast to the path carved out by the governments of the EU and China, which have laid out clear-cut, binding rules in their own policies, complete with stiff penalties to enforce them. The EU’s AI Act is more focused on ensuring transparency and protections for users and content creators, while China has announced several policies geared towards ensuring the state has robust control over both the data and the AI models that arise from it.

For instance, the AI Act clearly outlines four major risk categories in AI – namely, minimal risk, limited risk, high risk and unacceptable risk. It assigns various AI applications to each category, with increasing obligations and prohibitions as the risk rises. A generative AI chatbot like ChatGPT is deemed limited risk and subject chiefly to transparency requirements, while a system designed for subliminal manipulation to sway elections is deemed unacceptable and banned outright.

Going It Alone

In light of AI’s staggering pace of development and the lack of any real regulation in the industry, US companies have little option but to try to define their own regulatory standards. As a starting point, businesses need to think about compliance. They can look at existing frameworks that regulate and govern AI development, and use these as the basis of their own AI governance, ensuring that they’ll be more or less in line with global standards.

An example of this kind of framework might be the November 2023 Bletchley Declaration, which was agreed upon by 28 countries and the EU during the first-ever global summit on AI safety. Signatories included the US, China, Australia, Germany and the UK.

In a nutshell, the Bletchley Declaration aims to balance the need for innovation with the implementation of guardrails to mitigate the risks posed by AI, and it provides a solid roadmap for US businesses to follow.

Striking A Balance

To strike a balance between AI innovation and safety, compliance provides a good starting point. Compliance is key to cybersecurity, and it can form a strong barrier against AI-based threats. Traditional governance frameworks provide structured guidelines that allow companies to align security practices with their business objectives. As such, they can form a custom roadmap for AI regulation, enabling companies to identify threats and create strategies to mitigate them.

“AI can't function as a black box when compliance is involved,” noted Kannan Venkatraman, GenAI Services Exec and CTO at Capgemini. “At several organizations, I developed governance frameworks that foster communication across departments, regularly auditing AI’s outputs to ensure alignment with privacy and compliance policies. Finance and HR teams now co-design AI systems with legal and compliance experts, ensuring transparency and traceability.”

Compliance can be combined with a basic set of guiding principles – prioritizing fairness, accountability, privacy and transparency – that inform all decisions regarding AI development.

At the same time, companies need to focus on implementing processes and tools to detect and mitigate AI bias, plus regular audits to ensure their systems are not discriminating against certain groups, vulnerable to misuse, or leaking sensitive data.
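One common form such a bias audit can take is a demographic parity check, which compares an AI system’s rate of positive decisions across groups. A minimal sketch is shown below; the function name, the dict-based input format, and the 80% threshold (borrowed from the informal “four-fifths rule” used in US employment contexts) are illustrative assumptions, not requirements of any specific regulation.

```python
# Hypothetical audit helper: flags an AI system whose positive-decision
# rates differ too much across demographic groups (demographic parity).
# The 0.8 threshold is an illustrative assumption, not a legal standard.

def demographic_parity_audit(decisions, threshold=0.8):
    """decisions: dict mapping group name -> list of 0/1 outcomes.
    Returns (passed, ratios), where each ratio compares a group's
    positive rate to that of the highest-rated group."""
    rates = {g: sum(d) / len(d) for g, d in decisions.items() if d}
    top = max(rates.values())
    ratios = {g: r / top for g, r in rates.items()}
    passed = all(r >= threshold for r in ratios.values())
    return passed, ratios

# Example: group B is approved far less often than group A,
# so the audit flags the system for human review.
outcomes = {
    "group_a": [1, 1, 1, 0, 1, 1, 1, 1],  # 87.5% positive rate
    "group_b": [1, 0, 0, 0, 1, 0, 0, 0],  # 25% positive rate
}
passed, ratios = demographic_parity_audit(outcomes)
print(passed)  # False: 0.25 / 0.875 is well below the 0.8 threshold
```

Run regularly against production decision logs, a check like this gives the audit paper trail regulators increasingly expect, without requiring any change to the underlying model.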

In addition, organizations must emphasize a user-centric design for AI that respects users’ privacy and personal preferences, while communicating to individuals what data will be collected and how that information will be used. AI teams should also adopt a flexible, adaptive approach that allows them to adjust to evolving ethical standards and technological advances.

Finally, businesses must collaborate with regulatory bodies and other organizations to ensure they remain up to date with evolving AI regulations. By doing this, they’ll have the opportunity to actively participate in the conversation and play a role in the creation of regulations governing AI development.

Responsible AI Wins the Race

By taking the initiative on responsible AI development and adopting a flexible, transparent and user-centric approach, companies will be able to benefit from the incredible pace of innovation while staying on the right side of any regulatory requirements.

They’ll be able to minimize AI security risks, protect users and content creators, and encourage the development of responsible and trusted AI systems without sacrificing their ability to innovate.

Image: Greggory DiSalvo
