Exploring How Generative AI Is Contributing To Cybersecurity Threats & Risks

Generative AI applications like ChatGPT, Brad, Stable, Diffusion, and Bing have captured unprecedented attention and popularity from all across the globe. These innovative tools can perform routine tasks like data classification, improving customer service, and providing real-time data analysis. But, they make headlines due to their ability to write text, create images, and other digital art forms.

Generative AI technology is growing and progressing rapidly without any chance of slowing down soon. ChatGPT is the fastest-growing AI tool released in November 2022, and four months later, OpenAI released a much improved and larger language model (LLM) popularly known as ChatGPT-4. The latest data reveals that ChatGPT has over 100 million users, while the platform crossed 1 million in just five days of its launch. This popularity is also extended to other platforms; in May 2023, Google announced new features for generative AI like Search Generative Experience and another LLM called PaLM-2.

 Amidst the growing popularity, it's also increasing the threat landscape as organizations see many threats and risks that come into fulfillment. Therefore, it's crucial to understand the AI-based risks and develop effective measures to mitigate the potential damages. 

The Darker Implications of Generative AI

Despite all the advances, AI technology has enhanced the capabilities of cybercriminals and enables them to launch more sophisticated and targeted attacks. Bad actors significantly misuse ChatGPT to improve phishing attacks or create polymorphic malware. The famous Blackberry Software company has shared examples of compromised business messages and phishing hooks that ChatGPT can craft. However, OpenAI has effectively put in measures to prevent it from responding to such requests.

The discovery of a fake ChatGPT Chrome browser extension that hijacked Facebook accounts and created rogue admin accounts is another example of how hackers can exploit ChatGPT to spread malware and misinformation. In addition, various generative AI tools, including ChatGPT, are vulnerable to prompt injection attacks. In this technique, attackers craft prompts that trick the model into performing tasks like writing malicious codes, providing inaccurate outputs, or creating phishing websites. 

Another significant issue around generative AI is that any information entered into ChatGPT becomes part of its training dataset. Though it sounds helpful, it isn't because if cybercriminals get hold of such data, they misuse it to fulfill their malicious intentions. For example, Samsung engineers entered the company's top-secret data, like proprietary code and internal meeting notes relating to their hardware, resulting in the leaking of sensitive data.  

Moreover, generative AI tools are much more affordable, and therefore, malicious actors can easily use them to spread misinformation and propaganda. The AI tool's impressive search capabilities help bad actors create convincing content that they can use to manipulate public opinion or cause significant harm. One may not forget the incident when the OpenAI tool ChatGPT defamed a mayor in Australia as being jailed for bribery when he was the whistleblower in a case.  

Defending Against Generative-AI Threats

Organizations need a proactive approach to defend against generative AI threats. This includes practicing robust security measures such as using unique passwords, beware of phishing and unsolicited communications, updating the software, and enabling phishing-resistant MFA instead of only MFA. Besides this, other safety measures that prove helpful to businesses are as follows:

  • Security Assessments and Audits: Regular security assessments and audits help identify potential vulnerabilities and security gaps within the generative AI models. Doing so allows organizations to address the issues before the bad actors exploit them.
  • Data Privacy and Protection: Prioritize data privacy and protection by implementing access controls and strong encryption via a VPN to protect sensitive information that the employees entered in generative AI models.
  • Model Validation: Enterprises must implement adversarial training techniques to recognize and eliminate the ethical concerns around generative AI models.
  • Ethical Considerations: Organizations must be mindful of the ethical implications of generative AI. They should establish clear guidelines regarding the ethical use of AI technology and keep a check on employees to ensure the guidelines are followed.
  • Cyber Awareness and Education: Cybersecurity is a shared responsibility of all individuals across the organization, and therefore, they need to educate the employees about the potential dangers of generative AI. Also, they must teach them to detect and respond to threats to mitigate risks.

Staying updated about the latest developments in generative AI technology and adopting provocative measures helps organizations minimize the challenges and risks to which generative AI exposes them.

The Future of Generative AI

The misuse of AI has affected businesses, consumers, and even the government. To combat the dangers of this technology in the future, the White House has taken the initiative and announced new investments in AI research and promoted responsible AI innovation that protects the rights and safety of American citizens. However, more innovative actions are needed to ensure that generative AI technology is used ethically and responsibly. 

Organizations looking to defend against emerging AI-related threats can adopt the Zero-Trust Network Access (ZTNA) approach now and in the future. This security framework eliminates the trust from the network perimeter and continuously monitors, authenticates, and authorizes each user and device within the network. This ensures that no unauthorized user or system can access the generative AI models.

As AI-based threats become more sophisticated, adopting AI-driven security solutions will become essential in detecting and preventing the threats. For example, the HackerOne Bounty program offers continuous adversarial testing and detects security loopholes within the attack surface, including those arising from poor implementation of generative AI. 

Similarly, a cybersecurity startup also announced the launch of an AI-powered search platform. It aims to help security teams improve their mean time to detection (MTTD) and mean time to response (MTTR) by 80% and offer correlation and risk validations for potential vulnerabilities. Overall, generative AI tools will continue to evolve and help shape a more technologically developed future. 

Final Words

Generative AI is trang significant risks and dangers in the hands of hackers. They can misuse this technology to craft phishing emails, spread propaganda and malware, and produce biased outputs.

As organizations increasingly rely on this technology, they must adopt cybersecurity measures to mitigate the risks and leverage the benefits they offer.

Farwa Sajjad is a Cyber Security Journalist & Product Marketing Writer               Image: Steve Johnson

You Might Also Read:

Numerous Organisations Are Banning ChatGPT:

___________________________________________________________________________________________

If you like this website and use the comprehensive 6,500-plus service supplier Directory, you can get unrestricted access, including the exclusive in-depth Directors Report series, by signing up for a Premium Subscription.

  • Individual £5 per month or £50 per year. Sign Up
  • Multi-User, Corporate & Library Accounts Available on Request

Cyber Security Intelligence: Captured Organised & Accessible


 

 

« Australian Government Suffers A Widespread Ransom Attack
Jargon Buster: Untangling The Complexity In Cybersecurity  »

CyberSecurity Jobsite
Perimeter 81

Directory of Suppliers

ZenGRC

ZenGRC

ZenGRC - the first, easy-to-use, enterprise-grade information security solution for compliance and risk management - offers businesses efficient control tracking, testing, and enforcement.

Clayden Law

Clayden Law

Clayden Law advise global businesses that buy and sell technology products and services. We are experts in information technology, data privacy and cybersecurity law.

The PC Support Group

The PC Support Group

A partnership with The PC Support Group delivers improved productivity, reduced costs and protects your business through exceptional IT, telecoms and cybersecurity services.

MIRACL

MIRACL

MIRACL provides the world’s only single step Multi-Factor Authentication (MFA) which can replace passwords on 100% of mobiles, desktops or even Smart TVs.

North Infosec Testing (North IT)

North Infosec Testing (North IT)

North IT (North Infosec Testing) are an award-winning provider of web, software, and application penetration testing.

National Crime Agency (NCA)

National Crime Agency (NCA)

The NCA's Cyber Crime Unit focuses on critical cyber incidents in the UK as well as longer-term activity against the criminals and the services on which they depend.

Athena Dynamics

Athena Dynamics

Athena Dynamics focuses on Cyber Security, especially in Critical Information Infra-structure Protection and Enterprise IT Operation Management products and Services.

Exonar

Exonar

We enable organisations to better organise their information, removing risk and making it more productive and secure.

Carson & SAINT

Carson & SAINT

Carson & SAINT is an award-winning consulting firm with deep experience in cybersecurity technology, software, and management consulting.

BankVault

BankVault

BankVault is a new type of cyber technology (called remote isolation) which sidesteps your local machine and any possible malware.

Blancco Technology Group

Blancco Technology Group

Blancco Technology Group is a leading global provider of mobile device diagnostics and secure data erasure solutions.

Keyavi Data

Keyavi Data

With Keyavi’s evolutionary data protection technology, your data stays within the bounds of your control in perpetuity.

Avertium

Avertium

Avertium is the managed security and consulting provider that companies turn to when they want more than check-the-box cybersecurity.

ARIA Cybersecurity Solutions

ARIA Cybersecurity Solutions

The ARIA ADR Automatic Detection & Response solution was designed to find, verify, and stop all types of attacks - automatically and in real time.

US Marine Corps Forces Cyberspace Command (MARFORCYBER)

US Marine Corps Forces Cyberspace Command (MARFORCYBER)

US Marine Corps Forces Cyberspace Command (MARFORCYBER) conducts full spectrum military cyberspace operations in order to enable freedom of action in cyberspace and deny the same to the adversary.

BaaSid

BaaSid

BaaSid is next generation security technology for data security & security authentication based on De-centralized & Blockchain.

SafePaas

SafePaas

SafePaas is a leading Enterprise Risk Management Platform. One source of truth for all your Audit, Risk, and Compliance requirements. Complete governance across your systems.

Keytos

Keytos

Keytos has revolutionized the Identity Management and PKI industry by creating cryptographic tools that allow you to go password-less by making security transparent to the user.

Benchmark IT Services (BITS)

Benchmark IT Services (BITS)

BITS is a leading cyber security company in Australia. Our certified professionals work with you to keep your data assets safe and secure.

Leostream

Leostream

Leostream's Remote Desktop Access Platform enables seamless work-from-anywhere flexibility while maintaining security and constant visibility of users.

Zluri

Zluri

Zluri is a cloud-native SaaSOps platform enabling modern enterprises with SaaS Management and Identity Governance.