Exploring How Generative AI Is Contributing To Cybersecurity Threats & Risks

Generative AI applications like ChatGPT, Brad, Stable, Diffusion, and Bing have captured unprecedented attention and popularity from all across the globe. These innovative tools can perform routine tasks like data classification, improving customer service, and providing real-time data analysis. But, they make headlines due to their ability to write text, create images, and other digital art forms.

Generative AI technology is growing and progressing rapidly without any chance of slowing down soon. ChatGPT is the fastest-growing AI tool released in November 2022, and four months later, OpenAI released a much improved and larger language model (LLM) popularly known as ChatGPT-4. The latest data reveals that ChatGPT has over 100 million users, while the platform crossed 1 million in just five days of its launch. This popularity is also extended to other platforms; in May 2023, Google announced new features for generative AI like Search Generative Experience and another LLM called PaLM-2.

 Amidst the growing popularity, it's also increasing the threat landscape as organizations see many threats and risks that come into fulfillment. Therefore, it's crucial to understand the AI-based risks and develop effective measures to mitigate the potential damages. 

The Darker Implications of Generative AI

Despite all the advances, AI technology has enhanced the capabilities of cybercriminals and enables them to launch more sophisticated and targeted attacks. Bad actors significantly misuse ChatGPT to improve phishing attacks or create polymorphic malware. The famous Blackberry Software company has shared examples of compromised business messages and phishing hooks that ChatGPT can craft. However, OpenAI has effectively put in measures to prevent it from responding to such requests.

The discovery of a fake ChatGPT Chrome browser extension that hijacked Facebook accounts and created rogue admin accounts is another example of how hackers can exploit ChatGPT to spread malware and misinformation. In addition, various generative AI tools, including ChatGPT, are vulnerable to prompt injection attacks. In this technique, attackers craft prompts that trick the model into performing tasks like writing malicious codes, providing inaccurate outputs, or creating phishing websites. 

Another significant issue around generative AI is that any information entered into ChatGPT becomes part of its training dataset. Though it sounds helpful, it isn't because if cybercriminals get hold of such data, they misuse it to fulfill their malicious intentions. For example, Samsung engineers entered the company's top-secret data, like proprietary code and internal meeting notes relating to their hardware, resulting in the leaking of sensitive data.  

Moreover, generative AI tools are much more affordable, and therefore, malicious actors can easily use them to spread misinformation and propaganda. The AI tool's impressive search capabilities help bad actors create convincing content that they can use to manipulate public opinion or cause significant harm. One may not forget the incident when the OpenAI tool ChatGPT defamed a mayor in Australia as being jailed for bribery when he was the whistleblower in a case.  

Defending Against Generative-AI Threats

Organizations need a proactive approach to defend against generative AI threats. This includes practicing robust security measures such as using unique passwords, beware of phishing and unsolicited communications, updating the software, and enabling phishing-resistant MFA instead of only MFA. Besides this, other safety measures that prove helpful to businesses are as follows:

  • Security Assessments and Audits: Regular security assessments and audits help identify potential vulnerabilities and security gaps within the generative AI models. Doing so allows organizations to address the issues before the bad actors exploit them.
  • Data Privacy and Protection: Prioritize data privacy and protection by implementing access controls and strong encryption via a VPN to protect sensitive information that the employees entered in generative AI models.
  • Model Validation: Enterprises must implement adversarial training techniques to recognize and eliminate the ethical concerns around generative AI models.
  • Ethical Considerations: Organizations must be mindful of the ethical implications of generative AI. They should establish clear guidelines regarding the ethical use of AI technology and keep a check on employees to ensure the guidelines are followed.
  • Cyber Awareness and Education: Cybersecurity is a shared responsibility of all individuals across the organization, and therefore, they need to educate the employees about the potential dangers of generative AI. Also, they must teach them to detect and respond to threats to mitigate risks.

Staying updated about the latest developments in generative AI technology and adopting provocative measures helps organizations minimize the challenges and risks to which generative AI exposes them.

The Future of Generative AI

The misuse of AI has affected businesses, consumers, and even the government. To combat the dangers of this technology in the future, the White House has taken the initiative and announced new investments in AI research and promoted responsible AI innovation that protects the rights and safety of American citizens. However, more innovative actions are needed to ensure that generative AI technology is used ethically and responsibly. 

Organizations looking to defend against emerging AI-related threats can adopt the Zero-Trust Network Access (ZTNA) approach now and in the future. This security framework eliminates the trust from the network perimeter and continuously monitors, authenticates, and authorizes each user and device within the network. This ensures that no unauthorized user or system can access the generative AI models.

As AI-based threats become more sophisticated, adopting AI-driven security solutions will become essential in detecting and preventing the threats. For example, the HackerOne Bounty program offers continuous adversarial testing and detects security loopholes within the attack surface, including those arising from poor implementation of generative AI. 

Similarly, a cybersecurity startup also announced the launch of an AI-powered search platform. It aims to help security teams improve their mean time to detection (MTTD) and mean time to response (MTTR) by 80% and offer correlation and risk validations for potential vulnerabilities. Overall, generative AI tools will continue to evolve and help shape a more technologically developed future. 

Final Words

Generative AI is trang significant risks and dangers in the hands of hackers. They can misuse this technology to craft phishing emails, spread propaganda and malware, and produce biased outputs.

As organizations increasingly rely on this technology, they must adopt cybersecurity measures to mitigate the risks and leverage the benefits they offer.

Farwa Sajjad is a Cyber Security Journalist & Product Marketing Writer               Image: Steve Johnson

You Might Also Read:

Numerous Organisations Are Banning ChatGPT:

___________________________________________________________________________________________

If you like this website and use the comprehensive 6,500-plus service supplier Directory, you can get unrestricted access, including the exclusive in-depth Directors Report series, by signing up for a Premium Subscription.

  • Individual £5 per month or £50 per year. Sign Up
  • Multi-User, Corporate & Library Accounts Available on Request

Cyber Security Intelligence: Captured Organised & Accessible


 

 

« Australian Government Suffers A Widespread Ransom Attack
Jargon Buster: Untangling The Complexity In Cybersecurity  »

CyberSecurity Jobsite
Perimeter 81

Directory of Suppliers

ZenGRC

ZenGRC

ZenGRC - the first, easy-to-use, enterprise-grade information security solution for compliance and risk management - offers businesses efficient control tracking, testing, and enforcement.

ON-DEMAND WEBINAR: What Is A Next-Generation Firewall (and why does it matter)?

ON-DEMAND WEBINAR: What Is A Next-Generation Firewall (and why does it matter)?

Watch this webinar to hear security experts from Amazon Web Services (AWS) and SANS break down the myths and realities of what an NGFW is, how to use one, and what it can do for your security posture.

Authentic8

Authentic8

Authentic8 transforms how organizations secure and control the use of the web with Silo, its patented cloud browser.

Perimeter 81 / How to Select the Right ZTNA Solution

Perimeter 81 / How to Select the Right ZTNA Solution

Gartner insights into How to Select the Right ZTNA offering. Download this FREE report for a limited time only.

Syxsense

Syxsense

Syxsense brings together endpoint management and security for greater efficiency and collaboration between IT management and security teams.

BruCON

BruCON

Brucon is Belgiums premium security and hacking conference.

Mellanox Technologies

Mellanox Technologies

Mellanox Technologies is a leading supplier of end-to-end Ethernet and InfiniBand intelligent interconnect solutions and services for servers, storage, and hyper-converged infrastructure.

Boxcryptor

Boxcryptor

Boxcryptor encrypts your sensitive files before uploading them to cloud storage services.

Flexera

Flexera

Flexera is reimagining the way software is bought, sold, managed and secured.

Templar Executives

Templar Executives

Templar Executives is a leading, expert and dynamic Cyber Security company trusted by Governments and multi-national organisations to deliver business transformation.

Innovative Solutions (IS)

Innovative Solutions (IS)

Innovative Solutions is a specialized professional services company delivering Information Security products and solutions for Saudi Arabia and the Gulf region.

Forum of Incident Response & Security Teams (FIRST)

Forum of Incident Response & Security Teams (FIRST)

FIRST is the global Forum of Incident Response and Security Teams.

Cybersecurity Manufacturing Innovation Institute (CyManII)

Cybersecurity Manufacturing Innovation Institute (CyManII)

CyManII was established to create economically viable, pervasive, and inconspicuous cybersecurity in American manufacturing to secure the digital supply chain and energy automation.

Aura

Aura

Aura is a mission driven technology company dedicated to creating a safer internet for everyone. We’re making comprehensive digital security that's simple to understand and easy to use.

Lavabit

Lavabit

Lavabit's Dark Internet Mail Environment is a secure, open-source, secure end-to-end communications platform for asynchronous messaging across the internet.

SYN Ventures

SYN Ventures

SYN Ventures invests in disruptive, transformational solutions that reduce technology risk.

Triangle

Triangle

Triangle enable innovative business transformation by ensuring critical hybrid infrastructures are optimised, interoperable and secure.

Safe Decision

Safe Decision

Safe Decision is an information technology company offering Cyber Security, Network, and Infrastructure Services and Solutions.

Labaton Sucharow

Labaton Sucharow

Standing on the horizon of law and technology, our Cybersecurity and Data Privacy Practice helps to protect consumers who have been harmed by businesses’ failures to safeguard their customers' data.

PeoplActive

PeoplActive

PeoplActive is an IT consulting and recruitment services organization with leading capabilities in digital, cloud and security.

Cybecs Security Solutions

Cybecs Security Solutions

Cybecs was founded to address rapid technological advancement, changing business models, global privacy regulations, and increasing cyber threats for global organizations.