Exploring How Generative AI Is Contributing To Cybersecurity Threats & Risks

Generative AI applications like ChatGPT, Brad, Stable, Diffusion, and Bing have captured unprecedented attention and popularity from all across the globe. These innovative tools can perform routine tasks like data classification, improving customer service, and providing real-time data analysis. But, they make headlines due to their ability to write text, create images, and other digital art forms.

Generative AI technology is growing and progressing rapidly without any chance of slowing down soon. ChatGPT is the fastest-growing AI tool released in November 2022, and four months later, OpenAI released a much improved and larger language model (LLM) popularly known as ChatGPT-4. The latest data reveals that ChatGPT has over 100 million users, while the platform crossed 1 million in just five days of its launch. This popularity is also extended to other platforms; in May 2023, Google announced new features for generative AI like Search Generative Experience and another LLM called PaLM-2.

 Amidst the growing popularity, it's also increasing the threat landscape as organizations see many threats and risks that come into fulfillment. Therefore, it's crucial to understand the AI-based risks and develop effective measures to mitigate the potential damages. 

The Darker Implications of Generative AI

Despite all the advances, AI technology has enhanced the capabilities of cybercriminals and enables them to launch more sophisticated and targeted attacks. Bad actors significantly misuse ChatGPT to improve phishing attacks or create polymorphic malware. The famous Blackberry Software company has shared examples of compromised business messages and phishing hooks that ChatGPT can craft. However, OpenAI has effectively put in measures to prevent it from responding to such requests.

The discovery of a fake ChatGPT Chrome browser extension that hijacked Facebook accounts and created rogue admin accounts is another example of how hackers can exploit ChatGPT to spread malware and misinformation. In addition, various generative AI tools, including ChatGPT, are vulnerable to prompt injection attacks. In this technique, attackers craft prompts that trick the model into performing tasks like writing malicious codes, providing inaccurate outputs, or creating phishing websites. 

Another significant issue around generative AI is that any information entered into ChatGPT becomes part of its training dataset. Though it sounds helpful, it isn't because if cybercriminals get hold of such data, they misuse it to fulfill their malicious intentions. For example, Samsung engineers entered the company's top-secret data, like proprietary code and internal meeting notes relating to their hardware, resulting in the leaking of sensitive data.  

Moreover, generative AI tools are much more affordable, and therefore, malicious actors can easily use them to spread misinformation and propaganda. The AI tool's impressive search capabilities help bad actors create convincing content that they can use to manipulate public opinion or cause significant harm. One may not forget the incident when the OpenAI tool ChatGPT defamed a mayor in Australia as being jailed for bribery when he was the whistleblower in a case.  

Defending Against Generative-AI Threats

Organizations need a proactive approach to defend against generative AI threats. This includes practicing robust security measures such as using unique passwords, beware of phishing and unsolicited communications, updating the software, and enabling phishing-resistant MFA instead of only MFA. Besides this, other safety measures that prove helpful to businesses are as follows:

  • Security Assessments and Audits: Regular security assessments and audits help identify potential vulnerabilities and security gaps within the generative AI models. Doing so allows organizations to address the issues before the bad actors exploit them.
  • Data Privacy and Protection: Prioritize data privacy and protection by implementing access controls and strong encryption via a VPN to protect sensitive information that the employees entered in generative AI models.
  • Model Validation: Enterprises must implement adversarial training techniques to recognize and eliminate the ethical concerns around generative AI models.
  • Ethical Considerations: Organizations must be mindful of the ethical implications of generative AI. They should establish clear guidelines regarding the ethical use of AI technology and keep a check on employees to ensure the guidelines are followed.
  • Cyber Awareness and Education: Cybersecurity is a shared responsibility of all individuals across the organization, and therefore, they need to educate the employees about the potential dangers of generative AI. Also, they must teach them to detect and respond to threats to mitigate risks.

Staying updated about the latest developments in generative AI technology and adopting provocative measures helps organizations minimize the challenges and risks to which generative AI exposes them.

The Future of Generative AI

The misuse of AI has affected businesses, consumers, and even the government. To combat the dangers of this technology in the future, the White House has taken the initiative and announced new investments in AI research and promoted responsible AI innovation that protects the rights and safety of American citizens. However, more innovative actions are needed to ensure that generative AI technology is used ethically and responsibly. 

Organizations looking to defend against emerging AI-related threats can adopt the Zero-Trust Network Access (ZTNA) approach now and in the future. This security framework eliminates the trust from the network perimeter and continuously monitors, authenticates, and authorizes each user and device within the network. This ensures that no unauthorized user or system can access the generative AI models.

As AI-based threats become more sophisticated, adopting AI-driven security solutions will become essential in detecting and preventing the threats. For example, the HackerOne Bounty program offers continuous adversarial testing and detects security loopholes within the attack surface, including those arising from poor implementation of generative AI. 

Similarly, a cybersecurity startup also announced the launch of an AI-powered search platform. It aims to help security teams improve their mean time to detection (MTTD) and mean time to response (MTTR) by 80% and offer correlation and risk validations for potential vulnerabilities. Overall, generative AI tools will continue to evolve and help shape a more technologically developed future. 

Final Words

Generative AI is trang significant risks and dangers in the hands of hackers. They can misuse this technology to craft phishing emails, spread propaganda and malware, and produce biased outputs.

As organizations increasingly rely on this technology, they must adopt cybersecurity measures to mitigate the risks and leverage the benefits they offer.

Farwa Sajjad is a Cyber Security Journalist & Product Marketing Writer               Image: Steve Johnson

You Might Also Read:

Numerous Organisations Are Banning ChatGPT:

___________________________________________________________________________________________

If you like this website and use the comprehensive 6,500-plus service supplier Directory, you can get unrestricted access, including the exclusive in-depth Directors Report series, by signing up for a Premium Subscription.

  • Individual £5 per month or £50 per year. Sign Up
  • Multi-User, Corporate & Library Accounts Available on Request

Cyber Security Intelligence: Captured Organised & Accessible


 

 

« Australian Government Suffers A Widespread Ransom Attack
Jargon Buster: Untangling The Complexity In Cybersecurity  »

Infosecurity Europe
CyberSecurity Jobsite
Perimeter 81

Directory of Suppliers

CSI Consulting Services

CSI Consulting Services

Get Advice From The Experts: * Training * Penetration Testing * Data Governance * GDPR Compliance. Connecting you to the best in the business.

The PC Support Group

The PC Support Group

A partnership with The PC Support Group delivers improved productivity, reduced costs and protects your business through exceptional IT, telecoms and cybersecurity services.

DigitalStakeout

DigitalStakeout

DigitalStakeout enables cyber security professionals to reduce cyber risk to their organization with proactive security solutions, providing immediate improvement in security posture and ROI.

BackupVault

BackupVault

BackupVault is a leading provider of automatic cloud backup and critical data protection against ransomware, insider attacks and hackers for businesses and organisations worldwide.

XYPRO Technology

XYPRO Technology

XYPRO is the market leader in HPE Non-Stop Security, Risk Management and Compliance.

Intruder

Intruder

Intruder is a cloud-based vulnerability scanner that finds cyber security weaknesses in your digital infrastructure, to avoid costly data breaches.

Surrey Centre for Cyber Security (SCCS)

Surrey Centre for Cyber Security (SCCS)

The Centre focuses on three main research directions - Privacy and Data Protection, Secure Communications, and Human-Centred Security.

Security Audit Systems

Security Audit Systems

Security Audit Systems is a website security specialist providing website security audits and managed web security services.

Guy Carpenter

Guy Carpenter

Guy Carpenter delivers a powerful combination of broking expertise, strategic advisory services, and industry-leading analytics.

Mondo

Mondo

Mondo is the largest national staffing agency specializing exclusively in high-end, niche IT, Tech, and Digital Marketing talent. Areas of expertise include Cybersecurity.

Security & Intelligence Agency (SOA) - Croatia

Security & Intelligence Agency (SOA) - Croatia

SOA is the Croatian security and intelligence service. Areas of activity include Cyber Security and Information Security.

Dell Technologies

Dell Technologies

Dell Technologies Consulting Services enables a highly resilient business amidst the proliferation of cloud-based IT services and constant threats to your most critical information.

Applied Magnetics Laboratory (AML)

Applied Magnetics Laboratory (AML)

Applied Magnetics Laboratory is a manufacturer of military security and data destruction equipment for sensitive, classified, and secret information.

British Blockchain Association (BBA)

British Blockchain Association (BBA)

British Blockchain Association (BBA) is a not-for-profit organisation that promotes evidence-based adoption of Blockchain and Distributed Ledger Technologies (DLT) across the public and private sector

Tech Nation

Tech Nation

Tech Nation is the UK’s first national scaleup programme for the cyber security sector, aimed at ambitious tech companies ready for growth, at home and abroad.

Flix11

Flix11

Flix11 is a Cyber Security & ICT Solutions focused company. We provide a range of products and services in Cyber Security, Internet of Things (IoT) and infrastructure solutions.

IT-Seal

IT-Seal

IT-Seal GmbH specializes in sustainable security culture and awareness training.

Extreme Networks

Extreme Networks

Since 1996, Extreme has been pushing the boundaries of networking technology, driven by a vision of making it simpler and faster as well as more agile and secure.

Sacumen

Sacumen

Sacumen is a niche player in the cybersecurity market, solving critical problems for security product companies.

Opkalla

Opkalla

We started Opkalla because we believe IT professionals deserve better. We help our clients navigate the confusion in the marketplace and choose the solution that is right for your business.

TriVigil

TriVigil

TriVigil offer a full-service, comprehensive cybersecurity approach specifically tailored to meet the unique needs of educational institutions.