ChatGPT's Image Generation Could Be Driving Retail Fraud
The retail industry is already grappling with fraud on a massive scale - a challenge that is only set to grow as generative AI tools reach new levels of sophistication.
The recent launch of ChatGPT's enhanced image generation capabilities marks yet another significant milestone in AI development - but also one that carries some serious implications for retail security and fraud prevention.
Generative AI is, of course, transforming retail in positive ways - improving creative workflows, content production, and business operations (as it is across all industries). Beneath the excitement surrounding these advancements, however, lie some serious concerns: the same technology available to consumers is also making fraudsters faster and more cunning.
The democratisation of sophisticated image generation technology, such as ChatGPT’s Image Generation, is a prime example. It has the potential to create the perfect conditions for a new wave of retail fraud - one that could render traditional verification processes no longer fit for purpose.
How Can GenAI Tools Be Used To Enable Fraud?
The latest AI image generators can produce photorealistic imagery from simple text prompts with incredible accuracy. They can reproduce documents with precisely matching formatting, official logos, accurate timestamps, and even realistic barcodes or QR codes.
In the hands of fraudsters, these tools can be used to commit ‘return fraud’ by creating convincing fake receipts and proof-of-purchase documentation. What makes this even more concerning is that, unlike previous forgeries, which often contained telltale signs and human errors, AI-generated fakes can be virtually indistinguishable from the genuine article.
The Potential Impact Reaches Far & Wide
The concerns with this new technology extend far beyond fraudulent returns. Fake proof-of-purchase documentation can be used to claim warranty service for products that are out of warranty or were purchased through unauthorised channels. Fraudsters could also generate fake receipts showing purchases at higher values than was actually paid, then request refunds to gift cards for the inflated amount. Internal threats exist too: employees can create fake expense receipts for reimbursement.
This is particularly damaging for businesses with less sophisticated verification processes in place. Perhaps most concerning of all, these tools can enable scammers to generate convincing payment confirmations or shipping notices as part of larger social engineering attacks. The financial impact is substantial - industry estimates already place return fraud costs in the billions annually, and this could increase significantly as GenAI tools become more accessible and sophisticated.
What About The Damage To Legitimate Customers?
It's not just the direct financial impact on retailers that is at stake, but also the ‘seamless customer experience’ consumers have come to expect. As retailers implement more complex and lengthy verification processes to mitigate sophisticated fraud, honest customers could face greater friction when returning or exchanging items.
This creates a difficult dilemma for retailers. The National Retail Federation reports that 70% of consumers say a positive return experience encourages them to continue shopping with a retailer. Yet stricter return verification processes introduced to combat the risks posed by genAI will not only frustrate these valuable customers, they could also erode brand loyalty.
Hitting Back At GenAI With AI
While ChatGPT’s image generator is the latest advancement getting attention, it’s not the first or last genAI tool with these capabilities. So, how can retailers fight back?
The solution to these challenges doesn't lie in reverting to manual processes or creating higher-friction customer experiences. Retailers must instead fight AI-powered fraud with AI. By examining the full customer journey rather than just the return transaction, retailers can begin to identify suspicious patterns without creating friction for legitimate customers.
Advanced AI can detect subtle inconsistencies that manual reviews and static rules simply cannot spot.
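As a rough illustration of what journey-level analysis might look like, the sketch below scores a new return request against historical behaviour using an off-the-shelf anomaly detector. The feature names, example values and choice of model are illustrative assumptions only, not a description of any vendor's actual system.

```python
# Minimal sketch: flag anomalous return requests from journey-level features.
# Assumptions: the features, example data and IsolationForest model are
# illustrative only - not any retailer's or vendor's production approach.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row = one historical return request:
# [account_age_days, returns_last_90_days, hours_from_delivery_to_return, receipt_resubmissions]
historical_returns = np.array([
    [420, 1,  72, 0],
    [800, 2, 120, 0],
    [150, 0,  48, 1],
    [365, 3,  96, 0],
    [600, 1,  24, 0],
])

# Fit on (mostly legitimate) historical behaviour.
model = IsolationForest(contamination=0.05, random_state=42)
model.fit(historical_returns)

# Score a new request: brand-new account, many recent returns,
# a near-instant return, and several re-uploaded "receipts".
new_request = np.array([[2, 6, 1, 4]])
risk_score = -model.decision_function(new_request)[0]  # higher = more anomalous

if model.predict(new_request)[0] == -1:
    print(f"Route to manual review (anomaly score {risk_score:.2f})")
else:
    print(f"Auto-approve return (anomaly score {risk_score:.2f})")
```

In practice, a behavioural signal like this would sit alongside document-level checks and human review, so that only the highest-risk requests ever see added friction.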
The relationship between retailers and customers has always been built on trust, and today's genAI challenge is no different. The successful retailers will be those that strike the right balance, viewing fraud prevention not as a cost centre but as an essential component of the customer experience.
Doriel Abrahams is Principal Technologist at Forter
Image: Ideogram