What Are The Key Trends That Will Shape Tech In 2025?
2024 was a whirlwind year. Businesses and public sector organisations alike were left reeling in the aftermath of cyberattacks which resulted in data compromises and costly ransoms. On top of this, leaders scrambled to capitalise on the AI ‘gold rush’, taking a sometimes-haphazard approach to AI adoption to stay ahead of the curve.
We also saw threat actors start to weaponise AI, as increasingly sophisticated deepfakes made it trickier for organisations to distinguish between real and malicious entities.
2025 will add yet another layer of complexity. Governments across the globe are set to crack down on compliance, and regulators will keep an eagle eye on the data that feeds AI models, putting up guardrails to limit its access to information. At the same time, leaders will start to define their AI offerings and ensure AI is being used to solve specific problems, driving meaningful value for businesses.
With the world of tech undoubtedly set to shift again next year, executives at NinjaOne identify the key trends that will shape the industry in 2025 and beyond.
1. Weaponised AI will be the biggest security concern in 2025 – and IT teams will be hit hardest.
The biggest security threat we're seeing is the continual evolution of AI. It is getting very good at generating content and fabricating imagery (i.e. deepfakes), and as AI improves at data attribution, it will become even harder for organisations to distinguish between genuine and malicious personas.
Because of this, AI-based attacks will increasingly target individuals in 2025. IT teams will be hit hardest, given the credentials they hold and the sensitive information they can access.
Most AI-based attacks will target individuals to solicit access and money, and IT organisations need to ensure they're prepared, educating staff and shoring up defenses accordingly. The best way to rein in AI risks is with more employee training. People have to know what to be on the lookout for, especially as AI technology evolves. In general, you can't do enough cyber awareness training. The threat is very real – even beyond AI, there are a ton of ways to compromise an individual system or piece of information, and I think the more we can educate people, rather than try to curtail the technology, the better. - Mike Arrowsmith, Chief Trust Officer, NinjaOne
2. Government entities will double down on compliance.
As AI adoption and privacy concerns rise, 2025 will bring more stringent data protection and compliance requirements from around the world. In the EU, NIS2 is now law, meaning there is a whole new set of cybersecurity and privacy requirements with which all entities doing business in healthcare, financial services, manufacturing, and other covered sectors must comply. And as AI regulation becomes a bigger part of the conversation, the more that organisations can secure, track, and report on where and how they're storing data now, the better positioned they'll be to comply with all the above, especially as new regulation and more stringent enforcement ensues. - Mike Arrowsmith, Chief Trust Officer, NinjaOne
3. CIOs will be held accountable when AI failings occur.
In 2025, as AI innovation and exploration continue, it will be the senior-most IT leader (often a CIO) who is held responsible for any AI shortcomings inside their organisation. New AI companies are appearing to explore a variety of complex and potentially groundbreaking use cases, but some are operating with little structure and only loosely defined privacy and security policies. While this enables organisations to innovate and grow faster, it also exposes them to added confidentiality and data security risks.
Ultimately, there needs to be a single leader on the hook when AI fails the business.
To mitigate potential AI risks, CIOs or IT leaders must work closely on internal AI implementations or trials to understand their impact before any failings or misuse can occur. - Joel Carusone, Senior Vice President of Data and AI, NinjaOne
4. AI will start to find its identity.
In 2024, we saw a shotgun approach to AI. Organisations threw a lot at the wall, hoping to find and monetise whatever stuck, sometimes even at the expense of customers. For example, we saw the emergence of things like autonomous script generation – giving AI carte blanche to write and execute scripts on endpoint devices. But handing AI the keys to the entire kingdom with little to no human oversight sets a dangerous precedent. In 2025, people will double down on practical use cases for AI – use cases that actually add value without compromising security, via capabilities like automated threat detection, patching support, and more.
Plus, next year, we’ll see regulators really start sharpening the pencil on where the data is going and how it’s being used, as more AI governance firms up around specific use cases and protection of data. - Sal Sferlazza, CEO and Co-Founder, NinjaOne
Image: Shubham Dhage
Cyber Security Intelligence: Captured Organised & Accessible