Securing Smart Cities Using AI
Billions of Internet-connected devices and the introduction of 5G are transforming the way cities and municipalities deliver services to their citizens. However, this rapid digitisation has exposed many crucial public services, previously isolated from the Internet, to attack.
Increasingly backed by nation-states, criminal rings are leveraging the latest developments in machine learning to augment their attacks, increasing efficiency while reducing their chances of detection at every stage of an attack.
Security teams now face an unprecedented challenge in defending these critical environments. They can no longer rely on outdated defense methods, and Artificial Intelligence and Machine Learning (AI and ML) are increasingly being put forward as the answer.
Much of the current discussion about cybersecurity's relationship with artificial intelligence and machine learning (AI/ML) revolves around how AI and ML can improve security product functionality. However, that is only one dimension of a much broader collision between cybersecurity and AI.
As applied use of AI/ML starts to advance and spread throughout a plethora of business and technology use cases, security experts are going to need to help their colleagues in the business start to address new risks, new threat models, new domains of expertise, and, yes, sometimes new security solutions.
Heading into 2020, business and technology analysts expect to see solid applications of AI and ML accelerate. This means that CISOs and security professionals will need to quickly get up to speed on AI-driven enterprise risks. Here are some thoughts from security veterans on what to expect from AI and cybersecurity in 2020.
While some IT leaders are clearly skeptical of AI’s role in cybersecurity, others are more bullish. There are some proven benefits to adding AI into IT security environments that some agencies are starting to take advantage of.
Last year's US Presidential Executive Order on artificial intelligence gave US government agencies a mandate to invest in technologies such as machine learning, computer vision, and robotic process automation. Civilian spending on AI technologies grew by over 22% in fiscal 2019, while Pentagon investment grew by almost 70%; Pentagon spending on AI contracts has more than tripled since fiscal 2017. Expect more growth government-wide in fiscal 2020.
AI/ML Data Poisoning and Sabotage
The security industry will need to keep tabs on emerging cases of attackers seeking to poison AI/ML training data in business applications to disrupt decision-making and other operations. Imagine, for example, what would happen to companies depending on AI to automate supply chain decisions. A sabotaged data set could result in drastic under- or oversupply of product.
"Expect to see attempts to poison the algorithm with specious data samples specifically designed to throw off the learning process of a machine learning algorithm," said Haiyan Song, senior vice president and general manager of security markets for Splunk talking to Dark Reading. "It's not just about duping smart technology, but making it so that the algorithm appears to work fine - while producing the wrong results."
Deepfake Audio Takes BEC Attacks into a New Arena
Business email compromise (BEC) has cost organizations billions of dollars as attackers pose as CEOs and other senior-level managers to trick people in charge of bank accounts into making fraudulent transfers in the guise of closing a deal or otherwise getting business done.
Now attackers are taking BEC attacks to a new arena with the use of AI technology: the telephone. 2019 saw one of the first reports of an incident in which an attacker used deepfake audio to impersonate a company CEO over the phone, tricking an employee at a British energy firm into wiring $240,000 to a fraudulent bank account.
Experts believe we will see increased use of AI-powered deepfake audio of CEOs to carry out BEC-style attacks in 2020.
AI-Powered Malware Evasion
Deepfakes are going to be just one way that the bad guys will leverage AI to perpetrate attacks. Security researchers are on tenterhooks waiting to discover AI-powered malware evasion techniques. Some believe 2020 will be the year they discover the first malware using AI models to evade sandboxes.
Differential Privacy Gains Steam to Protect Analytics Data
The combination of big data, AI, and strict privacy regulations is going to cause enterprise headaches until security and privacy professionals start innovating better ways to shield the kind of customer analytics that fuel a lot of AI applications today. The good news is that other forms of AI can be used to accomplish this.
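For readers unfamiliar with the technique named in the heading, below is a minimal illustrative Python sketch of the Laplace mechanism, the textbook building block of differential privacy. The dataset, query, and epsilon value are assumptions chosen purely for demonstration; a real deployment would use a vetted differential privacy library rather than this sketch.

```python
# Illustrative sketch: a differentially private count query via the Laplace mechanism.
import numpy as np

def laplace_count(data, predicate, epsilon=0.5):
    """Return a noisy count of records matching `predicate`.

    A counting query has sensitivity 1 (adding or removing one person changes
    the count by at most 1), so Laplace noise with scale 1/epsilon suffices.
    Smaller epsilon means more noise and stronger privacy.
    """
    true_count = sum(1 for record in data if predicate(record))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Hypothetical example: how many customers in the analytics set are over 65?
ages = [34, 71, 29, 68, 55, 80, 42]
print(laplace_count(ages, lambda age: age > 65, epsilon=0.5))
```

The appeal for customer analytics is that aggregate answers stay useful while the noise makes it hard to infer whether any single individual's record is in the data set.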
Hard Lessons About AI Ethics and Fairness
There are some hard lessons ahead with AI ethics, fairness, and consequences. These issues are relevant to security leaders who are tasked with maintaining the integrity and availability of systems that rely on AI to operate.
Sources: FedTech Magazine / Dark Reading / Bank Info Security / Bloomberg