How Will Cybercriminals Take Advantage of AI in the Future?
It’s difficult to predict what will happen in a decade. Looking back to 2009, when the iPhone 3GS dominated the (very new) smartphone market and Google Maps forever changed the way we navigate, we would have been hard-pressed to envisage the future we now find ourselves in.
On one hand, we can try to learn from these previous technologies in a bid to make an educated guess as to what’s around the corner. After all, the smart home assistants and rapid streaming services we’re now accustomed to have long been predicted by those clever enough to extrapolate from the past and imagine what that means for the future.
On the other hand, it’s anyone’s guess. Historically, it was widely believed that computers would only get bigger over time, because that was the early trend; some will remember when a single computer filled an entire room. Countless engineers, technicians and analysts predicted the machines would keep growing, a forecast we now know to be totally wrong. Instead, computers have miniaturised and multiplied: your laptop, your television and even your toaster can each house several of them.
Hello Google
Just as miniature computers have long been sprinkled throughout our homes and workplaces, the past 18 months have seen a similar adoption of AI. From Alexa and Google Home to Nest and other smart devices, you’d struggle to find a home that hasn’t incorporated some form of AI. Beyond our devices, AI recommendation engines are enabling highly targeted (and creepily precise) advertisements across the web and social media.
Machine learning and other advances are also making AI ever more capable. It can now flag anomalies, classify the data it gathers and predict, for example, whether a user is about to quit a service.
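To make the churn example concrete, here is a minimal sketch of the kind of classifier such a service might run on its usage data. The CSV file, feature names and model choice are hypothetical, and scikit-learn is used purely for illustration, so treat this as an outline of the idea rather than a production pipeline.

```python
# Minimal sketch: a churn-prediction classifier of the kind a service
# might run on gathered usage data. The CSV file and feature names are
# hypothetical examples, not a real dataset.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Hypothetical per-user usage data: logins per week, minutes of use,
# support tickets raised, and whether the user eventually quit.
data = pd.read_csv("user_activity.csv")
features = data[["logins_per_week", "minutes_used", "support_tickets"]]
labels = data["churned"]

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.2, random_state=42
)

# Train a simple model and report how well it spots likely churners.
model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```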
But with more capability comes more code, and with more code come more bugs. Add to that the fact that AI is a young technology, and young technologies are as a rule less secure, and it’s easy to see why cybercriminals are taking advantage of this problematic new tool. The question is how they will continue to do so, and on what scale.
Goodbye Transparency
While recent headlines are hardly fear-mongering, it’s easy to predict a time when attackers finally harness the full criminal potential of AI to cause some serious damage. I believe the first case of this will come from audio fakes.
The technology to record, analyse and emulate a voice already exists, though it has yet to be exploited by criminals on any major scale, largely because the software is not publicly available. These things only stay locked away for so long, of course, and once one clever hacker finds a way through the fence, others will hastily follow.
Once it’s in their hands, there’s no end to how they can leverage it. Imagine a cybercriminal pretending to be you on the phone to your bank, equipped not only with your security details but a perfect replica of your voice to boot. Imagine your managing director or boss calling you in a panic, asking you to transfer valuable company information or complete a fraudulent wire transfer. The list goes on and on.
And it doesn’t stop there; deep fakes, in their most basic form, are already doing the rounds on social media. These clever misuses of AI not only copy your voice, they also generate video of you apparently saying those words over a webcam or FaceTime call. Once an attacker has the software to pull this off, watching and profiling you via your webcam to gather the information they need is the easy part. With these attacks, criminals could wreak just as much havoc as with audio fakes, with the added illusion of being physically present to quash any remaining suspicion.
A Call to Arms
AI shows no signs of slowing down; it’s effective and addictive, which is why we have adopted it with open arms. Clearly, there’s no going back now. As defenders, our next step has to be building the tools, security models and processes to combat the wave of deep fakes and beyond, securing a bright future with AI by our side, not against us.
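As a hedged illustration of what one such defensive tool might look like, the sketch below compares an incoming call against an enrolled recording using averaged MFCC features as a naive voice fingerprint. The file names and similarity threshold are hypothetical, and real anti-spoofing systems rely on far more robust speaker-verification models; this is only a sketch of the concept.

```python
# Sketch of a naive voice-consistency check: compare an incoming call
# against an enrolled recording using averaged MFCC features.
# File names and the similarity threshold are hypothetical; real
# anti-spoofing systems use far more robust speaker-verification models.
import librosa
import numpy as np

def mfcc_fingerprint(path: str) -> np.ndarray:
    # Load the audio and average its MFCCs over time into a single vector.
    signal, sample_rate = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=signal, sr=sample_rate, n_mfcc=20)
    return mfcc.mean(axis=1)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

enrolled = mfcc_fingerprint("enrolled_voice.wav")   # known-good sample
incoming = mfcc_fingerprint("incoming_call.wav")    # caller to verify

# Flag calls whose fingerprint drifts too far from the enrolled profile.
if cosine_similarity(enrolled, incoming) < 0.9:
    print("Warning: caller does not match the enrolled voice profile.")
```

Even a crude consistency check like this illustrates the shape of the problem: defenders need a trusted reference for what “real” sounds like before they can spot the fake.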