We’re in a rush. We have jobs to do, holidays to organize, families to get back to. We don’t have time to read those 10-page documents, comb through multiple websites listing every single thing to do in Barcelona, or think of what our significant other wants for their birthday. And what if I’m eager to know what my cat would look like as a person, but I only have a few minutes before I need to create a detailed menu for next week that will keep me within my calorie deficit? Thankfully, there is a tool for that.
We have all heard of it, and we have all used it, even if we didn’t realize we did. The ‘thinking machine’ of the 1950s, created by and available exclusively to the greatest minds of the era, became easily accessible to all in the 2010s under the names Siri and Alexa, and seeped into our everyday lives in the 2020s, when every company strives to be ‘powered by AI’ and even simple Google searches come with a neat summary of answers. Developments in generative AI created a machine that helps us by responding to text prompts while continuously learning both from every interaction and from the material of the ever-growing internet.
We have moved away from seeing AI as an evil robot that will eventually enslave humanity. It is our best friend, our little helper. So, it is not a surprise that many of us turn to it multiple times a day. Unfortunately, the dark side never sleeps, and bad actors keep coming up with ways to use AI tools for their own benefit: gathering comprehensive data on their targets, identifying high-value victims and exploitable vulnerabilities within their systems, and creating sneakier, harder-to-detect malware and automated cyberattacks. However, one of the most common ways remains the old-fashioned malicious download. A legitimate-looking application containing malicious code is a versatile tool that can be designed to infect systems and steal data, create botnets, or even act as ransomware.
The risks associated with malicious downloads are high, and no one knows it better than the Disney employee who, in July 2024, downloaded an infected AI image generator and ended up losing his data and his job. Van Andel, who was publicly ‘named and shamed’ by NullBulge, the bad actor responsible for the attack, downloaded a perfectly functional AI tool from GitHub, a publicly available collaborative software development platform. It is believed that the tool contained a keylogging Trojan that granted NullBulge access to Van Andel’s unsecured password manager, 1Password. That’s right: the unfortunate Disney employee not only downloaded a potentially suspicious piece of software but also failed to set up two-factor authentication (2FA), a security system that adds an additional layer of protection by requiring two forms of identification to access a service. Two major mistakes that led to major consequences.
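For readers wondering what that second form of identification looks like in practice, here is a minimal sketch of time-based one-time password (TOTP) verification, the mechanism behind most authenticator apps. It uses the third-party pyotp library; the secret and code are generated locally for illustration and do not belong to any real service.

```python
# Illustrative TOTP sketch (pip install pyotp). Purely a demonstration of
# the mechanism, not a reference to any specific service's implementation.
import pyotp

# Each account gets its own shared secret when an authenticator app is enrolled.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

print("Current one-time code:", totp.now())

# At login, the service checks the submitted code against the shared secret.
# A stolen password alone is useless without this constantly rotating code.
user_code = totp.now()  # stand-in for the code the user would type in
print("Second factor accepted:", totp.verify(user_code))
```

Had a code like this been required to open the password vault, a keylogged master password alone would not have been enough.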
As it turned out, Van Andel’s 1Password account contained credentials that allowed access to Disney’s internal systems, significantly magnifying the breach’s impact. The victim’s login details gave the hacker access to almost 10,000 Disney Slack channels containing approximately 44 million messages dating back to 2019, as well as 1.1 terabytes of data that included financial information and details about unreleased projects. Unsurprisingly, Disney terminated Van Andel’s employment shortly after the attack, once the cause of the breach was determined.
While it can be argued that AI was not the problem in the case of the Disney data breach, the fact that AI tools are popular enough to be used as an attack vector is what adds a level of risk. Generating images of morally grey copyright status is hardly a necessity, yet our desire for a quick entertainment fix, for an immediate, detailed answer to any question we could possibly have, is what drives bad actors to produce infected versions of the tools we so badly crave. There is nothing wrong with using the available tools; they are there for a reason. However, it only takes one wrong download to unleash large-scale trouble.
In 2024, yet another collaborative community, Hugging Face, was discovered to be hosting over 100 malicious code-execution models. Created as an open-source platform and community for AI enthusiasts, researchers, and developers, it hosts hundreds of thousands of models, datasets, and demos, helping make AI more accessible. It comes as no surprise that many bad actors see it as an opportunity. The consequences of using AI tools that have been tampered with depend on the hacker’s goal. They can include data poisoning, in which attackers insert incorrect or malicious data into the training dataset, causing the model to learn wrong patterns and make potentially harmful decisions. A tampered model can also carry out remote code execution, allowing bad actors to run code and execute commands on the victim’s machine. And, of course, it can be programmed for data theft.
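How can merely loading a model execute code? Many model files are serialized with Python’s pickle format, and unpickling can invoke arbitrary callables. The self-contained sketch below is a classic textbook illustration of that mechanism, not a payload from the Hugging Face incident; it runs a harmless shell echo the moment the bytes are deserialized.

```python
import os
import pickle

# A class whose __reduce__ tells pickle to call os.system during loading.
# A malicious model file can embed the same trick alongside its weights.
class NotReallyAModel:
    def __reduce__(self):
        return (os.system, ("echo 'this ran the moment the file was loaded'",))

blob = pickle.dumps(NotReallyAModel())
pickle.loads(blob)  # the command executes here -- no method was ever called
```

This is why safer serialization formats such as safetensors, which store only raw tensor data rather than executable objects, are increasingly preferred for distributing model weights.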
Although there aren’t any publicly known incidents related to the Hugging Face discovery, and there does not seem to be a direct connection between it and the Disney breach, it makes us contemplate how unofficial AI helpers can pose a threat to every user who comes into contact with them. Therefore, as with any download from an open platform, a degree of suspicion is necessary. While it is impossible to be 100% secure, practicing safe internet browsing can minimize the risks.
- Trusted providers – only obtain software from trusted sources, and verify that what you downloaded is what the publisher actually released (see the checksum sketch after this list).
- 2FA/MFA – enable multi-factor authentication for both work and personal services, so that a single stolen credential cannot unlock everything in the unfortunate event of an infection.
- Education and awareness – educate yourself and stay informed about cybersecurity threats and best practices, not only at a corporate level but at a personal one as well.
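One practical way to act on the first point is to compare a download’s cryptographic hash against the one the publisher lists on its release page. A minimal sketch follows; the file name and published hash are placeholders to be replaced with real values.

```python
import hashlib

def sha256_of(path: str) -> str:
    """Stream the file in chunks so even large downloads hash in constant memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Placeholder values: substitute the downloaded file and the hash taken
# from the publisher's official release page.
PUBLISHED_HASH = "0000000000000000000000000000000000000000000000000000000000000000"

if sha256_of("ai-image-tool.zip") != PUBLISHED_HASH:
    raise SystemExit("Checksum mismatch: do not run this download.")
print("Checksum matches the publisher's value.")
```

A matching hash does not prove the publisher is trustworthy, but it does prove the file was not swapped or corrupted somewhere between their server and your disk.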
The topic of AI is a controversial one, with many pros and just as many cons to its usage. It has become an indispensable tool, helping us manage tasks, organize our schedules, and even entertain us. However, the Disney breach serves as a stark reminder that the convenience of AI comes with significant risks. AI does seem to be here to stay, and if we are to coexist with it, we need to remain cautious in order to reap its benefits safely.
Sources
https://futurism.com/the-byte/life-destroyed-ai
https://www.infosecurity-magazine.com/news/malicious-ai-models-hugging-face/
The information contained in this article is provided for informational purposes only, does not constitute professional advice, and is not guaranteed to be accurate, complete, reliable, current, or error-free.