A report from cybersecurity platform Barracuda has revealed that bad bots have become more advanced, human-like and sophisticated as online businesses become more effective at combatting them.
Bots are automated software programs designed to perform online activities at scale and can be good or bad.
The report, which analyses the latest trends in bot-related traffic and activity, found that while the proportion of bad bots in internet traffic has declined from 39% in 2021 to 24% in 2024, the number of individual bad bots has risen to 44% of detected clients, up from 36% last year.
Barracuda researchers noted that while a decrease in the share of bad bot traffic might seem like good news, a deeper analysis shows the variety of individual bad bots in circulation has grown.
In other words, there is less traffic on the road, but many more makes of bad vehicles.
What’s more, almost half of the bots classified as advanced are malicious, designed to mimic human behaviour and handle complex online interactions, such as engaging with targets in account takeover attacks, the report found.
The good, the bad and the grey
Good bots include search engine crawler bots, SEO bots, and customer service bots that can help organisations streamline processes, increase efficiency and strengthen customer interactions.
Bad bots are designed for malicious or harmful online activities and can be deployed against websites, servers, application programming interfaces (APIs), and other endpoints.
Bad bots target e-commerce and login sites with the aim of breaching accounts to steal personal data or commit fraud. They can also exploit vulnerabilities in websites for access, overload the target with traffic, spread spam, skew business analytics and disrupt services for legitimate customers.
Barracuda researchers also noted an emerging category of AI bots which are “blurring the boundary of legitimate activity”.
Called “grey bots”, they’re not overtly malicious but are designed to scrape large volumes of data from websites without permission for the purpose of training generative AI models.
These bots can be aggressive, possibly ignoring the robots.txt file that publishers add to their sites to signal which content scraper bots should not collect.
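For illustration, a publisher wanting to opt out of AI-training crawls might add rules like the following to its robots.txt (GPTBot and CCBot are real crawler user agents, but the exact set a publisher blocks is a choice, and, as the report notes, compliance is entirely voluntary):

```
# Ask known AI-training crawlers not to collect site content
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

# All other crawlers may access the site
User-agent: *
Allow: /
```

A grey bot that disregards this file can still scrape the site; robots.txt is a signal, not an enforcement mechanism.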
ML to tackle bot attacks
The report states that the general decline in bad bot traffic detections is driven both by growing awareness of the threat and reduced demand for mass-automated shopping bots.
Barracuda recommends businesses take a multilayered approach to combat bots, including robust application security as well as specialised bot protection.
The report says businesses should take advantage of machine learning, which can effectively detect and block hidden, almost-human bot attacks.
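As a toy illustration of the underlying idea (not Barracuda's actual approach, which uses machine learning over far richer behavioural signals), a bot-detection system scores each client on how bot-like its traffic looks; the features and weights below are invented for demonstration:

```python
def bot_score(requests_per_minute: int,
              has_browser_headers: bool,
              follows_robots_txt: bool) -> int:
    """Return a score from 0 to 100; higher means more bot-like.

    Illustrative heuristic only. Real systems learn such weights
    from traffic data rather than hard-coding them.
    """
    score = 0
    if requests_per_minute > 60:   # humans rarely sustain this request rate
        score += 50
    if not has_browser_headers:    # missing headers a real browser would send
        score += 30
    if not follows_robots_txt:     # ignores publisher crawl directives
        score += 20
    return score

# A client hammering an endpoint with no browser headers scores high:
print(bot_score(200, False, False))  # prints 100
# A slow, well-behaved browser session scores low:
print(bot_score(10, True, True))     # prints 0
```

Advanced bots defeat simple rules like these by pacing their requests and faking browser fingerprints, which is why the report recommends machine learning models that weigh many subtle signals together.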
“Authentication controls, including multi-factor authentication, will help to secure vulnerable access points such as login pages from brute force and credential stuffing attacks,” it advised.