It’s hard not to love the idea of a bot that simplifies ecommerce, enhances messaging, or even arranges your vacation travel end to end: transport to the airport, booking a flight, reserving a hotel, and the return trip. But there is another class of bots, bad bots, that you should be very wary of. They’ve been around for years, but with the rapid embrace of chatbots, the security threat they pose has grown. In fact, about 40 percent of all bot traffic is malicious, according to security firm Distil Networks.
That means for every cool new weather or shopping bot, there is probably a sinister equivalent being hatched somewhere. “Bots are the centerpiece of a hacker’s toolkit,” says Rami Essaid, CEO and cofounder of Distil Networks, a five-year-old company specializing in bot detection and mitigation for a broad range of customers, from AOL to Staples to StubHub. Last summer, the San Francisco-based firm raised $21 million in a Series B round led by Bessemer Venture Partners, bringing its total funding to about $35 million. Earlier this year the firm released its 2016 Bad Bots Landscape Report.
Hackers have long used automation tools such as Burp Suite to intercept and tamper with web traffic, and Metasploit to probe systems for exploitable vulnerabilities. Until recently, bad bots commonly targeted companies’ line-of-business operations to steal competitive information like pricing and inventories, intellectual property, and, of course, financial information.
“Now we’re seeing a lot more interest from companies seeking to protect business-to-consumer activities,” says Essaid. “Bad guys are buying lists of user names and passwords and then brute-forcing their way into banking, ecommerce and health care, as well as the Postal Service and the IRS.”
While the harm from bots at the enterprise level often doesn’t generate headlines, incidents involving consumers do. New York Attorney General Eric Schneiderman’s office recently released a report about abuses in the ticketing industry. One incident it cited: a scalper’s bad bot purchased 1,012 tickets to a U2 concert within the first minute they went on sale.
With all the excitement around bots, it’s not difficult to imagine a consumer being tricked into downloading a bad bot, akin to falling for a phishing scheme or the fraudulent pleas of a Nigerian prince offering to wire you money. “As the appeal of bots for consumers widens, so do the risks. More bots mean more potential access points for bad bots,” says Essaid. He adds that his firm has seen an increase in bot attacks against web apps, APIs, and native apps.
Good bots will also be targets for hackers, but vetting by platforms will lessen the risks. “Responsibility for security absolutely lies on the platform level,” says Essaid. He points to the submission process of Apple’s App Store as an important example of how bots should be reviewed and consumers safeguarded.
Security is, in many ways, a cat-and-mouse game. Just as good bots will become more sophisticated, so will bad bots increase their ability to evade detection, load malicious code, and imitate human behavior.
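To make the cat-and-mouse dynamic concrete, here is a deliberately naive sketch, in Python, of one of the simplest signals a bot-mitigation service might use: a sliding-window request-rate check. The class name, threshold, and window size are hypothetical illustrations, not anything Distil Networks has described; real systems combine many such signals (headers, fingerprints, behavioral analysis) precisely because bots that imitate human pacing slip past a check this simple.

```python
from collections import defaultdict, deque

class NaiveBotDetector:
    """Flags clients that exceed a request-rate threshold.

    Illustrative only: a bot that randomizes its timing to look
    human will evade this check, which is the cat-and-mouse point.
    """

    def __init__(self, window=10.0, max_requests=20):
        self.window = window            # seconds of history to consider
        self.max_requests = max_requests  # requests allowed per window
        self.history = defaultdict(deque)  # client_id -> timestamps

    def record(self, client_id, timestamp):
        """Record a request; return True if the client looks automated."""
        q = self.history[client_id]
        q.append(timestamp)
        # Drop timestamps that have aged out of the sliding window.
        while q and timestamp - q[0] > self.window:
            q.popleft()
        return len(q) > self.max_requests
```

A scripted client firing ten requests per second trips the threshold within a few seconds, while a person clicking every few seconds never does; evasive bots simply throttle themselves into the "human" band, forcing defenders to add further signals.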
While it’s too early to say how much of a problem bad bots will be for consumer-facing bots, there’s little doubt of the looming threat, particularly with the pervasiveness of mobile phones. Essaid estimates that, for now, bad bots found on people’s desktop computers outnumber mobile bots six or seven to one. “Bad bots are more prevalent on hardwired networks than on mobile, but their numbers are growing,” he says. “The more accessibility to install and download bots there is, the more wary you need to be.”