A Layered Approach to Mitigating Brute Force and Automated Credential Stuffing Attacks
Written by David McClure, Product Manager at US Signal
There’s no shortage of ways for cybercriminals to wreak havoc. Among them are Account Takeover (ATO) attacks, which include brute force attacks and automated credential stuffing.
Brute force attacks involve guessing usernames and passwords to gain system access. Credential stuffing is the automated use of stolen usernames and passwords to breach a system.
Why attacks are so prevalent
With the pervasiveness of open source software and cheap cloud hosting, an attacker can easily and inexpensively spin up countless servers to run these types of attacks. Botnets, composed of Internet-connected devices that attackers have compromised, often play a role as well. Cybercriminals simply load lists of stolen credentials and common passwords into their software and use their compute resources and botnets to attack the victim’s login page.
Stringent password requirements make it more difficult for cybercriminals to guess or steal passwords. But because people still tend to reuse the same passwords or variations of them, it’s not that hard for persistent cybercriminals to guess them. User credentials can also be exposed through previous data breaches and successful phishing campaigns, often ending up for sale on the Dark Web.
Both brute force attacks and credential stuffing attacks rely on scale to increase the odds of success. Simply put, the more password guesses or attempts attackers make, the better their chance of success. As the volume of requests hitting a website grows, servers are more likely to be overwhelmed, internet bandwidth consumed, and the victim left facing a Denial of Service (DoS).
Website and Application Security (WaAS) in Action
One attack our US Signal team witnessed involved a 100x increase in website traffic, with the customer’s website going down multiple times over a period of days. By implementing our Website and Application Security (WaAS) service, which includes DDoS mitigation, we were able to filter out malicious traffic before it reached the customer’s network and restore service.
We also got visibility into the traffic and saw that it was flowing directly to the login for the customer portal. While it was coming from all over the world, the primary traffic was from outside of the United States. It wasn’t unusual for the customer to have traffic from surrounding countries, but it was obvious the volume of traffic from certain countries and from specific traffic sources was not normal.
When dealing with an automated attack like this on a website, you don’t want to block real customers as you try to halt attack traffic. What we recommend, and what we did for our customer, was to combine rate limiting, filtering based on whether the user was using a real browser, and CAPTCHA challenges to verify that a user was a real human.
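The combination described above can be sketched as a simple per-request decision function. This is a minimal illustration, not our production configuration; the window size, attempt limit, and the User-Agent heuristic are assumed values for the example.

```python
import re
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60   # assumed sliding window for the rate limit
MAX_ATTEMPTS = 5      # assumed login attempts allowed per IP per window
BROWSER_UA = re.compile(r"Mozilla|Chrome|Safari|Firefox|Edge")

attempts = defaultdict(deque)  # ip -> timestamps of recent login attempts

def decide(ip, user_agent):
    """Return 'block', 'challenge', or 'allow' for one login request."""
    now = time.time()
    recent = attempts[ip]
    # Drop attempts that have aged out of the sliding window.
    while recent and now - recent[0] > WINDOW_SECONDS:
        recent.popleft()
    recent.append(now)
    if not BROWSER_UA.search(user_agent or ""):
        return "block"        # no browser-like User-Agent: filter it out
    if len(recent) > MAX_ATTEMPTS:
        return "challenge"    # over the rate limit: present a CAPTCHA
    return "allow"
```

A real deployment layers these checks at the edge (WAF or CDN) rather than in application code, but the ordering is the same: cheap filters first, human challenges only when a request looks suspicious.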
While human challenges work well for protecting a website, they aren’t effective for an API. APIs are often used with apps on a mobile device, with an app using the API to communicate back and forth with the server. Without a browser in the middle, there’s no opportunity to use a human challenge without custom software development work.
Rate limiting is often touted as an effective control for APIs, but it can’t stop an attack; it only slows one down. There’s also a fine line between filtering attack traffic and accidentally blocking real users.
When you use rate limiting, each visitor is evaluated on a specific page, the login page in our customer’s case. If there are thousands or tens of thousands of automated bots, then each one gets a certain number of guesses before hitting the rate limit. That can add up to a lot of guesses over time. At this point, you’re basically playing the old Whack-A-Mole game: trying to knock down new traffic sources or block new user agents every time the attacker adjusts to get around the rate limiting.
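The arithmetic behind this point is worth making explicit. The numbers below are illustrative assumptions, not figures from the incident, but they show how a botnet defeats per-IP limits at scale.

```python
# Back-of-the-envelope: why per-IP rate limits alone can't stop a botnet.
# All figures are assumed for illustration.
bots = 10_000          # compromised devices in the botnet
limit_per_hour = 5     # login attempts each IP is allowed per hour
hours = 24

total_guesses = bots * limit_per_hour * hours  # 10_000 * 5 * 24
print(f"{total_guesses:,} guesses/day")        # prints "1,200,000 guesses/day"
```

Every individual bot stays politely under the limit, yet the attacker still gets over a million guesses a day against the login endpoint.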
The Addition of Bot Management
For the situation we were dealing with for our customer, we brought in a specialized tool, known as Bot Management, that layers into our existing WaAS service. It combines behavioral analysis with threat intelligence feeds, heuristics, and machine learning to filter traffic coming in from automated bots. When configuring the tool, we created rules that allowed the platform to identify normal traffic and filter out what wasn’t normal. As it “learned,” it continually made adjustments to better filter traffic even as it changed.
The Bot Management platform blocked around 70% of all malicious traffic to the API almost immediately. Over the next four days, that number climbed to ~99.9%. A few days later, the customer informed us that no malicious requests had hit their API in days.
For the regular website login page, we layered human challenges into Bot Management, challenging only users that appeared to be automated bots. End users who successfully solve the puzzles are less likely to be bots. The Bot Management platform continued to learn over time, further separating good traffic from bad without blocking real users. Because the platform learns from users that solve challenges, those users are less likely to see challenges in the future.
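The challenge-only-when-suspicious behavior can be sketched as threshold logic over a bot score. The score source, threshold, and `verified` bookkeeping here are hypothetical stand-ins for what a bot-management engine provides.

```python
# Hypothetical sketch: challenge only likely bots, and remember visitors
# who solve a challenge so they aren't re-challenged.
verified = set()  # IPs that recently solved a human challenge

def handle(ip, bot_score, threshold=0.7):
    """bot_score in [0.0, 1.0] as produced by a bot-management engine (assumed)."""
    if ip in verified:
        return "allow"          # proved human recently; skip the challenge
    if bot_score >= threshold:
        return "challenge"      # looks automated; show a CAPTCHA
    return "allow"

def challenge_solved(ip):
    verified.add(ip)            # the engine "learns" this visitor is human
```

Real platforms score on far richer signals (behavioral analysis, threat intelligence, device fingerprints) and expire verification, but the flow is the same: suspicious traffic earns a challenge, and solving it buys trust.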
Not all automated traffic and bots are malicious. You may want some of them, such as search engine crawlers or the crawler used for Alexa devices, to reach your site. The tool we use includes a feature that allows good bots through. Other tools require manual configuration to distinguish between good and bad traffic.
Protect your Websites and APIs
There are only a handful of effective security solutions specifically designed to stop automated traffic such as that seen in credential stuffing attacks. We’re fortunate to have one that we can leverage with our existing solutions. If you’re interested in learning more about this solution or any of the other IT security services we offer at US Signal, let us know. Call 866.2.SIGNAL or email [email protected].