PerlMonks

Re^2: RFC / Audit: Mojo Login Example

by jcb (Priest)
on Mar 23, 2020 at 22:15 UTC ( #11114583 )


in reply to Re: RFC / Audit: Mojo Login Example
in thread RFC / Audit: Mojo Login Example

The problem is that, if you do proper hash stretching on the server, the server must perform a fairly expensive operation before it can reject an incorrect password. This means that brute-force password guessing doubles as a denial-of-service attack, and the best you can do is throttle login attempts somehow.
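One way to throttle is a cheap per-address check done before the expensive hash verification is ever attempted. A minimal sketch in Python for illustration (the window size, attempt limit, and in-memory store are assumptions, not part of the Mojo example):

```python
import time
from collections import defaultdict, deque

# Illustrative limits; tune for your own traffic.
WINDOW_SECONDS = 60
MAX_ATTEMPTS = 5

_attempts = defaultdict(deque)  # ip -> timestamps of recent attempts


def allow_login_attempt(ip, now=None):
    """Cheap check done BEFORE the expensive password-hash verification."""
    now = time.monotonic() if now is None else now
    q = _attempts[ip]
    # Drop attempts that have fallen out of the sliding window.
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()
    if len(q) >= MAX_ATTEMPTS:
        return False  # throttled: skip the costly hash stretch entirely
    q.append(now)
    return True
```

Because the throttle is just a deque lookup, a rejected guess costs almost nothing, while legitimate users under the limit still get the full (expensive) verification.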

A simple CAPTCHA is a good option for this; asking for the solution to a simple math problem will confound most bot herders and allow you to prioritize actual users' requests ahead of a bot horde. This has to be site-wide, not per-user, however, and is probably best accompanied by an explanation that the server is under high load due to password-guessing attacks and that solving the CAPTCHA will get your request priority. Tarpit requests that lack a CAPTCHA solution until they time out, if you can.
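A minimal sketch of such a math CAPTCHA, assuming a stateless HMAC-signed challenge so the server stores nothing per challenge (the SECRET value and the question format are hypothetical):

```python
import hashlib
import hmac
import random

# Hypothetical server-side secret; in practice load it from configuration.
SECRET = b"change-me"


def new_challenge(rng=random):
    """Return a human-readable question plus a token signing the answer."""
    a, b = rng.randint(1, 9), rng.randint(1, 9)
    question = f"What is {a} + {b}?"
    token = hmac.new(SECRET, str(a + b).encode(), hashlib.sha256).hexdigest()
    return question, token


def check_answer(answer, token):
    """Verify a submitted answer against the signed token, in constant time."""
    expected = hmac.new(SECRET, str(answer).encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token)
```

The token travels with the form, so no per-challenge state has to survive between the challenge and the login attempt; a real deployment would also bind a timestamp into the token so old solutions expire.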

A large botnet can produce a very diffuse attack, somewhat reducing the effectiveness of filtering by IP address, and storing IP addresses raises privacy concerns. But if your users' accounts are linked to real-world identities anyway (for example, you are running a paid service), the privacy concerns are less severe, and you may want to store commonly-used IP addresses per-user and give priority to logins originating from IP addresses or IP address blocks that a user has previously used. Associating processing priority with how many logins have been seen from the same IP address could demote login attempts from password-guessing bots to "idle" priority, taking perhaps minutes, while actual users see quick logins in less than a second.
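The per-user IP history idea can be sketched as a small priority function; the three tiers and the /24 (IPv4) and /64 (IPv6) block sizes are assumptions for illustration:

```python
import ipaddress


def login_priority(user_known_ips, client_ip):
    """Return 0 (highest) to 2 (lowest) based on the user's IP history."""
    ip = ipaddress.ip_address(client_ip)
    known = [ipaddress.ip_address(k) for k in user_known_ips]
    if ip in known:
        return 0  # exact address seen before: serve first
    # Same block as a previously used address: likely the same user
    # whose ISP rotated their address.
    prefix = 24 if ip.version == 4 else 64
    net = ipaddress.ip_network(f"{ip}/{prefix}", strict=False)
    if any(k in net for k in known):
        return 1
    return 2  # unknown origin: queue behind known users
```

A login queue could then serve priority-0 and priority-1 requests promptly and let priority-2 requests (where password-guessing bots would land) wait behind them.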

Replies are listed 'Best First'.
Re^3: RFC / Audit: Mojo Login Example
by haj (Deacon) on Mar 24, 2020 at 15:29 UTC

    I don't mind CAPTCHAs to slow down bots (or, more likely, have them skip that particular target). On the other hand, in my opinion assigning different priorities isn't worth the effort, or is at least way outside the scope of this example. To find out whether a particular request is a login, the example code needs to go through the routing table in the application, so you've already passed any front ends which might be able to schedule requests according to some priority. Running another backend layer just for logins seems like over-engineering.

    In general, any proactive measures against bad bot behavior are an uphill struggle, even more so in an open source environment. Bot developers are at an advantage: they see your code and can design their attack methods accordingly. This imbalance is why I recommend security logging by the application, even in a simple example like this. The application can help detect the attack pattern, or leave that job to security specialists, but only if the data are made available by the application. In particular, making the log entries machine-readable is something the application must take care of.
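A sketch of what machine-readable security logging might look like, emitting one JSON object per line so log processors can parse each event; the field names are assumptions, not from the example under discussion:

```python
import json
import logging
import sys

logger = logging.getLogger("security")
_handler = logging.StreamHandler(sys.stdout)
_handler.setFormatter(logging.Formatter("%(message)s"))
logger.addHandler(_handler)
logger.setLevel(logging.INFO)


def failed_login_event(username, client_ip, reason):
    """Build one machine-readable event as a plain dict."""
    return {
        "event": "login_failed",
        "user": username,
        "ip": client_ip,
        "reason": reason,
    }


def log_failed_login(username, client_ip, reason):
    # One JSON object per line: trivially parseable by downstream tools.
    logger.info(json.dumps(failed_login_event(username, client_ip, reason)))
```

With entries like these, detecting an attack pattern (many `login_failed` events spread across usernames from one address block, say) becomes a query over structured data rather than a regex hunt through free-form text.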

