cgi and https (mildly off topic)

by coolmichael (Deacon)
on Nov 27, 2001 at 08:27 UTC
coolmichael has asked for the wisdom of the Perl Monks concerning the following question:

I have some (mildly) sensitive personal information in a database that I would like to make publicly accessible via the web and CGI. It will be username/password access only. What is the best way to go about this? I'm asking about things like .htaccess and .htpasswd, or https. We don't strictly need 128-bit https, but the users will want some security.

As far as https goes, I've never used it before, and would have to set it up myself. Are there any concerns I need to be aware of?

Also, how well does .htaccess work for protecting CGI scripts?

I've thought about using JavaScript in the web pages to encrypt the password, but then someone could still sniff the account information.

Michael

Re: cgi and https (mildly off topic)
by IlyaM (Parson) on Nov 27, 2001 at 08:51 UTC
    HTTPS and HTTP authentication with .htaccess and .htpasswd are not mutually exclusive things.

    HTTPS provides an SSL layer for data transfer between server and client. SSL prevents a third party from sniffing that network traffic and gives the client some assurance that the server has not been substituted by a cracker's machine. However, it does not by itself authenticate the client.

    HTTP Basic authentication with .htaccess and .htpasswd can be used to authenticate clients. The protocol passes the username and password essentially in clear text (strictly speaking, base64-encoded, which is trivially decoded). So without an additional layer of encryption (such as SSL) it is an easy target for sniffing attacks.
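
    For example (just an illustration; the username "bob" and password "secret" are made up), the value a browser puts in the "Authorization: Basic ..." header is nothing more than the credentials run through base64:

        use MIME::Base64 qw(encode_base64 decode_base64);

        # What the browser sends for user "bob" with password "secret":
        my $header_value = encode_base64("bob:secret", "");   # Ym9iOnNlY3JldA==

        # Anyone sniffing the connection can reverse it in one call:
        print decode_base64($header_value), "\n";             # prints: bob:secret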

    Please note that HTTP authentication is not the only way to authenticate clients (though it is probably the simplest to set up, since it requires no coding). It is also common to use cookies for this task (as the PerlMonks website does, for example).
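
    A very rough sketch of the cookie approach with CGI.pm and CGI::Cookie (the session id here is just an MD5 of some changing values, and the server-side session store is left out entirely):

        #!/usr/bin/perl -wT
        use strict;
        use CGI qw(header);
        use CGI::Cookie;
        use Digest::MD5 qw(md5_hex);

        # After a successful login, issue a random session id in a cookie.
        # A real script would also record the id server-side (file, DBM,
        # database) together with the username and an expiry time.
        my $session_id = md5_hex(time() . $$ . rand());

        my $cookie = CGI::Cookie->new(
            -name  => 'session',
            -value => $session_id,
            -path  => '/',
        );
        print header(-cookie => $cookie), "logged in\n";

        # On later requests the browser sends the cookie back, and the
        # script checks the id against its session store instead of
        # asking for the password again:
        #
        #   my %cookies = CGI::Cookie->fetch;
        #   my $id = $cookies{session} ? $cookies{session}->value : '';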

    What you need is probably both: SSL plus some method of client authentication (for example, HTTP Basic authentication).

    As for using JavaScript in the web pages to encrypt the password: since you haven't given any details, I can't say whether it is insecure. But unless you use some kind of asymmetric cryptography, it will always be vulnerable to sniffing. Do you?

Re: cgi and https (mildly off topic)
by Dogma (Pilgrim) on Nov 27, 2001 at 09:17 UTC
    The post from IlyaM does a good job of explaining SSL and auth. (This is now off topic for PerlMonks.) Using SSL is pretty much a given for anything you don't want the entire world to know. What you need to decide now is how many users this site will have and whether managing .htaccess is viable. For just a couple of users, .htaccess should work fine; with anything more than a few users it will soon become a pain to manage. There are ways to replace .htaccess with an SQL lookup. Of course, if you need session tracking as well, it's much easier to just use cookies. If you do decide to go that route, please take the time to become informed about the security issues involving cookies.
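
    A rough sketch of the SQL-lookup idea (the database name, table and column names are invented for illustration, and it assumes the passwords were stored with Perl's built-in crypt()):

        use strict;
        use DBI;

        my $dbh = DBI->connect('DBI:mysql:database=site', 'webuser', 'dbpassword',
                               { RaiseError => 1 });

        # Return true if the supplied username/password pair matches a row
        # in the (hypothetical) users table.
        sub check_login {
            my ($username, $password) = @_;
            my ($stored) = $dbh->selectrow_array(
                'SELECT password FROM users WHERE username = ?', undef, $username);
            return 0 unless defined $stored;
            # crypt() reuses the stored value as the salt, so the two
            # strings are equal only if the password is right.
            return crypt($password, $stored) eq $stored;
        }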
Re: cgi and https (mildly off topic)
by NodeReaper (Curate) on Nov 27, 2001 at 09:18 UTC

    Reason: (footpad) Content Posted in a Void Context (nee No Content)


Re: cgi and https (mildly off topic)
by Ryszard (Priest) on Nov 27, 2001 at 09:30 UTC

    Why not use session management so people have to log in to use the service? It's easy to build yourself, or even easier if you download a module to do it for you.

    Whatever you do, use perl -wT, and *don't* put any JS in web pages to encrypt anything. All passwords should be stored server-side with some kind of one-way hash (MD5, SHA-1); I prefer hashing over encrypting because you don't need to leave a key lying about somewhere. The incoming password is then captured, untainted, hashed and compared to the one that is stored.
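
    A rough sketch of that flow as a single CGI script (the stored digest and the field name 'password' are placeholders; a real script would read the digest from the database):

        #!/usr/bin/perl -wT
        use strict;
        use CGI;
        use Digest::MD5 qw(md5_hex);

        my $q = CGI->new;

        # The stored value would come from your user database; this is a
        # placeholder (the hex MD5 digest of the real password).
        my $stored_digest = 'replace-with-stored-hex-digest';

        # Untaint the incoming password: accept only 1-40 non-space chars.
        my $raw = $q->param('password');
        my ($password) = (defined $raw ? $raw : '') =~ /^(\S{1,40})$/
            or deny();

        if (md5_hex($password) eq $stored_digest) {
            print $q->header, "...the sensitive information goes here...\n";
        } else {
            deny();
        }

        sub deny {
            print CGI->new->header(-status => '403 Forbidden'), "Access denied\n";
            exit;
        }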

    If possible, put the backend storage machine on a private network so it is harder to get to (but that may be overkill in your situation).

    As a rule, untaint *everything* that comes from outside your script (which is what -T actually enforces).

    By far the easiest method is to have one script that accepts a password and, only if the password is correct, prints out the right information.
    The downside is that it is *so* unscalable, and you need to re-enter the password each time you want to review the information.

    Just the disjointed ravings of a crazed lunatic.

Re: cgi and https (mildly off topic)
by Spenser (Friar) on Nov 27, 2001 at 10:56 UTC

    As Dogma has alluded to, .htaccess can be problematic when dealing with many users and a lot of usage. Besides what has already been pointed out here, .htaccess also has a problem with repeated verification. Once a user enters a directory containing a .htaccess file, the browser is called upon to provide user name and password for each page requested. The user is only asked once, but the browser, behind the scenes, is having to authenticate with each page requested. This slows things down a bit and could be a real drain on the system with a large number of users.

    A good alternative to .htaccess is to modify the Apache configuration file. In RedHat Linux 7.0 it's /etc/httpd/conf/httpd.conf. Check the man pages for httpd on your system, if different. The advantage is that when you enter the protected directory, authentication is conducted only once and not repeated with each page retrieved. This option may not be available, though, if you're renting space on someone else's server.

    Assuming you do have access to httpd.conf, here's a sample of text you would include in the configuration file:

    <Directory "/var/www/html/intranet">
        Options Indexes Includes FollowSymLinks
        # no .htaccess overrides; everything is configured right here
        AllowOverride None
        AuthType Basic
        AuthName staff
        AuthUserFile /var/www/users/staff
        AuthGroupFile /var/www/users/groups
        # let either the host check or the password satisfy access
        Satisfy any
        require valid-user
        require group staff
        Order deny,allow
        Deny from all
        Allow from 10.1.71.0/24
    </Directory>

    A block like this needs to go in the right general location in the httpd.conf file. Just search for <Directory to find the existing examples and place it in that area.

    You'll notice that I specify the protected directory in the opening tag (with no trailing slash). I also specify where to find the user file (staff), which contains the usernames and their encrypted passwords.

    Read the man pages on htpasswd, obviously. But, basically, you create the user file in the directory you want and your first user by typing the following command at the command prompt:

        htpasswd -c staff bob

    You'll be prompted to enter the user's password twice. To add more users to this "staff" file, type:

        htpasswd staff ted
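
    The resulting file (the one AuthUserFile points at) has one line per user, in the form username:crypted-password; the crypted values below are placeholders, not real htpasswd output:

        bob:PLACEHOLDER-CRYPTED-PASSWORD
        ted:PLACEHOLDER-CRYPTED-PASSWORD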

    One last comment about my sample configuration: I'm protecting an intranet section of my web site for employees to use from home or work. If they're outside the office, I want them to be authenticated so I can be sure of who it is that's coming in. However, if they're inside my local network, I don't want them to have to worry about authenticating. So I've added the line "Allow from 10.1.71.0/24" where my network subnet is 10.1.71.x.

      There is no way for HTTP authentication to be conducted only once and not repeated with each page retrieved. Read the RFC for HTTP 1.1: Basic HTTP authentication requires the user agent to send the username and password on each HTTP request for the protected area.
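
      You can see it from the client side too (the URL, username and password here are made up): an LWP script has to attach the credentials to every single request, because the server never "remembers" the login:

          use strict;
          use LWP::UserAgent;
          use HTTP::Request;

          my $ua = LWP::UserAgent->new;

          # Every request into the protected area carries its own
          # "Authorization: Basic ..." header.
          for my $page ('index.html', 'report.html') {
              my $req = HTTP::Request->new(GET => "http://www.example.com/intranet/$page");
              $req->authorization_basic('bob', 'secret');
              my $res = $ua->request($req);
              print "$page: ", $res->status_line, "\n";
          }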

        I'm sorry to have taken so long to respond to your comments, IlyaM, but I got sidetracked with other activities and I needed time to ponder your comments. I agree with them, but to the extent that they contradict mine, I'm now confused.

        What you said makes sense as I understand httpd: After a page is requested from Apache and delivered, the relationship is terminated, the daemon dies along with all references to the client. If this understanding of mine is incorrect, please correct me.

        My error seems to come from my reading of a line in O'Reilly's book, Apache: The Definitive Guide (2nd Edition) by Ben and Peter Laurie. In Chapter 5 (Authentication), on page 126, in the section entitled "Using .htaccess Files," it says:

        "The drawback to the .htaccess method is that the files are parsed for each access to the server, rather than just once at startup, so there is a substantial performance penalty."

        Honestly, I think you're right. I must be misreading O'Reilly's book. I know it's not your job to defend O'Reilly, but I'm trying to reconcile the two logical comments. Incidentally, I think this relates to Perl and PerlMonks in that CGI.pm is very widely used by Perl programmers.

        Please let me know what you think.

        -Thanks.
