
(webappdev) database centralization ease + client-side file caching?

by princepawn (Parson)
on Jul 15, 2001 at 23:18 UTC ( [id://96884] )

princepawn has asked for the wisdom of the Perl Monks concerning the following question:

When designing a website, I find it mentally less challenging to use a database alone as opposed to a database plus a bunch of files. For example, if I have a table called PLAYER, then I prefer to have all aspects of that player centralized in a table instead of, say, keeping the player's picture on disk somewhere and having only the player's name as a column in the table. This to me has many advantages:
  1. Creating a new player is not a hybrid task of adding a database row and managing a file hierarchy for him.
  2. I don't have to keep track of two different access policies.
  3. Player deletion is fairly straightforward too.
Now the disadvantage is that it is much harder to get the client to cache image data (say, the player's picture) when I am serving it out of a database based on a query string than when the browser is fetching the full URL of a static image file.

Is there an easy way to reconcile the two? One thing which comes to mind is path translation: a URL that looks like a static image file
really gets served by an Apache handler à la:
SELECT pic FROM player WHERE username = 'bobo';
but I don't have the luxury of such a handler because I am hosted somewhere where I only have CGI access.

Replies are listed 'Best First'.
Re: (webappdev) database centralization ease + client-side file caching?
by IndyZ (Friar) on Jul 15, 2001 at 23:43 UTC
    You could try a URL of that form: the webserver will run your CGI script, and you can parse bobo.jpg out of the REQUEST_URI environment variable. Use a regexp to grab the username, look it up in the database, send an image/jpeg Content-Type header, and dump the file out.

    Update: I think I should explain this a little better. The web browser will think that the script's path is simply a directory, and that what it is accessing is a file called bobo.jpg. If you return the right headers, the browser will think it is retrieving a static JPEG. Depending on how the browser decides whether or not to reload the picture, that may be all you need. On the other hand, some browsers will request the file's headers in order to check modification time, file size, and whatnot. You need to be prepared to fake those headers if necessary.
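
    The header-faking above can be sketched in core Perl. This is only a rough sketch: fetch_pic() and pic_mtime() are hypothetical stand-ins for the real database lookups (e.g. DBI queries against the player table), and the 304 revalidation is the simplest possible exact-string check.

```perl
#!/usr/bin/perl
# Sketch of serving a DB-stored image with static-file-style headers.
# fetch_pic() and pic_mtime() are hypothetical stand-ins for real DBI
# lookups against the player table.
use strict;
use warnings;
use POSIX qw(strftime);

# Stub lookups standing in for real database queries.
sub pic_mtime { return 995238000 }                  # fixed epoch for the sketch
sub fetch_pic { return "\xFF\xD8 fake jpeg bytes" }

# Format an epoch time as an HTTP date, as used in Last-Modified.
sub http_date {
    my ($epoch) = @_;
    return strftime("%a, %d %b %Y %H:%M:%S GMT", gmtime($epoch));
}

# Answer one request: emit either the full image with static-file-style
# headers, or a 304 if the browser's copy is still current.
sub serve_pic {
    my ($request_uri, $if_modified_since) = @_;
    my ($user) = $request_uri =~ m{/([^/]+)\.jpg$}
        or return "Status: 404 Not Found\r\n\r\n";

    my $last_modified = http_date(pic_mtime($user));

    # Exact string comparison is the usual simple-minded revalidation.
    if (defined $if_modified_since && $if_modified_since eq $last_modified) {
        return "Status: 304 Not Modified\r\n\r\n";
    }

    my $pic = fetch_pic($user);                     # SELECT pic FROM player ...
    return "Content-Type: image/jpeg\r\n"
         . "Last-Modified: $last_modified\r\n"
         . "Content-Length: " . length($pic) . "\r\n\r\n"
         . $pic;
}
```

    A first request gets the full image plus a Last-Modified header; a repeat request carrying that date back in If-Modified-Since gets a cheap 304 instead of the blob.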


(ar0n) Re: (webappdev) database centralization ease + client-side file caching?
by ar0n (Priest) on Jul 15, 2001 at 23:40 UTC

    You don't need a handler to do what you want. Use the path_info method from CGI.

    Write a CGI script, img (or whatever), whose sole purpose is to feed user images, e.g.:

    #!/usr/bin/perl -w
    use strict;
    use CGI;

    my $q = new CGI;

    # path_info() returns "/bobo/whatever.jpg", so the first field from
    # split is empty -- skip it to get the username.
    my (undef, $user) = split m!/!, $q->path_info();

    # database connection stuff here ...
    # fetch $pic from the database for $user
    my $pic;

    print $q->header("image/jpeg"), $pic;
    ... or something like that. And then have an image tag like <img src="/cgi-bin/img/bobo/whatever.jpg">

    [ ar0n ]

Re: (webappdev) database centralization ease + client-side file caching?
by thpfft (Chaplain) on Jul 16, 2001 at 21:11 UTC

    I like the database + filesystem approach, mostly because it plays to the strengths of the different vehicles. Apache + browser is a very mature system for delivering slowly-changing information efficiently, so I try to use it for that. It isn't optimal for volatile data - just as handmade HTML isn't - but there the database output is ideal, so I write the world's 23,902,371st content management system.

    I think pictures fall into the first category. They don't change much, they're never edited in situ, they're not analysable or indexable and they need no formatting that they don't already have.

    In this context using the filesystem is a natural part of the http sequence: the page generated from the database triggers the subsequent requests for images. To have those requests go back to the database seems unnecessary when you've got such a good image delivery mechanism sitting there already. You don't even need to manage a folder structure: dump everything in one place and use the database metadata to update and delete it.
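
    The "dump everything in one place, let the database hold the metadata" idea can be sketched as below. The directory and naming scheme are assumptions for illustration, not anything from the thread: the point is just that creation and deletion each touch the row and the file from one piece of code, so you keep the centralized bookkeeping while Apache serves the images statically.

```perl
#!/usr/bin/perl
# Sketch of the hybrid: the player row stays in the database, but the
# picture is written to one flat, web-served directory at save time, so
# Apache serves every image request statically.  $IMG_DIR and the
# "/pics/<username>.jpg" scheme are assumptions for illustration.
use strict;
use warnings;
use File::Spec;

my $IMG_DIR = "/var/www/htdocs/pics";       # served by Apache as /pics/

# Write the blob to disk; return the URL path to store in the player row.
sub save_player_pic {
    my ($username, $jpeg_bytes) = @_;
    my $file = File::Spec->catfile($IMG_DIR, "$username.jpg");
    open my $fh, '>', $file or die "can't write $file: $!";
    binmode $fh;
    print {$fh} $jpeg_bytes;
    close $fh or die "close failed: $!";
    return "/pics/$username.jpg";
}

# Deleting a player then cleans up both the row and the file in one place.
sub delete_player_pic {
    my ($username) = @_;
    unlink File::Spec->catfile($IMG_DIR, "$username.jpg");
}
```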

    An analogy: consider what happens if you succumb to a broadband evangelist and add video clips to the player profiles. As well as delivering proper image headers, your database system will have to be able to hold and stream the bulky new clips. It's possible, but rather redundant when there are perfectly good mechanisms for streaming it already and they only need to be told where it is.

    I think this is a case where two heads are actually simpler than one, given their different strengths. The system you describe is structurally more elegant but practically less efficient.
