You mean accessing the database for every lookup will be more efficient?
I am expecting an inflow of data like this:
100 files/min
2600 records in each file, and this Perl script has to run every 10-15 minutes.
I have 64 GB of memory.
I mean that for the amount of data and the complexity of queries that you're talking about, using a database will probably be more efficient than trying to manipulate your own in-memory data structures.
Will it be fast enough on your system? I have no way of knowing. I suggest trying it and seeing what happens. If there's a problem then come back here for performance tuning suggestions.
:(
My records arrive on the Unix file system only, and after this lookup process I have to hand them off to another ksh script.
So how can I use a database in between?
Are you pointing to SQL*Loader and spooling back out?
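That stage-query-spool pattern could be sketched roughly like this. This is only a minimal sketch under assumptions: it uses the SQLite command-line shell instead of Oracle SQL*Loader, and the file names, schema, and join condition are invented for illustration. The shape is the same, though: bulk-load the incoming file into a table, do the lookup as a SQL join, and spool the result back to a flat file for the downstream ksh script.

```shell
# Hypothetical demo: stage incoming records in a database, look them up
# with SQL, then spool matches back to a flat file for the ksh script.
# SQLite, file names, and columns are all assumptions, not from the thread.
mkdir -p /tmp/lookup_demo && cd /tmp/lookup_demo

# Stand-in for one incoming file (id,value records)
printf 'id,value\n1,foo\n2,bar\n3,baz\n' > incoming.csv
# Stand-in for the reference data the records are looked up against
printf 'id,status\n1,OK\n3,OK\n' > reference.csv

sqlite3 lookup.db <<'SQL'
.mode csv
DROP TABLE IF EXISTS incoming;
DROP TABLE IF EXISTS reference;
.import incoming.csv incoming
.import reference.csv reference
CREATE INDEX idx_ref_id ON reference(id);
-- spool the joined result back out as CSV for the downstream script
.headers off
.once matched.csv
SELECT i.id, i.value, r.status
FROM incoming i JOIN reference r ON r.id = i.id;
SQL

cat matched.csv
```

With the real volumes (100 files/min, 2600 records each), you would batch the files loaded per run and keep the reference table indexed, rather than reloading it every cycle.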