If you have a question on how to do something in Perl, or you need a Perl solution to an actual real-life problem, or you're unsure why something you've tried just isn't working... then this section is the place to ask.
However, you might consider asking in the chatterbox first (if you're a
registered user). The response time tends to be quicker, and if it turns
out that the problem/solutions are too much for the cb to handle, the
kind monks will be sure to direct you here.
I'm working on a module for parsing and extracting data out of ELF (Executable and Linkable Format) files which I intend to put on CPAN shortly. An ELF file contains a couple of important tables whose entries describe parts of the file (segments and sections). I have an object which holds on to objects for each of the tables. The tables aren't very interesting, but the table entries are so I want to provide access to the individual entries. Current options look like this:
use warnings;
use strict;
use ELF::Reader;

my $elfPath = $ARGV[0];
my $elfFile = ELF::Reader->new(filePath => $elfPath);

# Using an index on the segments array reference
my $segments = $elfFile->GetSegments();
for my $segIndex (0 .. $elfFile->SegmentCount() - 1) {
    next if !$segments->[$segIndex]->FileSize();
    print $segments->[$segIndex]->Describe(head => 16, tail => 16, width => 32);
}
print "\n";

# Using an iterator
my $nextSeg = $elfFile->GetSegmentIter();
while (my $segment = $nextSeg->()) {
    next if !$segment->FileSize();
    print $segment->Describe(head => 16, tail => 16, width => 32);
}
The question is: should I provide both access techniques, just one (and if so, which?), or something else? There will be a number of different classes that provide similar accessors.
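One observation worth sketching: both styles can be layered over the same internal array, so offering both costs little, and a list-returning accessor is a cheap third option that lets callers use grep/map directly. Demo::Reader below and its segment strings are made up for illustration; only the method names echo the question's API.

```perl
use strict;
use warnings;

# Demo::Reader is a stand-in for ELF::Reader; segments here are plain
# strings instead of segment objects.
package Demo::Reader;

sub new {
    my ($class, $segments) = @_;
    return bless { segments => $segments }, $class;
}

# List-returning accessor: callers can use grep/map/for directly,
# which often removes the need for explicit index loops.
sub Segments { return @{ $_[0]{segments} } }

sub SegmentCount { return scalar @{ $_[0]{segments} } }

# Closure-based iterator: returns undef when exhausted.
sub GetSegmentIter {
    my ($self) = @_;
    my $i = 0;
    return sub {
        return if $i >= @{ $self->{segments} };
        return $self->{segments}[ $i++ ];
    };
}

package main;

my $reader = Demo::Reader->new([qw(seg0 seg1 seg2)]);

my @all  = $reader->Segments;          # list style
my $next = $reader->GetSegmentIter;    # iterator style
my @iterated;
while ( my $seg = $next->() ) {
    push @iterated, $seg;
}

print "@all\n";        # seg0 seg1 seg2
print "@iterated\n";   # seg0 seg1 seg2
```

With the list-returning accessor, the caller's filtering loop collapses to `for my $segment ( grep { $_->FileSize } $elfFile->Segments ) { ... }`.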
Optimising for fewest key strokes only makes sense transmitting to Pluto or beyond
Greetings. I have about 250K records in a Postgres db I need to process. I'm wondering if there's a way to read these in batches of 1000 records or so, to cut down on db transaction overhead. The DBI docs mention the methods execute_array and execute_for_fetch, which do something in batches, but I don't completely follow what they're talking about. Those methods have something to do with tuples. I just want to run a query, get a batch of records, process those, get another batch, and repeat. Super Search on "postgres batch read" returns nothing. Any pointers appreciated.
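A note on the methods mentioned: execute_array and execute_for_fetch are for the *write* direction (running one prepared statement many times, e.g. bulk inserts), not for batched reads. For reading in batches, the usual DBI tool is fetchall_arrayref with its $max_rows argument. A sketch, with DSN, credentials, table and column names all made up:

```perl
use strict;
use warnings;
use DBI;

# Placeholder connection details -- substitute your own.
my $dbh = DBI->connect('dbi:Pg:dbname=mydb', 'user', 'password',
                       { RaiseError => 1 });

my $sth = $dbh->prepare('SELECT id, payload FROM records');
$sth->execute;

# Drain the statement handle 1000 rows at a time. Note: DBD::Pg still
# transfers the whole result set to the client on execute, so this
# bounds the Perl-side data structures, not the network fetch.
while ( my $batch = $sth->fetchall_arrayref(undef, 1000) ) {
    last unless @$batch;    # may return an empty arrayref at the end
    for my $row (@$batch) {
        my ($id, $payload) = @$row;
        # ... process one record here ...
    }
}

$dbh->disconnect;
```

If client-side memory for the whole result set is the actual concern, a server-side cursor (`DECLARE c CURSOR FOR ...` followed by repeated `FETCH 1000 FROM c`) keeps the batching on the Postgres side as well.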
Now I'm wondering where the output format is defined.
As I would always require all timestamps to be in RFC 3339 format, I would have to change each and every SELECT statement to use TO_CHAR with the proper format.
There must be a more practical way of defining the format for all timestamps in a central place, but I have no clue where.
The display is completely controlled by the SQL client, not by the server.
The SQL client here should be DBD::Pg.
But maybe that's the wrong place to search and the conversion of a timestamp to a string representation is done at a later stage e.g. when doing the encode_json?
My current attempts at finding the spot where to change the format were fruitless :(
I hope someone here can give me more ideas where to search.
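One pragmatic central place is the client side, after fetch and before encode_json: with DateStyle 'ISO', the server renders timestamptz values as "YYYY-MM-DD HH:MM:SS+TZ", and as far as I know neither the server nor DBD::Pg offers a global "render as RFC 3339" switch. A small normalizer (a sketch covering the common ISO output shapes, not every Postgres variant) can then be applied to the timestamp columns in one spot:

```perl
use strict;
use warnings;

# Convert Postgres ISO timestamp text to RFC 3339:
#   "2024-05-01 12:34:56+02" -> "2024-05-01T12:34:56+02:00"
# Offsets that already carry minutes ("+02:30") are left untouched.
sub pg_to_rfc3339 {
    my ($ts) = @_;
    return undef unless defined $ts;
    $ts =~ s/ /T/;                  # date/time separator
    $ts =~ s/([+-]\d\d)$/$1:00/;    # bare hour offset -> hour:minute
    return $ts;
}

print pg_to_rfc3339('2024-05-01 12:34:56+02'), "\n";
# 2024-05-01T12:34:56+02:00
```

Running it once over the fetched rows (or inside a wrapper around fetchrow_hashref) keeps every SELECT statement untouched.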
Is there a way to browse the CPAN namespace hierarchy? I'm trying to figure out the best place to fit a new module for parsing and dumping selected contents of ELF files. I've scanned through the PAUSE docs that looked most likely to lead the way, but didn't see anything.
Hmm, while preparing this question I found https://www.cpan.org/modules/01modules.index.html which is at least part of the answer, but somewhat unwieldy. Maybe I'll just have to put a Tk wrapper around that data unless someone has a suitably lazier option for me?
Update: ELF::Extract::Sections exists already and has a name rather like the one I would want to use. However, it is dependency-heavy and has a pretty bad reputation on CPAN Testers, especially for Windows builds. I did try installing ELF::Extract::Sections, but after a rather long time it failed, as might have been guessed from the testers' reports. I'm open to suggestions for an alternate name in the vicinity of the ELF::Extract namespace.
While I'm still thinking over the excellent suggestions I received for the name for the module here is the code. Please help with suggestions or critiques.
The module extracts the memo text from the .lqm files that QuickMemo+ exports. An .lqm file is actually a zip archive containing a JSON file that holds the memo text, so it was an easy module to write. Each memo is in a separate archive, so extracting each memo by hand is very tedious.
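The zip-containing-JSON idea can be sketched with core modules only. The member name ('memoinfo.jlqm') and the key path ($data->{Memo}{DescRaw}) below are placeholders, since the real names depend on what QuickMemo+ actually writes inside the archive:

```perl
use strict;
use warnings;
use IO::Uncompress::Unzip qw(unzip $UnzipError);   # core module
use JSON::PP;                                      # core module

# Pull one named member out of the .lqm zip archive, decode it as
# JSON, and return the memo text found at an assumed key path.
sub extract_memo_text {
    my ($lqm_path) = @_;
    my $json_bytes;
    unzip $lqm_path => \$json_bytes, Name => 'memoinfo.jlqm'
        or die "unzip failed: $UnzipError\n";
    my $data = JSON::PP->new->utf8->decode($json_bytes);
    return $data->{Memo}{DescRaw};    # placeholder key path
}
```

If a non-core dependency is acceptable, Archive::Zip offers a richer interface (listing members, for instance), which helps when the member name varies between exports.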
Update: I neglected to add my tests, as Discipulus wisely pointed out. Now that I have prove -l working with my tests, I have been enjoying refactoring and letting the tests help me find the errors.
An internal system for business displays a series of 'cards' which show key data across various parts of the business all in one place. These are quite diverse and cover things like property occupancy levels, future pricing data and call centre call volume.
Currently this is implemented with a bespoke kind of template. There is an HTML file with placeholders for all the data items and a Perl script which gathers all the data from various systems across the business, reads the file, substitutes that data into the placeholders before displaying the output. It can be viewed at any time as a webpage but also runs twice per week from CRON and sends an email to key people. The script knows the difference by checking if $ENV{'GATEWAY_INTERFACE'} is defined.
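The web-vs-email switch described above boils down to one environment check: a web server sets GATEWAY_INTERFACE for a CGI script, while cron does not. A minimal sketch of that dispatch (the sub name is hypothetical):

```perl
use strict;
use warnings;

# Decide how the script was invoked: 'web' when running under a CGI
# server, 'email' when running from cron.
sub run_mode {
    return defined $ENV{GATEWAY_INTERFACE} ? 'web' : 'email';
}

{
    local $ENV{GATEWAY_INTERFACE} = 'CGI/1.1';
    print run_mode(), "\n";    # web
}

delete $ENV{GATEWAY_INTERFACE};
print run_mode(), "\n";        # email
```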
This system already has 106 placeholders and needs extra information added to it, so I've decided to take the opportunity to refactor it to use Template, thanks to the good influence of the Monastery! As part of the refactoring I want to add the facility for different users to view the system with the cards in an order that suits them, and perhaps even to hide the ones that do not interest them. We operate an open information policy, so everyone in the business is permitted to see everything and there are no permission issues; but not everything is actually useful to everyone, so it would be good if users could put the cards they use most at the top and the ones they seldom use further down.
In trying to work out how to implement this I have come up with a solution but it seems there must be a more elegant solution.
I've considered having a database table consisting of 4 fields:
- User_idUser - Foreign Key to primary key of User table
- Card_idCard - Foreign Key to primary of Card table
- metric - order to display cards for user
- visible - boolean - show card to user?
with User_idUser and Card_idCard being the composite primary key.
Then have a Perl script (of course!) that reads the cards from the database in the order given by metric. For each card it calls a subroutine that assembles the appropriate data for that card and uses Template to render the template file for that card. There will need to be 12 template files, plus one each for the header and footer. Something like this (untested):
my $cards = $dbh->prepare("SELECT Card_idCard FROM CardList WHERE User_idUser = ? AND visible = 1 ORDER BY metric");
$cards->execute($user_number);
while (my ($card) = $cards->fetchrow_array) {
    card_call_center() if $card == 1;
    card_price_data()  if $card == 2;
    card_occupancy()   if $card == 3;
    # etc etc
}
# ...
sub card_call_center {
    # Collect data
    # from systems
    my $vars = {
        foo  => $bar,
        some => $data,
    };
    $template->process('card_call_center.tt', $vars);
}
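One way to make the chain of `if $card == N` tests more elegant is a dispatch table, which keeps the card-number-to-handler mapping in one place. A self-contained sketch, with stub handlers standing in for the real data-gathering subs:

```perl
use strict;
use warnings;

# Stub handlers; the real ones would gather data and call
# $template->process as in the question.
sub card_call_center { return "call center card" }
sub card_price_data  { return "price data card" }
sub card_occupancy   { return "occupancy card" }

# The dispatch table: card number -> code reference.
my %card_handler = (
    1 => \&card_call_center,
    2 => \&card_price_data,
    3 => \&card_occupancy,
    # etc.
);

# Stand-in for the database loop, in the user's preferred order:
for my $card (3, 1, 2) {
    my $handler = $card_handler{$card}
        or warn("no handler for card $card\n"), next;
    print $handler->(), "\n";
}
```

Adding a new card then means adding one template file, one sub, and one line in the table, with no changes to the loop.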
I last did a CGI website something like 25 years ago -- AJAX was _just_ becoming popular! At the time I just used CGI.pm. I would now like to do another -- it is a simple site served via CGI. I don't need AJAX or a DB interface or the like. Is there a better/more modern package for that kind of thing than plain CGI.pm? {NB: I remember almost nothing about using CGI.pm -- it's been a long time :) -- so I'd have to start by studying the POD docs anyway; if there's something newer/easier/better I'd be happy to try it.}
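For context: CGI.pm was removed from the Perl core in 5.22, and the commonly suggested modern options are frameworks such as Mojolicious and Dancer2, which can still be deployed as plain CGI scripts. A minimal Mojolicious::Lite app, assuming the Mojolicious distribution is installed from CPAN:

```perl
#!/usr/bin/env perl
# Minimal Mojolicious::Lite app. Runs standalone via its built-in
# server ("morbo script.pl" during development), under PSGI, or as a
# plain CGI script.
use Mojolicious::Lite -signatures;

get '/' => sub ($c) {
    $c->render(text => 'Hello, modern Perl web!');
};

app->start;
```

The route/handler style replaces the print-your-own-headers flow of CGI.pm, and the framework handles parameter parsing, escaping, and templates.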
The purpose of the module is to help me extract the large number of memos I have written on my LG smartphone in QuickMemo+. The module extracts the memo text from the lqm files that QuickMemo+ exports. I couldn't find any software that could open those files other than 7zip.
The lqm file is actually a zip file that contains a JSON file that contains the memo text so it was an easy module to write. Each memo is in a separate archive so it is very tedious to extract each memo by hand.
My questions are:
Is this name acceptable for CPAN? Some other options I considered are:
LG::QuickMemo_Plus::Memo::Extract
LG::QuickMemo+::Memo::Extract
QuickMemo_Plus::Memo::Extract
QuickMemo_Plus::Extract::Memo
QuickMemo_Plus::Extract
Is a module like this needed on CPAN?
I'm thinking it might be better to just make a small GUI and release the binary on GitHub. My plan is to do both.
But I wonder how many Perl programmers would be interested in this module.
Larry Wall says "We will encourage you to develop the three great virtues of a programmer: laziness, impatience, and hubris."
However, the PerlMonks listing of users by rank at Saints in our Book says "there are a lot of things monks are supposed to be, but lazy is not one of them!"
Is this a test? Can I get some guidance on how I should act?