Re^3: Matching alphabetic diacritics with Perl and Postgresql

by chacham (Prior)
on Jun 03, 2017 at 20:03 UTC ( #1192080=note )


in reply to Re^2: Matching alphabetic diacritics with Perl and Postgresql
in thread Matching alphabetic diacritics with Perl and Postgresql

I see you have thought this one out. I would still like to suggest that you add a staging table, copy the data into it before processing, and then do all your processing in SQL. That should avoid the issues, as the database itself will be doing the comparisons and, with WHERE NOT EXISTS, doing them at the same time as the insertion.
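
For illustration, here is a minimal sketch of what that could look like; the table and column names are invented for the example and would need to match the real schema:

    -- Hypothetical staging and target tables. COPY loads the raw file
    -- straight into staging; one statement then does the comparison and
    -- the insertion together, entirely inside the database.
    COPY staging (company_name, reg_country, reg_county, post_town, postcode)
        FROM '/tmp/companies.csv' WITH (FORMAT csv, HEADER true);

    INSERT INTO company (company_name, reg_country, reg_county, post_town, postcode)
    SELECT s.company_name, s.reg_country, s.reg_county, s.post_town, s.postcode
    FROM   staging s
    WHERE  NOT EXISTS (
               SELECT 1
               FROM   company c
               WHERE  c.company_name = s.company_name
           );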

If there are multiple steps, a stored procedure can be used, or a series of stored procedures called from another one, should you want separate entry points.
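
In Postgres those would be plpgsql (or plain SQL) functions rather than procedures proper; a rough sketch, with the step bodies only as placeholders and all names invented:

    -- Each step is its own function, so it can also be called on its own.
    CREATE FUNCTION clean_countries() RETURNS void LANGUAGE sql AS $$
        UPDATE staging SET reg_country = trim(reg_country);
    $$;

    CREATE FUNCTION load_companies() RETURNS void LANGUAGE sql AS $$
        INSERT INTO company (company_name)
        SELECT s.company_name
        FROM   staging s
        WHERE  NOT EXISTS (SELECT 1 FROM company c
                           WHERE  c.company_name = s.company_name);
    $$;

    -- One wrapper function runs the whole load in order.
    CREATE FUNCTION run_load() RETURNS void LANGUAGE plpgsql AS $$
    BEGIN
        PERFORM clean_countries();
        PERFORM load_companies();
    END;
    $$;

    SELECT run_load();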


Replies are listed 'Best First'.
Re^4: Matching alphabetic diacritics with Perl and Postgresql
by (anonymized user) (Curate) on Jun 03, 2017 at 23:56 UTC
    Staging table means all my efforts to clean the data have to be migrated from Perl to SQL. This is particularly awkward for location. To take an example, the registered office at Companies House might be as daft as:

    Country of Origin: Channel Islands

    RegAddressCountry: GY1 4PW

    RegAddressCounty: St. Peterport

    PostTown:

    Address Line 1: MR FRED BLOGGS

    Address Line 2: The Company Name

    CompanyName: A variation of the company name

    Care of:

    Postcode:

    So the algorithm to clean the data is broadly: delete or translate some countries (Channel Islands is deleted because it isn't a country, but postcode GY1 implies the country is Guernsey; for my model the county is also Guernsey, as opposed to Alderney). If more than one level is on the same line, they have to be split. If data is at too high a level, e.g. the registered country was mistakenly put in Country of Origin and everything moved up one from where it should be, defaults have to be pushed on the stack, moving the location lines down (largest location items being higher in the list, the way I am doing it).

    So I have hashes of common translations, e.g. Curacao arrives with a cedilla whereas my country table has no cedillas, and BVI gets unabbreviated to British Virgin Islands. The number of ways Ireland gets spelled is particularly astounding: ROI, Rep of., etc. Just about every "Republic of" something needs different translations; there are even several ways for China. Misspellings like "United Kinmod", "Scotnadl", "Isalnds", and "Isles" -> "Islands" all have to be corrected. There's a method that splits location lines where pattern matching is needed, a list of deletable countries (including Channel Islands and British West Indies) to force the next level to be promoted to a country, a list of pushable items for where the country is already missing (like Leicester being entered as a country in the register), and the facility to assume the registered country is the same as the country of origin in some cases. I could go on forever, but it's about 200 lines of OO Perl and would be about 100 pages of SQL if I load the data in before processing.

    Update: Yes, I could call the Perl from Postgres after staging; at least, I can do that in my dev environment, but I can't expect that to hold true when it gets ported to the hosting environment.

    One world, one people

      I see. I'll just add my 2 cents for your perusal.

      Batch processing with loaded data is almost always better in the database, simply because the database is made for it. It ends up being faster because you are not switching between systems, the database can work on sets (as opposed to line by line), and character issues are unlikely to come up. So, before you even start, you're already ahead.

      I'm relatively sure you can do it in a few statements too, if done correctly. I have done things like this before, and on one job I created a batch table to record (and time) the different steps, allowing multiple batches to be run at the same time (because the steps do not have anything to do with the batches, per se).
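
      As a rough idea of the kind of batch table meant here (the layout and names are invented for the example):

          -- One row per step per batch; the timestamps record when each step
          -- ran, and separate batch ids let several batches run side by side.
          CREATE TABLE batch_step (
              batch_id    integer     NOT NULL,
              step_name   text        NOT NULL,
              started_at  timestamptz NOT NULL DEFAULT now(),
              finished_at timestamptz,
              rows_done   integer,
              PRIMARY KEY (batch_id, step_name)
          );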

      If there are common misspellings, I would likely use a translation table to change misspellings to the used spelling, or to move a country to the next step. A small query would do that (perhaps as part of a larger query). Anything not found would be flagged for human perusal, which would happen less and less with each run. Having the translation table in the database is likely better for managing it anyway, and you can even keep fun numbers like how often each misspelling shows up.
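
      Something along these lines (names invented) is what is meant by a translation table plus a flagging query:

          -- Map each spelling seen in the data to the spelling the country
          -- table uses; hits can be bumped separately for the fun numbers.
          CREATE TABLE country_translation (
              found_as text PRIMARY KEY,   -- e.g. 'ROI', 'BVI'
              use_as   text NOT NULL,      -- e.g. 'Ireland', 'British Virgin Islands'
              hits     bigint NOT NULL DEFAULT 0
          );

          -- Rewrite the known variants in the staging table in one pass.
          UPDATE staging s
          SET    reg_country = t.use_as
          FROM   country_translation t
          WHERE  s.reg_country = t.found_as;

          -- Anything still unmatched is flagged for a human to look at.
          SELECT DISTINCT s.reg_country
          FROM   staging s
          LEFT JOIN country c ON c.name = s.reg_country
          WHERE  c.name IS NULL;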

      To remove spaces and newlines, Postgres has replace functions, including regex ones. That's one very simple query.
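
      For example (column name invented), collapsing runs of whitespace, newlines included, to a single space across the whole staging table:

          UPDATE staging
          SET    company_name = trim(regexp_replace(company_name, '\s+', ' ', 'g'));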

      In my experience, the SQL code is smaller, not larger, if written correctly. However, if you already have it written in Perl, there's no reason to reinvent the wheel, unless a large problem comes up, or you want/need processing to take (what might be) a fraction of the time.

        Also, a thought:

        If throughput efficiency is a trade-off you can afford, using an INSERT into a staging table and then pulling it back out again might allow you to use Postgres as a 100% Postgres-compatible data filter, if it changes the data upon storage.
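
        A sketch of that round trip, with invented names; RETURNING hands back the row exactly as Postgres stored it, so any translation applied by the column's type or by a trigger shows up immediately:

            INSERT INTO staging (reg_country)
            VALUES ('  Guernsey ')
            RETURNING reg_country;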

        Everything in engineering is a trade-off; the idea of introducing an execution inefficiency is naturally anathema to an engineer's way of looking at a problem. But if this approach causes a data translation to occur, putting the onus on Postgres to perform the translation for you does have the merit of adjusting perfectly each and every time Postgres changes the way it handles the data.

        Effectiveness is often more important than efficiency; if no more efficient solution presents itself and this does the job, it could also serve as a work-around until a more graceful solution comes along.

        I'd document the heck out of it, though. Have mercy on the poor next soul who has to touch the code.

        Although this is an initial-load scenario, data validation will eventually need to be performed from the browser. It might be advisable to move such code upward from Perl in the direction of JavaScript at some point, but moving it downward into the database is going in the wrong direction, creating not only unnecessary internet communication but unnecessary communication between the application and the DBMS. Ideally, data should be cleaned before it gets to the database. One popular late-90s solution in market data warehousing was to use a special language called Formula Engine, backed by a C++ library, to clean data before it was inserted into a hybrid database (SQL/TSDBMS). But in today's paradigm, the functionality is destined eventually to move to the GUI, so that the data is clean before it even hits the server.

        One world, one people
