Re^2: Using DBIx::Class::Schema::Loader to determine constraints

by jds17 (Pilgrim)
on Oct 31, 2012 at 22:18 UTC (#1001751)


in reply to Re: Using DBIx::Class::Schema::Loader to determine constraints
in thread Using DBIx::Class::Schema::Loader to determine constraints

This is a nice alternative, pokki; I really don't need an ORM for my current purpose. What I am trying to achieve is to get information about constraints in a uniform way for as many DBMSs as possible, and it has been suggested that I try DBIx::Class::Schema::Loader. As far as I can see, SQL::Translator supports all of the DBMSs I want to support, so I am fine with SQL::Translator, too.

What I could not get from the documentation is the form in which the dump of the database structure, passed as the second parameter to parse(), is expected (for each supported DBMS). Do you know the expected format?


Re^3: Using DBIx::Class::Schema::Loader to determine constraints
by pokki (Scribe) on Oct 31, 2012 at 22:43 UTC
    (snip) It has been suggested that I try DBIx::Class::Schema::Loader. As far as I can see, SQL::Translator supports all of the DBMSs I want to support, so I am fine with SQL::Translator, too.

    That would be because DBICSL uses SQL::Translator behind the scenes, as far as I can tell. Using SQL::Translator directly is just cutting out the middle man.

    What I could not get from the documentation is the form in which the dump of the database structure, passed as the second parameter to parse(), is expected (for each supported DBMS). Do you know the expected format?

    The parse() function expects the raw DDL as a string. In my example, data.sql is just your DDL as is:

    CREATE TABLE a (u integer, PRIMARY KEY (u));
    CREATE TABLE t (i integer, j integer, u integer,
        PRIMARY KEY (i, j),
        CONSTRAINT fkey1 FOREIGN KEY (u) REFERENCES a (u));
    CREATE TABLE s (j integer, r1 integer, r2 integer,
        CONSTRAINT fkey FOREIGN KEY (r1, r2) REFERENCES t (i, j));
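
    For instance, a minimal sketch of handing that DDL string to the SQLite parser's parse() and then walking the constraints it found might look like this (the constraint accessors are from SQL::Translator::Schema::Constraint as I understand the docs; adjust to taste):

```perl
use strict;
use warnings;
use SQL::Translator;
use SQL::Translator::Parser::SQLite;

# The DDL is passed to parse() as a plain string (the second parameter
# the original question asked about).
my $ddl = <<'SQL';
CREATE TABLE a (u integer, PRIMARY KEY (u));
CREATE TABLE t (i integer, j integer, u integer,
    PRIMARY KEY (i, j),
    CONSTRAINT fkey1 FOREIGN KEY (u) REFERENCES a (u));
SQL

my $tr = SQL::Translator->new;
SQL::Translator::Parser::SQLite::parse( $tr, $ddl )
    or die $tr->error;

# Walk the parsed schema and print every constraint per table.
for my $table ( $tr->schema->get_tables ) {
    for my $c ( $table->get_constraints ) {
        printf "%s: %s (%s)\n",
            $table->name, $c->type, join( ',', $c->fields );
    }
}
```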

    The various SQL::Translator::Parser::$FOOPARSER classes document individually what grammar they support. Sometimes a few bits are missing, but the author and the community are rather responsive and it's not too hard to patch it yourself if you know the basics of writing a grammar. "Normal" MySQL syntax is a problem, though. The ANSI compatibility mode of mysqldump seems to produce parser-friendlier output, but even then I usually have to tweak it until the parser accepts it.

    At $work I maintain an SQLite schema as the reference DDL for a database ("if SQLite supports it, everything supports it -- and if they don't, they have no excuse"). DBICSL (through SQL::Translator) is pretty good at turning it into a DBIC schema, which then produces MySQL-compatible DDL for deployment (and diff files for upgrading). It even knows about a few things not encoded in the grammar itself; e.g. old versions of MySQL don't support VARCHAR columns with a size larger than 255, so VARCHAR(500) in the DDL would eventually turn into a TEXT column. So far everything I've thrown at it (constraints, keys, cascade actions) has been supported, though cascade action support is recent. I've only ever had trouble with triggers, and had to work around that by providing custom code for each supported DBMS.
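
    That SQLite-to-MySQL translation step can also be sketched with SQL::Translator directly; the parser/producer names below are the real module names, but the file path is a made-up placeholder:

```perl
use strict;
use warnings;
use SQL::Translator;

# Translate reference SQLite DDL into MySQL-compatible DDL.
# 'schema.sqlite.sql' is a hypothetical path to the reference schema.
my $tr = SQL::Translator->new(
    parser   => 'SQLite',
    producer => 'MySQL',
);

my $mysql_ddl = $tr->translate('schema.sqlite.sql')
    or die $tr->error;

print $mysql_ddl;
```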

      Thank you for the clarification! SQL::Translator looks really impressive, but I fear that it is nontrivial to create the dumps for each DBMS (db user rights?), and that running SQL::Translator on those dumps may not be stable enough for my purpose.

      Recently, I put a module on CPAN (DBIx::Table::TestDataGenerator). Currently, some methods for querying database metadata related to constraints have been abstracted into a role, and for each DBMS I want to support, the plan was to write a class consuming that role. This gives me good control over how the metadata is queried, and I can even handle relevant differences between DBMS versions.

      I would not want the code to crash because a dump cannot be created or read. On the other hand, I don't want to reinvent the wheel, and I acknowledge that handling each DBMS separately does not look too elegant (although one learns a lot while doing it); maybe you can still convince me that using SQL::Translator is better.

        From what I can see, you're currently using the same kind of tricks (table_info) as the SQL::Translator::Parser::DBI::$Driver modules. These have also worked for me in the past. Basically instead of passing them DDL in a big string, you give them a DSN, username and password (an open database handle works too). Then SQL::Translator::Parser::DBI selects the appropriate $Driver, which goes and fetches the metadata directly from the database.
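
        (For reference, the table_info/foreign_key_info route looks roughly like this with plain DBI; the SQLite DSN is just an example, the result column names follow the ODBC convention, and not every driver implements foreign_key_info:)

```perl
use strict;
use warnings;
use DBI;

# Example DSN; any DBD should do. Connection details are made up.
my $dbh = DBI->connect( 'dbi:SQLite:dbname=test.db', '', '',
    { RaiseError => 1 } );

# List the tables visible to this connection.
my $tables = $dbh->table_info( undef, undef, '%', 'TABLE' );
while ( my $t = $tables->fetchrow_hashref ) {
    print "table: $t->{TABLE_NAME}\n";

    # Foreign keys pointing out of this table. Column naming
    # (PKTABLE_NAME vs UK_TABLE_NAME, etc.) varies by driver.
    my $fks = $dbh->foreign_key_info( undef, undef, undef,
        undef, undef, $t->{TABLE_NAME} )
        or next;
    while ( my $fk = $fks->fetchrow_hashref ) {
        print "  fk: $fk->{FKCOLUMN_NAME} -> ",
            "$fk->{PKTABLE_NAME}.$fk->{PKCOLUMN_NAME}\n";
    }
}
```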

        If you have enough permissions to use table_info, you have enough permissions to use the DBI parsers.

        The doc for the DBI parser says that Oracle is not supported and Pg support is experimental, but if you look at the module list for the distribution there *is* a DBI::Oracle class, so the doc is probably out of date. I'd say, install SQL::Translator, play around on real databases, see if it fits your needs.
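
        Hooking SQL::Translator up to a live database is mostly a matter of parser_args; a sketch, with the DSN and credentials as placeholders (I haven't exercised every $Driver subclass):

```perl
use strict;
use warnings;
use SQL::Translator;

# Let SQL::Translator::Parser::DBI pick the appropriate $Driver
# subclass from the DSN and fetch the metadata from the database.
my $tr = SQL::Translator->new(
    parser      => 'DBI',
    parser_args => {
        dsn         => 'dbi:SQLite:dbname=test.db',   # placeholder DSN
        db_user     => '',
        db_password => '',
    },
    producer    => 'Dumper',   # any producer will do
);

my $out = $tr->translate or die $tr->error;
print $out;
```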
