http://www.perlmonks.org?node_id=1001747


in reply to Using DBIx::Class::Schema::Loader to determine constraints

If you don't need a full ORM (and it looks like that's not the point for you) you probably just want SQL::Translator instead:

#!perl
use strict;
use warnings;
use 5.010;
use SQL::Translator;
use Data::Dumper;
use SQL::Translator::Parser::SQLite;

my $data = do {
    local $/;
    open my $fh, '<', 'data.sql' or die "Can't read file: $!\n";
    <$fh>
};

my $sqlt = SQL::Translator->new;
SQL::Translator::Parser::SQLite::parse($sqlt, $data);
my $schema = $sqlt->schema;

print Dumper([ map { $_->name } $schema->get_table('s')->get_constraints ]);

Gives me:

$VAR1 = [ 'fkey' ];

See SQL::Translator::Schema::Constraint for the other methods you can call on the Constraint object. It's a bit convoluted to use like this; normally you'd use SQL::Translator to, as the name implies, translate your DDL from one format (e.g. SQLite-style SQL) into another (e.g. MySQL).
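A minimal sketch of that "normal" use, reusing the same data.sql as above (the MySQL output dialect is just an illustration), could look like this:

#!perl
use strict;
use warnings;
use SQL::Translator;

# Parse the SQLite DDL and re-emit it as MySQL DDL.
my $translator = SQL::Translator->new(
    parser   => 'SQLite',
    producer => 'MySQL',
);

my $mysql_ddl = $translator->translate('data.sql')
    or die $translator->error;

print $mysql_ddl;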

Re^2: Using DBIx::Class::Schema::Loader to determine constraints
by jds17 (Pilgrim) on Oct 31, 2012 at 22:18 UTC

    This is a nice alternative, pokki; I really don't need an ORM for my current purpose. What I am trying to achieve is getting information about constraints in a uniform way for as many DBMSs as possible, and it has been suggested that I try DBIx::Class::Schema::Loader. As far as I can see, SQL::Translator supports all of the DBMSs I wanted to support, so I am fine with SQL::Translator, too.

    What I could not get from the documentation is the form in which the dump of the database structure passed as the second parameter to parse() is expected (for each supported DBMS). Do you know the expected format?

      (snip) It has been suggested that I try DBIx::Class::Schema::Loader. As far as I can see, SQL::Translator supports all of the DBMSs I wanted to support, so I am fine with SQL::Translator, too.

      That would be because DBICSL uses SQL::Translator behind the scenes, as far as I can tell. Using SQL::Translator directly is just cutting out the middle man.

      What I could not get from the documentation is the form in which the dump of the database structure passed as the second parameter to parse() is expected (for each supported DBMS). Do you know the expected format?

      The parse() function expects the raw DDL as a string. In my example, data.sql is just your DDL as is:

      CREATE TABLE a (u integer, PRIMARY KEY (u));
      CREATE TABLE t (i integer, j integer, u integer, Primary key (i,j),
          constraint fkey1 foreign key (u) references a (u));
      CREATE TABLE s (j integer, r1 integer, r2 integer,
          constraint fkey foreign key (r1,r2) references t (i,j));

      The various SQL::Translator::Parser::$FOOPARSER classes document individually what grammar they support. Sometimes a few bits are missing, but the author and the community are rather responsive and it's not too hard to patch it yourself if you know the basics of writing a grammar. "Normal" MySQL syntax is a problem, though. The ANSI compatibility mode of mysqldump seems to produce parser-friendlier output, but even then I usually have to tweak it until the parser accepts it.

      At $work I maintain an SQLite schema as the reference DDL for a database ("if SQLite supports it, everything supports it -- and if they don't, they have no excuse"). DBICSL (through SQL::Translator) is pretty good at turning it into a DBIC schema, which then produces MySQL-compatible DDL for deployment (and diff files for upgrading). It even knows about a few things not encoded in the grammar itself, e.g. old versions of MySQL don't support VARCHAR columns with a size larger than 255, so VARCHAR(500) in the DDL would eventually turn into a TEXT column. So far everything I've thrown at it (constraints, keys, cascade actions) has been supported (cascade action support is recent, though). I've only ever had trouble with triggers, and had to work around this by providing custom code for each supported DBMS.
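      A minimal sketch of the diff side of that workflow (two hypothetical DDL files, schema_v1.sql and schema_v2.sql, parsed the same way as in my first snippet) could look like this:

      #!perl
      use strict;
      use warnings;
      use SQL::Translator;
      use SQL::Translator::Parser::SQLite;
      use SQL::Translator::Diff;

      # Parse a DDL file into an SQL::Translator::Schema object.
      sub load_schema {
          my ($file) = @_;
          my $ddl = do {
              local $/;
              open my $fh, '<', $file or die "Can't read $file: $!\n";
              <$fh>
          };
          my $sqlt = SQL::Translator->new;
          SQL::Translator::Parser::SQLite::parse($sqlt, $ddl);
          return $sqlt->schema;
      }

      my $old = load_schema('schema_v1.sql');
      my $new = load_schema('schema_v2.sql');

      # Emit ALTER statements (MySQL dialect here) that upgrade the old
      # schema to the new one.
      print SQL::Translator::Diff::schema_diff(
          $old, 'MySQL',
          $new, 'MySQL',
          { ignore_constraint_names => 1 },
      );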

        Thank you for the clarification! SQL::Translator looks really impressive, but I fear that it is nontrivial to create the dumps for each DBMS (db user rights?) and that using SQL::Translator on those dumps may not be stable enough for my purpose.

        Recently, I put a module on CPAN (DBIx::Table::TestDataGenerator). Currently, some methods for querying constraint-related database metadata have been abstracted into a role, and the plan was to write a class consuming that role for each DBMS I want to support. This gives me good control over how the metadata is queried, and I can even handle relevant DBMS version differences.
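        A minimal sketch of that setup (all names hypothetical, just to illustrate the role-per-DBMS idea) could look like this:

        package My::Role::ConstraintMetadata;
        use Moose::Role;

        # Every DBMS-specific class has to provide its own way of listing
        # the foreign key constraints of a table.
        requires 'foreign_key_constraints';

        package My::Metadata::SQLite;
        use Moose;

        # SQLite implementation: ask the pragma for the table's foreign keys.
        sub foreign_key_constraints {
            my ($self, $dbh, $table) = @_;
            return @{ $dbh->selectall_arrayref(
                "PRAGMA foreign_key_list($table)", { Slice => {} }
            ) };
        }

        with 'My::Role::ConstraintMetadata';

        1;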

        I would not want the code to crash because a dump cannot be created or read. On the other hand, I don't want to reinvent the wheel, and I acknowledge that handling each DBMS separately does not look too elegant (although one learns a lot while doing it). Maybe you can still convince me that using SQL::Translator is better.