Mon Dec 8 22:00:00 2014

Introductory Waffle

I'm going to follow a Mon-Fri schedule. Trying to do these seven days a week was interfering with my drinking time.

mstpan 6 - Databases

Welcome to Database Haters Anonymous

Note that I'm going to start with some straight DBI suggestions, then move to DBIx::Class-related stuff since, well, that's a sane way to do complicated things with databases.


Obviously if you're talking to databases, you're using DBI (or something has gone horribly wrong, like say you're maintaining the control panel software on a RaQ4 and it uses …).

Here are the only two DBI methods 99% of your code should call:

$dbh->do($sql, {}, @args);
my @array_of_hashrefs = @{$dbh->selectall_arrayref($sql, { Slice => {} }, @args)};

(no, not begin_work, see next section)

Given SQL Server, prefer DBD::ODBC, and doing it 'properly' over FreeTDS. If you insist on DBD::Sybase, expect interesting times.

Given anything, check the DBD docs for the 'enable utf8' flag. Then turn it on.

Given SQLite, remember :memory: exists for testing.
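A quick sketch of both of those DBI calls against an in-memory SQLite handle (assumes DBD::SQLite is installed; the table and data are invented for illustration):

```perl
use strict;
use warnings;
use DBI;

# :memory: gives you a throwaway database that dies with the handle
my $dbh = DBI->connect('dbi:SQLite:dbname=:memory:', '', '',
  { RaiseError => 1, AutoCommit => 1 });

$dbh->do('CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)');
$dbh->do('INSERT INTO users (name) VALUES (?)', {}, $_) for qw(alice bob);

# Slice => {} returns each row as a hashref keyed by column name
my @rows = @{$dbh->selectall_arrayref(
  'SELECT id, name FROM users ORDER BY id', { Slice => {} }
)};

print "$_->{id}: $_->{name}\n" for @rows;
```

Note the placeholder args go last in do(), after the (usually empty) attributes hashref.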

Given mysql, force mysql_auto_reconnect to 0 so it stops doing random insane things depending on the environment, and remember Test::mysqld exists.

Given postgresql, set pg_server_prepare to 0 or be aware you're taking a steaming dump in the query optimiser's cheerios, and remember Test::PostgreSQL exists.
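Both of those flags can go straight into the connect attributes; a sketch (DSNs and credentials are placeholders):

```perl
use strict;
use warnings;
use DBI;

my ($user, $pass) = ('someuser', 'somepass');  # placeholders

# mysql: never let the driver silently reconnect behind your back
my $mysql_dbh = DBI->connect(
  'dbi:mysql:database=myapp', $user, $pass,
  { RaiseError => 1, mysql_auto_reconnect => 0 },
);

# postgresql: skip server-side prepares so the planner sees real values
my $pg_dbh = DBI->connect(
  'dbi:Pg:dbname=myapp', $user, $pass,
  { RaiseError => 1, pg_server_prepare => 0 },
);
```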

Given Oracle, double your hourly rate.


If you're writing an application that might fork, or use threads, or is going to be long running, or all of the above, you wanted DBIx::Connector.

Actually, you probably wanted it anyway.

DBIx::Connector handles keeping a connection open for you, will reconnect on error, and makes sure you get a fresh handle if you're in a new process or thread (and remember that on win32 you get threads from fork() so don't assume you don't need to care about that).

Also, it provides actually sensible transaction management, wherein you can do

$conn->txn(sub {
  my ($dbh) = @_;
  ... # statements against $dbh run inside one transaction
});

and get commit on success, and auto-rollback on exception, which mixes nicely with Try::Tiny for various things.
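Sketched out, with an invented table and a deliberate die purely to show the rollback:

```perl
use strict;
use warnings;
use DBIx::Connector;
use Try::Tiny;

my $conn = DBIx::Connector->new('dbi:SQLite:dbname=:memory:', '', '',
  { RaiseError => 1, AutoCommit => 1 });

$conn->run(sub { $_->do('CREATE TABLE counters (n INTEGER)') });

try {
  $conn->txn(sub {
    my ($dbh) = @_;
    $dbh->do('INSERT INTO counters (n) VALUES (1)');
    die "whoops\n";  # any exception rolls the whole transaction back
  });
} catch {
  warn "transaction failed: $_";
};
```

After the catch block runs, the INSERT is gone - the exception triggered a rollback before Try::Tiny ever saw it.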

The one thing it doesn't have is transaction scope guards, though there's a DBIx::ScopedTransaction module that may or may not work to plug that gap.


If you're using Mojolicious, you should definitely have a look at sri's Mojo::Pg, which provides async access to PostgreSQL via DBD::Pg. There's also a Mojo::mysql that's aiming to keep parity.

I'm keeping an eye on this for ideas to steal into non-Mojo namespaces later.


There are two really important things you need to remember about DBIC - first, it's much more a relational-to-object mapper than an object-to-relational one, hence working pretty well as an SQL metaprogramming system even if you ignore the ORM part. Second, I wrote 0.01 in 2005 and all the interface mistakes are my fault; ribasushi probably ends up cursing my name more often and with more justification than my ex-girlfriends.

You wanted to add more ResultSet methods than you thought you did. Every time a non-DBIC class calls search(), ask yourself if you wanted a ResultSet method. Seriously.
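For instance (the class and the status column are invented for illustration):

```perl
package MyApp::Schema::ResultSet::User;
use strict;
use warnings;
use base 'DBIx::Class::ResultSet';

# callers now say $users_rs->active rather than repeating the condition
sub active {
  my ($self) = @_;
  return $self->search({ status => 'active' });
}

1;
```

Since ResultSet methods chain, $users_rs->active->search({ ... }) still works, and the definition of "active" lives in exactly one place.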

Call the accessors and then call update() with no arguments. It makes things easier to add logic to, since you can now use method modifiers on the accessors rather than needing to override set_column, and switching between a real column and $something_else is now easy.
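As a fragment - $user here stands in for a row object from a hypothetical Users source:

```perl
# set via the accessors, then save; update() with no arguments
# writes only the columns marked dirty
$user->name('New Name');
$user->email('new@example.com');
$user->update;

# because the accessors are plain methods, logic attaches cleanly:
# e.g. with Class::Method::Modifiers,
#   before name => sub { ... };
```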

For overriding insert/update/delete, the following pattern is your friend:

sub insert {
  my ($self, @args) = @_;
  my $next = $self->next::can;
  $self->result_source->schema->txn_do(sub {
    ... # do stuff before
    my $ret = $self->$next(@args);
    ... # do stuff after
    return $ret;
  });
}

Using Class::Method::Modifiers works fine so long as you've loaded all your components first. Using Moo/Moose usually results in a poor gain to razor blade ratio.

Calling $schema->deploy to set up a basic test database is fine, but see below for managing full deployments.


DBIx::Class::Candy is the shinier, more elegant way to declare result classes.

For new schemas, I invariably reach for this first. It exports a bunch of subs to give you a declarative style, and cleans them all up after compilation so they don't interfere with method calls.

Check out the autotable option for automatic table naming, and the instructions on building a custom candy subclass to save boilerplate (though see also Import::Into for more complex situations).
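A minimal candy-style result class might look like this (names invented; see the DBIx::Class::Candy docs for the full list of exported subs):

```perl
package MyApp::Schema::Result::User;
use DBIx::Class::Candy;  # also turns on strict and warnings for you

table 'users';

column id => {
  data_type => 'int',
  is_auto_increment => 1,
};

column name => {
  data_type => 'varchar',
  size => 255,
};

primary_key 'id';

1;
```

Once compilation finishes, table/column/primary_key vanish from the namespace, so a column named 'table' won't collide with the sugar.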


DBIx::Class::DeploymentHandler is basically an attempt at a version two of the venerable DBIx::Class::Schema::Versioned code. It has lots of different options, but you can happily ignore most of them, work with the defaults, and things will turn out fine.

Follow the synopsis, read up on what prepare_install, prepare_upgrade, install and upgrade actually do, and throwing together a script to handle things is pretty easy.

Be aware that you can add your own custom files, both SQL and perl, and they'll be run in filename order. This makes upgrades that require data migration much less annoying.
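A bare-bones deployment script might look like this (the paths and the MyApp::Schema class are assumptions):

```perl
use strict;
use warnings;
use DBIx::Class::DeploymentHandler;
use MyApp::Schema;  # hypothetical schema class

my $schema = MyApp::Schema->connect('dbi:SQLite:dbname=myapp.db');

my $dh = DBIx::Class::DeploymentHandler->new({
  schema           => $schema,
  script_directory => 'share/migrations',
  databases        => ['SQLite'],
});

# write out DDL (and any custom files) for the current version, then run it
$dh->prepare_install;
$dh->install;
```

Upgrades follow the same shape with prepare_upgrade and upgrade once $VERSION has been bumped in the schema class.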

For a superset of these features with DBIx::Class::Fixtures support, look at DBIx::Class::Migration.


DBIx::Class::Fixtures: dump data, load data. Extremely useful for testing. Sometimes useful for creating an initial database loadout on install. If that takes too long, you might want to consider e.g. cloning the base directory of your db after loading everything and kicking Test::PostgreSQL or similar into loading from that.
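In sketch form (the directories are placeholders, and $schema / $test_schema stand in for connected schema objects):

```perl
use strict;
use warnings;
use DBIx::Class::Fixtures;

my ($schema, $test_schema);  # assume these are connected DBIx::Class schemas

# config_dir holds the JSON configs describing which data to dump
my $fixtures = DBIx::Class::Fixtures->new({ config_dir => 't/fixtures/config' });

# dump from the live schema...
$fixtures->dump({
  config    => 'basic.json',
  schema    => $schema,
  directory => 't/fixtures/data',
});

# ...then load into a freshly deployed test schema, skipping deploy
$fixtures->populate({
  directory => 't/fixtures/data',
  schema    => $test_schema,
  no_deploy => 1,
});
```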

For integration between this, the db-specific Test::* stuff, and various other shinyness, look at Test::DBIx::Class. I'm conflicted over whether to recommend this because I like my tests to be as stupid as possible, but if you're drowning in boilerplate do at least consider it.


If you're storing passwords in your database, please just use this. It's pluggable, salts sanely, and supports switching password hashing approaches later in the day, which can be really useful. If you get it to generate a check_password method, auth systems can delegate knowing which hash to use to it - Catalyst's auth supports this as the 'self_check' pw auth setting.


Tomorrow will, apparently, be all about JSON.

-- mst, out.