A while ago, Django’s testing framework got transaction-based rollback, which did wonders for test performance. One thing that still bothered me, though, was the slow initial table setup. For example, in a modestly sized project of mine with about 40 tables, this would take almost a minute. In particular when writing new tests, which is inevitably an iterative process, that’s really not acceptable.
Now, one obvious thing to do is to use an in-memory SQLite database for testing purposes. I’ve tried that at times, but ultimately, various MySQL-specific stuff and raw SQL queries always made this an unsatisfying experience.
I’ve now finally realized that there is an easy solution, and I’m perplexed it didn’t occur to me earlier (maybe Linux, to which I’ve recently switched, just puts these kinds of options closer to one’s grasp). And it really is pretty straightforward: Mount a tmpfs, run a second MySQL instance on a different socket/port using this mount as a data dir, and tell Django to use it.
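The three steps above can be sketched roughly like this. Note that the mount point, tmpfs size, and port are example values I’ve picked for illustration, not necessarily the ones my script uses:

```shell
# Create a RAM-backed filesystem to hold the test database.
# (256M is an arbitrary example size; adjust to your schema.)
sudo mkdir -p /mnt/mysql-ram
sudo mount -t tmpfs -o size=256M tmpfs /mnt/mysql-ram

# Initialize a fresh MySQL data directory inside the tmpfs,
# then start a second mysqld on its own port and socket so the
# regular MySQL instance is left completely untouched.
mysql_install_db --datadir=/mnt/mysql-ram
mysqld --datadir=/mnt/mysql-ram \
       --socket=/mnt/mysql-ram/mysql.sock \
       --port=3307

# Finally, point the PORT (or, alternatively, the socket path via
# HOST) of Django's DATABASES setting at this instance when
# running the test suite.
```

Since the data directory lives entirely in RAM, everything the test instance writes disappears when the tmpfs is unmounted, which is exactly what you want for throwaway test databases.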
I’ve put the shell script that I’m using on github.
You might want to customize the location of the data directory or the bind options, then simply do:
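Something along these lines (the script name here is hypothetical; substitute whatever you called it):

```shell
# Hypothetical invocation; the script runs in the foreground,
# so the RAM-backed MySQL instance lives only as long as the
# terminal session. Root is needed for the tmpfs mount.
sudo ./mysql-tmpfs.sh
```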
and when you’re done, shut down with Ctrl+C.
The tables that previously took a minute to set up now need only two and a half seconds. It even cuts the runtime of the actual tests, which were already using transaction rollback before, in half. Not surprisingly, my motivation to actually write tests and keep them up to date has noticeably improved.