Docker is an obvious choice for packaging stateless services—but what about databases? We tried it out for packaging our test data sets. After a few lessons learned, the result is a nice win in developer efficiency.
As you may have read in other posts on this blog, Zoosk is made up of a number of separate services. We’ve been focusing recently on making fast and repeatable builds of the entire distributed environment. More on that in a future post—but long story short, packaging into cloud images and containers has taken the build process from infuriating to easy.
With our repeatability problems solved, engineers soon noticed that database loads were the slowest part of bringing up a test instance. It was a pity that we could spin up an instance in 2 minutes and then have to wait 15 minutes more for MySQL to populate.
How much database data could a test instance need? More than you might think. Our test instances have multiple DBs to simulate the sharding we do in production. We have very nice test data sets, with 1k or 10k users with profiles in predictable states. We rely on a device database (the WURFL DB) and a worldwide zip code database. Add it up and we’re loading about 7 gigabytes of MySQL data on each instance. Despite efforts to parallelize the DB loading, it takes time to copy and index all of that data.
Waiting 15 minutes for a database to load does not sound like the end of the world. But our engineers like to run the process regularly to get their database into a known state. And test automation requires a pristine DB, so that time is added to any automation pass. Conservatively, if the process is run 1,000 times per year with a human held up waiting for it, we’re talking hundreds of man-hours wasted doing a repetitive task. Sounds like a job for robots!
We searched around for existing solutions, and saw a trend: to avoid repeatedly loading SQL, do it once, then shut down the database and make a filesystem snapshot. Simple enough. Maybe we’ll create LVM snapshots of /var/lib/mysql, compress them into an artifact, make some tooling to push them to a repository, then install them on other instances …
Docker can turn a container back into an image using the docker commit command. The resulting layer, once reconstituted into docker’s graph storage, is actually a filesystem snapshot. Maybe we can populate mysql running in a container, then docker stop && docker commit, and boom! Test data container.
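In sketch form, the idea looks like this (the container and image names, and the dump file, are ours for illustration, not real tooling):

```shell
# Run a MySQL container and load the test data set into it
docker run -d --name testdata-build percona
mysql -h 127.0.0.1 -u root < test-data-10k.sql   # hypothetical dump file

# Freeze the populated container's filesystem into a reusable image
docker stop testdata-build
docker commit testdata-build zoosk/testdata:10k
```

Anyone with access to the registry can then docker pull the result and have a pre-populated database without replaying the SQL.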
But will mysql work on docker’s COW filesystem? Will docker choke on the bulky data files? Maybe mysql uses sparse files or some metadata that docker doesn’t preserve? Some experimentation was in order.
Initial experimentation was promising, but we noticed that container start-up times were quite slow. This reminded us of a subtlety of docker that resulted in a non-obvious optimization:
Subtlety of docker: volume data is copied at container start time
Non-obvious optimization: don’t use volumes for bulky throw-away data
Startup was slow because docker was duplicating gigs of database files at container start time. It does this to extract the files in “volume” directories out of the image graph and into some other FS that has an independent lifecycle from the container. But in our case, we’re optimizing for startup time and we want the data to have the same lifecycle as the container (we want to throw it out when the container is removed).
We use Percona’s image from Docker Hub, which marks /var/lib/mysql as a volume. We worked around that by extending their Dockerfile and adding a conf file in /etc/mysql/conf.d that uses a different directory as the datadir (see below).
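You can see that volume declaration on the stock image with docker inspect:

```shell
# List the paths the image declares as volumes;
# /var/lib/mysql shows up for the stock Percona image
docker inspect --format '{{ .Config.Volumes }}' percona
```

Any path listed there gets pulled out of the image graph at container create time, which is exactly the copy we were trying to avoid.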
With that change in place, MySQL is perfectly happy running with its data directory inside docker’s COW filesystem. It was, however, pitifully slow to commit transactions. We improved write time by adding innodb_flush_log_at_trx_commit = 0. This setting throttles fsync calls and sacrifices durability, but we can live with that on a test VM. MySQL writes returned to roughly the speed they had when the data lived on the enclosing VM’s own COW filesystem. We also disabled the performance schema to save on memory. The final Dockerfile is thus:
FROM percona

RUN echo "[mysqld]" > /etc/mysql/conf.d/zoosk-vm.cnf \
 && echo "datadir = /var/lib/mysql-image" >> /etc/mysql/conf.d/zoosk-vm.cnf \
 && echo "performance_schema = OFF" >> /etc/mysql/conf.d/zoosk-vm.cnf \
 && echo "innodb_flush_log_at_trx_commit = 0" >> /etc/mysql/conf.d/zoosk-vm.cnf \
 && mkdir -p /var/lib/mysql-image/
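Building and starting the customized image is then the usual workflow (the tag and container name are ours for illustration):

```shell
docker build -t zoosk/mysql-base .
docker run -d --name userdb zoosk/mysql-base
```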
Docker’s layer compression did better than expected with the MySQL data directory. One container with 1.14 GB of database files is only 161.2 MB going over the wire during a docker pull.
Prior to test data containers, each test VM had a single MySQL instance containing a database for each service. With the change to test containers, each DB is in its own container with its own mysqld. To keep from exhausting memory on test VMs, we tune each MySQL instance container using start parameters. For example, a heavily used database might get --max-connections=1000 while a smaller utility DB can get away with --max-connections=50. The --innodb-buffer-pool-size, --innodb-log-buffer-size, --query-cache-size, and --key-buffer-size flags are also handy to reduce memory usage for lightly used DBs.
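As a sketch, a lightly used utility DB might be started like this (the image name and the values are illustrative, not our production numbers):

```shell
# Arguments after the image name are passed through to mysqld
docker run -d --name geodb zoosk/mysql-base \
  --max-connections=50 \
  --innodb-buffer-pool-size=32M \
  --innodb-log-buffer-size=4M \
  --query-cache-size=0 \
  --key-buffer-size=8M
```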
Once our VM DB was broken up into smaller units, we created Jenkins jobs to build empty schema containers from each service’s schema code. Another job creates populated test data containers: it runs all of the empty schema containers at the same time (creating a complete DB environment), into which the test data set is loaded. Finally, a set of test data containers is created with docker commit.
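In outline, the populated-data job does something like this (the service names, image tags, and dump file are placeholders):

```shell
# 1. Bring up an empty schema container per service
docker run -d --name userdb zoosk/userdb-schema
docker run -d --name geodb  zoosk/geodb-schema

# 2. Load the test data set into the running DB environment
mysql -h 127.0.0.1 -u root < test-data-10k.sql   # hypothetical dump

# 3. Freeze each populated container into a test data image
docker stop userdb geodb
docker commit userdb zoosk/userdb-testdata
docker commit geodb  zoosk/geodb-testdata
```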
We run these jobs for every branch that is pushed to our GitHub, so when feature work requires database migrations, a test container is built for testing that feature.
Our engineers are now able to reset a VM’s state with a simple command that destroys and recreates containers. Because we pre-populate database containers into our VM cloud images, in most cases it takes about 15 seconds instead of the 15 minutes it used to. Major win!
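The reset itself is just container churn, along these lines (names illustrative):

```shell
docker rm -f userdb                                 # discard the dirty database state
docker run -d --name userdb zoosk/userdb-testdata   # pristine copy from the image
```

Because the data lives inside the image layers rather than a volume, removing the container throws the dirty state away for free.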
Devs who are working on database migrations follow their usual procedure, and the migration is applied on top of the existing schema in the container.
One area we haven’t explored yet is per-feature test data states. Developing features on a dating site often involves getting a pair of users into particular conversation states. It should be easy for engineers to establish this state once and then create a container for it with docker commit, then share the container for other engineers to pick up where they left off.