Using virtualized environments for development is smart. You can create accurate replicas of arbitrary systems, safely isolated from your host OS and from other development environments. The overhead, however, is significant, both in system resources and in developer effort: developers need higher-end hardware with sufficient memory and storage, plus the skills (or help) to build the systems.
Vagrant made virtualized environments configurable and portable, and made sharing environments popular. If you work in a limited number of environments that happen to be provisioned in a compatible way, you have a framework for distributing environments to developers. Developers, however, typically don't have the skills or the inclination to get their hands dirty with Vagrantfiles or provisioner scripts. And overhead is still an issue: provisioning is slow, and a single-machine environment takes a lot of disk space and requires a significant portion of the memory allotted to its production counterpart. Multi-machine setups, or architectures with many independent services, can complicate a Vagrant-based approach or rule it out entirely.
Enter Docker. Instead of highly isolated virtual machines defined in Vagrant and managed by a hypervisor, Docker runs containers that share the host OS kernel. Containerized processes are essentially native: they do not carry the overhead of a virtual machine, yet they still provide a high level of isolation from the host OS. Docker manages memory and storage resources across containers and provides a good way to package and manage filesystem images, making initial startup much faster than Vagrant and subsequent startups nearly instantaneous.
Docker Compose is a Docker wrapper that allows you to easily define and manage sets of containers for a project.
Compose is a tool for defining and running multi-container applications with Docker. With Compose, you define a multi-container application in a single file, then spin your application up with a single command that does everything needed to get it running.
For example, here is a Docker Compose configuration I use for a Drupal 7 site with Redis and Solr:
mysql:
  image: mysql:5.5
  environment:
    MYSQL_ROOT_PASSWORD: rootpasswd
    MYSQL_DATABASE: drupal
  volumes:
    - ./conf/mysql/conf.d:/etc/mysql/conf.d
  ports:
    - "3306"
redis:
  image: redis:2.8
  ports:
    - "6379"
solr:
  build: ../../build/drupal-solr
  ports:
    - "8983"
web:
  build: ../../build/drupal-nginx-php55x
  ports:
    - "80"
    - "443"
    - "22"
  volumes:
    - /opt/code/example/drupal:/var/www
  links:
    - mysql
    - redis
    - solr
Each top-level key is a named container, and each container declares either an image or a build. Images are discoverable via Docker Hub; in this case, the MySQL and Redis containers use official images.
Containers are linked by adding a links list. So, from the web container, you can connect to the mysql container using mysql --host=mysql --user=root --port=3306 --password=rootpasswd.
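As a quick sanity check of what those links actually provide, here is a hedged sketch you can run once the containers are up (see "Docker Compose Up" below). The container name example_web_1 follows the Compose project shown later, and getent assumes a glibc-based image:
# Inside the VM (or directly on a Linux host), open a shell in the web container.
docker exec -it example_web_1 bash
# The links add /etc/hosts entries for each alias...
getent hosts mysql redis solr
# ...and inject *_PORT_* environment variables for the exposed ports.
env | grep MYSQL_PORT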
If an existing image does not exactly suit your needs, you can easily extend it. For example, the solr container has a build key whose value is a relative path to the directory containing the Dockerfile:
FROM guywithnose/solr:4.10.2
COPY ./conf/solr/search_api/4.x /opt/solr/example/solr/collection1/conf
This extends an existing image with the configuration required by the Drupal Search API Solr module.
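Compose builds this image the first time you bring the project up, but it will not rebuild automatically when the Dockerfile or the copied configuration changes. A minimal sketch of forcing a rebuild, using the service names from the compose file above:
docker-compose build solr      # rebuild just the solr image
docker-compose up -d solr      # recreate the container from the rebuilt image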
For the webserver, I'm using an Nginx + PHP-FPM + Drush image of my own, extended to create the files directories and to copy my public key so that I can run drush commands.
FROM tbfisher/drupal-nginx:php-5.5.x
# Configure files directory.
RUN mkdir -p /var/www_files/public && \
mkdir -p /var/www_files/private && \
chown -R www-data:www-data /var/www_files
COPY ./conf/ssh/authorized_keys /root/.ssh/authorized_keys
To get code onto the webserver container, the Docker Compose file above specifies a volume:
volumes:
- /opt/code/example/drupal:/var/www
In order to use this, we need Docker Compose.
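On Linux you can install it directly on the host. A hedged sketch of one common route at the time of writing (check the Compose documentation for the current method):
pip install docker-compose
docker-compose --version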
Hoops On a Mac
If you are on a Mac, you will need a virtual machine. There are several still-evolving options that wrap VirtualBox with some sort of UI, but they don't seem stable or performant enough yet. Instead, I use Vagrant to build a VM, starting from a base box that already has Docker Compose installed and using fast rsync shared folders.
hostname = 'local.dev'
memory = 4096
cpus = 2
Vagrant.configure('2') do |config|
  # https://atlas.hashicorp.com/tbfisher/boxes/ubuntu1504docker
  config.vm.box = 'tbfisher/ubuntu1504docker'

  # Networking
  config.vm.hostname = hostname
  config.vm.network :private_network,
    # https://github.com/oscar-stack/vagrant-auto_network
    :auto_network => true

  # Synced folders.
  opts = {
    create: true,
    type: 'rsync',
    rsync__exclude: ['.idea/', '.git/'],
    rsync__args: ['--verbose', '--archive', '--delete', '-z']
  }
  # Disable default synced folder.
  config.vm.synced_folder '.', '/vagrant', disabled: true
  # Project/site files.
  config.vm.synced_folder './code/', '/opt/code', opts
  # Docker files.
  config.vm.synced_folder './provision/', '/opt/provision', opts

  # Vagrant provider configuration.
  config.vm.provider :virtualbox do |v|
    v.customize ['modifyvm', :id, '--memory', memory]
    v.cpus = cpus
    v.name = hostname
  end
  config.vm.provider :vmware_fusion do |v|
    v.vmx['memsize'] = memory
    v.vmx['numvcpus'] = cpus
    v.vmx['displayName'] = hostname
  end
end
To get the code and the Docker files onto the virtual machine, we share two directories:
# Project/site files.
config.vm.synced_folder './code/', '/opt/code', opts
# Docker files.
config.vm.synced_folder './provision/', '/opt/provision', opts
To start up:
vagrant up
vagrant rsync-auto > ~/Library/Logs/vagrant-rsync-auto.log &
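A quick, hedged sanity check that the box really does ship Docker and Docker Compose before going further:
vagrant ssh -c 'docker version && docker-compose --version'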
Add Drupal
To test, download a Drupal 7 codebase along with some contrib modules for Redis and Solr:
drush -y dl --destination=code/example --drupal-project-rename drupal
drush -y dl --destination=code/example/drupal/sites/all/modules/contrib redis search_api search_api_solr entity views ctools
On a Mac, you should symlink the files directory to a location that is not shared with your host, to avoid the performance cost of syncing files generated by Drupal.
sudo rm -rf code/example/drupal/sites/default/files && sudo ln -s /var/www_files code/example/drupal/sites/default/files
Add a settings file:
<?php

include dirname(__FILE__) . '/../default/default.settings.php';

$databases['default'] = [
  'default' => [
    'driver' => 'mysql',
    'database' => 'drupal',
    'username' => 'root',
    'password' => 'rootpasswd',
    'host' => 'mysql',
    'prefix' => '',
    'collation' => 'utf8_general_ci',
  ],
];

$conf['redis_client_interface'] = 'PhpRedis';
$conf['cache_backends'] = [
  'sites/all/modules/contrib/redis/redis.autoload.inc',
];
$conf['cache_default_class'] = 'Redis_Cache';
$conf['cache_class_cache_form'] = 'DrupalDatabaseCache';
$conf['lock_inc'] = 'sites/all/modules/contrib/redis/redis.lock.inc';
$conf['redis_client_host'] = 'redis';
$conf['redis_client_port'] = '6379';
$conf['redis_client_base'] = '0';
$conf['redis_client_password'] = '';
Note the simple networking provided by Docker Compose. MySQL has the hostname "mysql", Redis is "redis", and so on, and the ports are those defined by the containers and mapped in docker-compose.yml.
Docker Compose Up
On a Mac, SSH in and set the current directory to the one containing docker-compose.yml:
vagrant ssh
cd /opt/provision/docker/compose/example/
Start up and inspect:
$ docker-compose up -d
Creating example_redis_1...
Creating example_mysql_1...
Creating example_solr_1...
Creating example_web_1...
$ docker-compose ps
     Name                    Command               State     Ports
---------------------------------------------------------------------------------------------------------------------
example_mysql_1    /entrypoint.sh mysqld            Up       0.0.0.0:32775->3306/tcp
example_redis_1    /entrypoint.sh redis-server      Up       0.0.0.0:32774->6379/tcp
example_solr_1     /bin/bash -c /opt/solr/bin ...   Up       0.0.0.0:32776->8983/tcp
example_web_1      /sbin/my_init                    Up       0.0.0.0:32779->22/tcp, 0.0.0.0:32777->443/tcp, 0.0.0.0:32778->80/tcp
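You can also look up a single mapping per service with docker-compose port, which prints the host address and port for a container's private port. With the assignments above:
$ docker-compose port web 80
0.0.0.0:32778
$ docker-compose port mysql 3306
0.0.0.0:32775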
You can manually specify port mappings (e.g. "3306:3306" under ports) or let Docker Compose assign them. Use the former if you want stable ports across runs, since assigned ports change every time you run docker-compose up. The latter is simpler, and it is a simple exercise to script the generation of these aliases (see the sketch after the alias file below). My alias file ~/.drush/local.aliases.drushrc.php for this example run would be:
$aliases['example'] = array(
  'db-url' => 'mysql://root:rootpasswd@local.dev:32775/drupal',
  'remote-host' => 'local.dev',
  'remote-user' => 'root',
  'ssh-options' => '-p 32779 -o \'UserKnownHostsFile /dev/null\' -o \'StrictHostKeyChecking no\' -o \'PasswordAuthentication no\' -o \'IdentitiesOnly yes\' -o \'LogLevel FATAL\'',
  'root' => '/var/www',
  'path-aliases' => array(
    '%drush-script' => 'drush-remote',
  ),
  'uri' => 'http://local.dev:32778',
);
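For scripting the alias generation, here is a rough sketch (the variable names are hypothetical, not part of the original setup) that pulls the assigned ports out of docker-compose port and prints the values the alias file needs:
SSH_PORT=$(docker-compose port web 22 | cut -d: -f2)
HTTP_PORT=$(docker-compose port web 80 | cut -d: -f2)
DB_PORT=$(docker-compose port mysql 3306 | cut -d: -f2)
echo "db-url:   mysql://root:rootpasswd@local.dev:${DB_PORT}/drupal"
echo "ssh-port: ${SSH_PORT}"
echo "uri:      http://local.dev:${HTTP_PORT}"
With the alias in place, drush @example status should run over SSH against the web container.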
Repeat
With this technique, I forget that environments are running. I work on up to four or five projects in a day and never think to shut their environments down. Instead of many virtual machines, I have one; if you are on Linux, you'll have none. The projects I work on are hosted on a variety of platforms, and building a development environment that is close enough is now much simpler than before.