Introduction

Continuing from the last post, we want to automate the creation and download of a DB dump from a remote server. This article introduces an Ansible playbook that:

  1. Sets up a backup directory on the local PC under /tmp/bkups/blog/postgres/db.
  2. Executes a shell command in a Postgres Docker container on a remote server that dumps the database to a bind-mounted volume.
  3. As a last step, downloads the dumped DB from the remote server to the backup directory on the local PC.

TLDR: The playbook can be found here on Bitbucket.

Implementation

First step: create a local backup directory

This is achieved by the following Ansible playbook tasks:

- name: Fetch a backup of a postgres DB that runs on a remote docker container
  hosts: all
  gather_facts: yes

  tasks:
    - name: Set backup_dir and db_dump_name variable
      set_fact:
        backup_dir: /tmp/bkups/blog/postgres/db
        db_dump_name: web_db-{{ ansible_date_time.date }}.bkp

    - name: Make sure backup directory exists on the local machine
      delegate_to: localhost
      become: no
      file:
        state: directory
        path: "{{ backup_dir }}"

Note that we use the Ansible variable ansible_date_time, which is populated as a result of gather_facts: yes; that is the only reason I set it to yes. The date will be used in the file name of the database dump.

Because the backup directory on the remote server shares the same path as the one on my local PC, I set it once as a single variable via set_fact.
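As an aside: if gathering all facts just for the date feels heavy, a possible alternative (a sketch I have not tested against this playbook) is to build the timestamp on the control machine via the pipe lookup, which would allow gather_facts: no:

```yaml
# Untested sketch: lookup('pipe', ...) runs `date` on the Ansible control
# machine, so the date comes from the local PC rather than gathered facts.
- name: Set backup_dir and db_dump_name without gathered facts
  set_fact:
    backup_dir: /tmp/bkups/blog/postgres/db
    db_dump_name: "web_db-{{ lookup('pipe', 'date +%Y-%m-%d') }}.bkp"
```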

Step 2: Execute a command in the docker container on the remote server

The most straightforward way to create a backup of the database inside the Postgres Docker container is to execute a pg_dump command as the postgres user inside the container. This can be achieved with the Ansible command module like so:

- name: Dump the DB to a bind-mounted volume
  command: |
    docker exec -i --user postgres postgres bash -c
    "pg_dump -Fc web_db > bkups/db/{{ db_dump_name }}"

This will execute the pg_dump command on the database named web_db (the name I've chosen for my DB) and dump it under bkups/db. This location is actually a bind mount, defined like so:

services:
  db:
    container_name: postgres
    image: postgres:latest
    restart: always
    environment:
      POSTGRES_DB: web_db
    expose:
      - "5432"
    volumes:
       - /tmp/bkups/blog/postgres:/bkups

This part is from the docker-compose.yml of my blog application.

So after dumping the DB to bkups/db/{{ db_dump_name }} inside the container, it will be available under /tmp/bkups/blog/postgres/db on my remote server. Note that this is exactly the backup_dir chosen above.
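Alternatively, if the community.docker collection is installed, this step could be expressed with its docker_container_exec module instead of shelling out to the docker CLI. This is a sketch under that assumption, not something taken from the original playbook:

```yaml
# Sketch: requires the community.docker collection on the control machine
# (e.g. via `ansible-galaxy collection install community.docker`).
- name: Dump the DB via docker_container_exec
  community.docker.docker_container_exec:
    container: postgres
    user: postgres
    command: /bin/bash -c "pg_dump -Fc web_db > /bkups/db/{{ db_dump_name }}"
```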

Last step: downloading the dumped DB from the remote server to the local PC

This can be achieved via Ansible's fetch module, like so:

- name: Fetch the dumped DB from remote to local
  fetch:
    src: "{{ backup_dir }}/{{ db_dump_name }}"
    dest: "{{ backup_dir }}/"
    flat: yes

Make sure to read up on the default behavior, flat: no, which would place the file under dest inside a hostname/full-source-path directory tree. Also, pay attention to the trailing slash in dest: "{{ backup_dir }}/": with flat: yes, a dest ending in a slash gets the source file name appended, while a dest without one is treated as the complete target file path.

Summarized playbook, execution and result

So, all together, the final playbook looks like this:

---

- name: Fetch a backup of a postgres DB that runs on a remote docker container
  hosts: all
  gather_facts: yes

  tasks:
    - name: Set backup_dir and db_dump_name variable
      set_fact:
        backup_dir: /tmp/bkups/blog/postgres/db
        db_dump_name: web_db-{{ ansible_date_time.date }}.bkp

    - name: Make sure backup directory exists on the local machine
      delegate_to: localhost
      become: no
      file:
        state: directory
        path: "{{ backup_dir }}"

    - name: Dump the DB to a bind-mounted volume
      command: |
        docker exec -i --user postgres postgres bash -c
        "pg_dump -Fc web_db > bkups/db/{{ db_dump_name }}"

    - name: Fetch the dumped DB from remote to local
      fetch:
        src: "{{ backup_dir }}/{{ db_dump_name }}"
        dest: "{{ backup_dir }}/"
        flat: yes

and can be executed, for example, like so:

ansible-playbook playbooks/backup.yml -l blog --key-file=~/.ssh/id_rsa -i $PWD/inventory/
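For reference, the inventory behind -i might contain an entry roughly like the following (the host alias blog matches the -l limit; group name, address, and user are placeholders, not taken from my actual setup):

```ini
; Hypothetical inventory entry: "blog" is the host alias targeted by -l blog.
[webservers]
blog ansible_host=203.0.113.10 ansible_user=root
```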

where -l blog corresponds to the hostname of my blog specified in my Ansible Inventory.

The output of the command should look similar to the following:

PLAY [Fetch a backup of a postgres DB that runs on a remote docker container] *************************************************************************

TASK [Gathering Facts] ********************************************************************************************************************************
ok: [blog]

TASK [Set backup_dir and db_dump_name variable] *******************************************************************************************************
ok: [blog]

TASK [Make sure backup directory exists on the local machine] *****************************************************************************************
changed: [blog -> localhost]

TASK [Dump the DB to a bind-mounted volume] ***********************************************************************************************************
changed: [blog]

TASK [Fetch the dumped DB from remote to local] *******************************************************************************************************
changed: [blog]

PLAY RECAP ********************************************************************************************************************************************
blog                       : ok=5    changed=3    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

We should then have the file on our local PC:

apoehlmann $ ll /tmp/bkups/blog/postgres/db/
total 1680
drwxr-xr-x  3 apoehlmann  wheel    96B Jan 15 22:35 .
drwxr-xr-x  3 apoehlmann  wheel    96B Jan 15 22:35 ..
-rw-r--r--  1 apoehlmann  wheel   837K Jan 15 22:35 web_db-2020-01-15.bkp
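To round things off: a dump is only useful if it can be restored. A hypothetical restore task (untested, and not part of the playbook above) could feed the custom-format dump produced by pg_dump -Fc back through pg_restore inside the container:

```yaml
# Hypothetical sketch: pg_restore reads the -Fc custom-format dump;
# --clean drops existing objects before recreating them.
- name: Restore the dump inside the container
  command: >
    docker exec -i --user postgres postgres
    pg_restore --clean -d web_db /bkups/db/{{ db_dump_name }}
```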