Docker For Developers

Docker Django Stack

Anurag Jain
12 min read · Feb 4, 2018

Table of contents
- Understanding Web App Architecture
- Why do you need Docker? Let's identify the problem statement
- Let's Start Learning about Docker
- List of Questions — How to apply Docker in your Project?
- Apply Docker for Development
- Apply Docker for Production (Dev to Prod workflow)
- References

Understanding Web App Architecture

Before we look into Docker, we should first understand why Docker even exists.

General Web App Architecture

  • The user browses the web app and performs some actions. The front-end sends requests to the back-end, which acts on each request: it fetches data from the DB or stores data in it.
  • Every application has a few requests that take time to handle; for those we set up workers. On such a request we push a task to a queue, and a worker listens to the queue and acts on it.
  • To access data faster we use a cache DB.

Django stack Web App Architecture (Production)

The user accesses the front-end and performs some actions. Each action creates an HTTP request, which goes to Nginx. Nginx hands the request to uWSGI (the app server) or Daphne (the WebSocket server). A rough Nginx sketch follows the list below.

  • uWSGI calls Django functions and stores or fetches data from Postgres or Redis.
  • We also need a WebSocket server for live page updates; in the Django stack we use Daphne. The WebSocket server also connects to Redis and Postgres for channel layers and data access.
  • We have to perform asynchronous tasks, either to improve user experience or application performance. We generally use Celery for this: the app pushes tasks to RabbitMQ, and a Celery worker executes each task as soon as it arrives in the queue.
  • We need the supervisor service for uWSGI, Daphne and the Celery worker. It monitors all configured services and restarts them if they stop unexpectedly.
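The routing Nginx does here could be sketched roughly like this (the uWSGI socket path and Daphne port are assumptions, not values from the article):

upstream django {
    server unix:///tmp/app.sock;               # uWSGI socket (assumed path)
}

server {
    listen 80;

    location /ws/ {
        proxy_pass http://127.0.0.1:8001;      # Daphne (assumed port)
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }

    location / {
        include uwsgi_params;
        uwsgi_pass django;                     # everything else goes to uWSGI
    }
}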

Django stack Web App Architecture (Development)

  • When you do development you don't need supervisor and Nginx; instead, you use the local development server provided by the framework itself.
  • For front-end development you also need Webpack as a dev server and NPM for package management.

Why do you need Docker? Let's identify the problem statement

Assume you just joined a new project/company as a front-end developer and you don't know anything about Django. You get a very small task, e.g. you just need to change a key in a JSON payload.
What will you do? You will change the key directly in the code. But to test it, you have to set up the whole back-end stack.

Problems

  • The setup takes time.
  • We will hear statements like “this is not getting installed, it's throwing an error …”.
  • Do we have to repeat the setup every time a new joiner arrives?

Solutions

  1. Let's write scripts which auto-set up a new joiner's laptop. But that is not a real solution, because:
  • Your script has to support every OS. Why? The front-end developer may also need to use Photoshop, and maybe it is only available for Windows.
  • A few packages may only be available on Linux, and your application may require exactly those packages. How will you provide them on other OSes?
  • The system may require one version of a package while your app requires a different one. For a Python package we can introduce virtualenv, but a non-Python package such as ImageMagick may also need a different version.

2. Move to VirtualBox? It seems like a good solution, as you can even create base images and reuse them. Still, it is not the best solution. Why?

  • VirtualBox heavily utilizes your system resources.
  • Let's say you want to test performance; then you may want to run multiple virtual machines. If we run multiple virtual machines, the host system slows down, because each one runs a complete OS on top of the host OS.

3. Something else? Docker?

Let's Start Learning about Docker

Read the full overview here (highlights below).

  • Docker provides a way to run applications securely isolated in a container, packaged with all its dependencies and libraries.
  • Docker enables you to separate your applications from your infrastructure so you can deliver software quickly.
  • Docker provides the ability to package and run an application in a loosely isolated environment called a container.
  • Containers are lightweight because they don’t need the extra load of a hypervisor, but run directly within the host machine’s kernel. This means you can run more containers on a given hardware combination than if you were using virtual machines.
  • Docker Engine is a client-server application with these major components:
    a) A server which is a type of long-running program called a daemon process (the dockerd command).
    b) A REST API which specifies interfaces that programs can use to talk to the daemon and instruct it what to do.
    c) A command line interface (CLI) client (the docker command).
  • The Docker daemon (dockerd) listens for Docker API requests and manages Docker objects such as images, containers, networks, and volumes.
  • The Docker client (docker) is the primary way that many Docker users interact with Docker. When you use commands such as docker run, the client sends these commands to dockerd, which carries them out.
  • A Docker registry stores Docker images. When you use the docker pull or docker run commands, the required images are pulled from your configured registry. When you use the docker push command, your image is pushed to your configured registry.
  • An image is a read-only template with instructions for creating a Docker container. Often, an image is based on another image, with some additional customization.
  • To build your own image, you create a Dockerfile with a simple syntax for defining the steps needed to create the image and run it.
  • Each instruction in a Dockerfile creates a layer in the image. When you change the Dockerfile and rebuild the image, only those layers which have changed are rebuilt. This is part of what makes images so lightweight, small, and fast, when compared to other virtualization technologies.
  • A container is a runnable instance of an image. You can create, run, stop, move, or delete a container using the Docker API or CLI.
  • You can connect a container to one or more networks, attach storage to it, or even create a new image based on its current state.
  • When a container is removed, any changes to its state that are not stored in persistent storage disappear.
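For example, data that must outlive a container can be placed on a named volume; everything below (volume name, image tag, paths) is illustrative:

$ docker volume create pgdata
$ docker run -d --name db -v pgdata:/var/lib/postgresql/data postgres:10
$ docker rm -f db                                                          # the container and its writable layer are gone
$ docker run -d --name db -v pgdata:/var/lib/postgresql/data postgres:10   # the data on pgdata is still there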

Example docker run command

The following command runs an ubuntu container, attaches interactively to your local command-line session, and runs /bin/bash.

$ docker run -i -t ubuntu /bin/bash

When you run this command, the following happens (assuming you are using the default registry configuration):

  1. If you do not have the ubuntu image locally, Docker pulls it from your configured registry, as though you had run docker pull ubuntu manually.
  2. Docker creates a new container, as though you had run a docker create command manually.
  3. Docker allocates a read-write filesystem to the container, as its final layer. This allows a running container to create or modify files and directories in its local filesystem.
  4. Docker creates a network interface to connect the container to the default network, since you did not specify any networking options. This includes assigning an IP address to the container. By default, containers can connect to external networks using the host machine’s network connection.
  5. Docker starts the container and executes /bin/bash. Because the container is run interactively and attached to your terminal (due to the -i and -t flags), you can provide input using your keyboard and output is logged to your terminal.
  6. When you type exit to terminate the /bin/bash command, the container stops but is not removed. You can start it again or remove it.

Important Docker Commands
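A few commands you will use all the time (the image and container names below are placeholders):

$ docker images                        # list local images
$ docker ps -a                         # list containers, including stopped ones
$ docker build -t myapp:dev .          # build an image from the Dockerfile in the current directory
$ docker run -d --name web myapp:dev   # start a container in the background
$ docker exec -it web /bin/bash        # open a shell inside a running container
$ docker logs -f web                   # follow a container's logs
$ docker stop web && docker rm web     # stop and remove a container
$ docker rmi myapp:dev                 # remove an image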

Dockerfile

Read the full details here (highlights below).

  • The docker build command builds an image from a Dockerfile and a context. The build’s context is the set of files at a specified location PATH or URL. The PATH is a directory on your local filesystem. The URL is a Git repository location.
  • The build is run by the Docker daemon, not by the CLI.
  • To increase the build’s performance, exclude files and directories by adding a .dockerignore file to the context directory.
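For example, with a hypothetical .dockerignore that skips VCS data and build artifacts, the build command is simply:

$ cat .dockerignore
.git
node_modules
*.pyc

$ docker build -t myapp:dev .    # uses the current directory as context, minus the ignored files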

Syntax

  • FROM
    This instruction initializes a new build stage and sets the Base Image for subsequent instructions.

It must be the first instruction in a Dockerfile (only comments and ARG may come before it).

  • MAINTAINER
    This is a non-executable instruction used to indicate the author of the Dockerfile (newer Docker versions deprecate it in favor of a LABEL).
  • RUN
    It is an image build step; the state of the container after a RUN command is committed to the Docker image as a new layer. A Dockerfile can have many RUN steps that layer on top of one another to build the image.

Whenever possible, Docker will reuse the intermediate image layers (cache) to accelerate the docker build process significantly. This is indicated by the Using cache message in the console output.

  • CMD
    It is the command the container executes by default when you launch the built image. A Dockerfile can have only one CMD; it can be overridden when starting a container with docker run $image $other_command.

The major difference between CMD and RUN is that CMD doesn’t execute anything during the build time. It just specifies the intended command for the image. Whereas RUN actually executes the command during build time.
Note: there can be only one CMD instruction in a Dockerfile; if you add more, only the last one takes effect.
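A tiny illustration of the difference (image name and commands are hypothetical):

# In the Dockerfile: RUN executes at build time, CMD only records the default command
RUN pip install -r requirements.txt
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]

# At run time the CMD can be replaced:
$ docker run myapp:dev python manage.py migrate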

  • LABEL
    You can assign metadata in the form of key-value pairs to the image using this instruction.

In older Docker versions each LABEL instruction created a new layer in the image, so it was recommended to use as few LABEL instructions as possible; on current versions this is no longer a concern, but grouping labels is still good practice.

  • EXPOSE
    The EXPOSE instruction informs Docker that the container listens on the specified network ports at runtime.

EXPOSE does not make the ports of the container accessible to the host. To do that, you must use the -p flag (or -P to publish all exposed ports).

  • ENV
    This instruction can be used to set the environment variables in the container.
  • COPY
    The COPY instruction copies new files or directories from <src> and adds them to the filesystem of the container at the path <dest>.
  • ADD
    This instruction is similar to the COPY instruction, with a few added features like remote URL support in the source field and local-only tar extraction. But if you don’t need the extra features, it is suggested to use COPY as it is more readable.
  • ENTRYPOINT
    You can use this instruction to set the primary command for the image. For example, if you have installed only one application in your image and want it to run whenever a container is started, ENTRYPOINT is the instruction for you. When an ENTRYPOINT is set, the elements of CMD are no longer treated as a command of their own; they become default arguments that are appended to the ENTRYPOINT command.
    Both CMD and ENTRYPOINT instructions define what command gets executed when running a container. There are a few rules that describe how they cooperate.

A Dockerfile should specify at least one of CMD or ENTRYPOINT. ENTRYPOINT should be defined when using the container as an executable. CMD should be used as a way of defining default arguments for an ENTRYPOINT command or for executing an ad-hoc command in a container. CMD will be overridden when running the container with alternative arguments.
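A small sketch of how they interact for a Django image (the image name and manage.py commands are illustrative):

ENTRYPOINT ["python", "manage.py"]
CMD ["runserver", "0.0.0.0:8000"]

# docker run myimage            -> python manage.py runserver 0.0.0.0:8000
# docker run myimage migrate    -> python manage.py migrate   (the CMD arguments are replaced)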

  • VOLUME
    The VOLUME instruction creates a mount point with the specified name and marks it as holding externally mounted volumes from native host or other containers.
  • USER
    This is used to set the UID (or username) to use when running the image.
  • WORKDIR
    This is used to set the currently active directory for other instructions such as RUN, CMD, ENTRYPOINT, COPY and ADD. Note that if a relative path is provided, it is interpreted relative to the path of the previous WORKDIR instruction.
  • ONBUILD
    This instruction adds a trigger instruction to be executed when the image is used as the base for some other image. The trigger behaves as if the registered instruction were inserted immediately after the FROM instruction of the downstream Dockerfile. This is typically helpful in cases where you need a static base image with a dynamic config value that changes whenever a new image has to be built (on top of the base image).
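Putting several of these instructions together, a minimal Dockerfile for the Django app above might look roughly like this (the base image, paths and requirements file are assumptions):

FROM python:3.6
ENV PYTHONUNBUFFERED=1
WORKDIR /code

# install dependencies first so this layer stays cached between code changes
COPY requirements.txt /code/
RUN pip install -r requirements.txt

COPY . /code/

EXPOSE 8000
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]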

List of Questions — How to apply Docker in your Project?

I am quite excited about Docker after reading about it. I really want to apply it to my Django stack. But a few immediate questions come to mind:

  • What will be the architecture for development?
  • Does the developer really need to commit Docker images? It seems like overhead for the developer. How can we trust a developer's container? Isn't it enough for the developer to just commit to the code repo?
  • What will be a better workflow (dev to prod)?
  • What will be the architecture for production?
  • Will the Dockerfile be the same for dev and prod?
  • I also heard you can build a production-like environment locally, but I don't get why you would want to run Nginx/supervisor locally. Do I really need to run them?
  • A container may get a new IP address every time it is created; do I need to access services via a new IP address every time?
  • Do I need to copy the code into the container on every code change? When I change code, the Django local server reloads automatically; if I have to copy files on every change, how is this improving the development flow?
  • Do I need to run Redis, RabbitMQ and Postgres in containers? If yes, how will I persist data?
  • Do I really need to run Redis, RabbitMQ and Postgres in containers even if Docker provides data persistence?
  • Do I need to run supervisor? If I don't use supervisor, how can I scale services, e.g. run multiple Celery workers to support different queues? Do I need to create a single Celery container and run all such services there using supervisor?
  • You can change anything in a container and even commit those changes by running docker commit. If that exists, why am I creating a Dockerfile?
  • Someone told me about Vagrant and Kubernetes, but I am not finding any details about them when I search about Docker.
  • Do I need to create a virtualenv?
  • Docker networks and volumes seem complicated to me. Do I really need to understand them?
  • What is docker-compose? Is it only meant for development purposes?
  • How will I make logging persistent?
  • I am writing REST APIs in Django and the front-end consumes them. All the code lives in different repositories; what is the best way to handle Docker?
  • How will it utilize host resources (memory, CPU)? Can I set max limits?

I will be adding details and will try to answer all these questions as per my understanding of Docker.

Apply Docker for Development

Docker Architecture on Developer Machine

  • There will be a micro-service container for each type of service.
  • As all your containers need to talk to each other, they should all be on the same network (solid lines in the diagram).
  • As you are developing, you need to mount your code from the host into the containers; Docker provides this through volumes. You also need to create data volumes to persist data for the Postgres, Redis and RabbitMQ containers (dotted lines go from host to container).

Implement the architecture through Docker cli:

We can use the docker CLI directly (a sketch follows the list below), but it's actually not easy to manage, because you need to

  • remember all the commands.
  • set hostnames or IPs, as you need to reference them in settings.py.
  • manage networks yourself.
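A rough sketch of what that looks like with plain docker commands (all names, image tags and paths are illustrative):

$ docker network create app_net
$ docker volume create pgdata

$ docker run -d --name db --network app_net -v pgdata:/var/lib/postgresql/data postgres:10
$ docker run -d --name redis --network app_net redis:4
$ docker run -d --name rabbit --network app_net rabbitmq:3

$ docker run -d --name web --network app_net -v $(pwd):/code -p 8000:8000 \
    myapp:dev python manage.py runserver 0.0.0.0:8000

Every container needs the right network, volume and port flags, and the web container has to know the other containers' hostnames; this is exactly the bookkeeping docker-compose takes off your hands.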

Implement the architecture through Docker Compose:
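A minimal docker-compose.yml for the development setup could look roughly like this (image tags, the build context and the /code mount are assumptions):

version: '3'

services:
  db:
    image: postgres:10
    volumes:
      - pgdata:/var/lib/postgresql/data
  redis:
    image: redis:4
  rabbit:
    image: rabbitmq:3
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code        # mount the source so the Django dev server reloads on code changes
    ports:
      - "8000:8000"
    depends_on:
      - db
      - redis
      - rabbit

volumes:
  pgdata:

A single docker-compose up then starts everything on a shared default network, so containers can reach each other by service name (e.g. db as the Postgres host in settings.py).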

Apply Docker for Production (Dev to Prod workflow)

(Will be adding it soon…)

References

The following videos and links helped me to understand Docker. Thank you all!
