20 Feb 2018

Django community aggregator: Community blog posts

Django development with Docker — A completed development cycle

After finishing the last post about Django development with Docker, we got a host-isolated development environment, which allows us to encapsulate our application and dependencies. Let's review some tips and improvements for our environment.

Introduction

Usually in our development environment, we can access the local database, install different requirements, reload the running server on code changes, use different settings, or run different commands. If we cannot do that, our development cycle becomes slow and tedious.

In order to solve that, we need to implement these features:

Accessing the database

(Thanks Dilip Maharjan for inspiring this part)

From our last post, this is our configuration. Our database service definition in compose looks like:

https://medium.com/media/35df038b3b4523db615116fe32f5fca4/href
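The embedded snippet is not inlined in this copy. As a sketch of what that db service definition might look like (the database name and root password are taken from the mysql session shown later in the post; the image tag is an assumption based on the server version in that output):

```yaml
version: '3'
services:
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: root          # matches the password used later
      MYSQL_DATABASE: djangodocker_db    # database name used in the settings
```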

And our Django settings looks like:

https://medium.com/media/a86b17b1eb6442fa8f29b23b4e0aed22/href
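The embedded settings are also missing here. As a sketch (the credentials mirror the compose sketch above and are assumptions; the important part is the HOST value, which must be the compose service name):

```python
# djangodocker/settings.py (sketch): HOST must be the compose service
# name "db", which Docker's service discovery resolves to the database
# container's IP on the shared network.
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'djangodocker_db',
        'USER': 'root',
        'PASSWORD': 'root',
        'HOST': 'db',   # compose service name, not 127.0.0.1
        'PORT': 3306,
    }
}
```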

From time to time, you might want to access the database directly, but we cannot reach the database container from outside the Django container.

The Django application inside the container can access the database container because they are on the same network, and Docker has a feature called "automatic service discovery", which resolves "db" to an IP on that network. Read more about this in the official documentation about networking.

However, from outside the Django application, we are not sharing that network and cannot resolve "db" to an IP. We can only connect to a local server.

To solve this, we can do the following:

https://medium.com/media/5ba59371c6e86bb1fdbc1702342e1387/href
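The missing snippet publishes the database port on the host. A sketch of the updated db service (credentials as assumed above):

```yaml
services:
  db:
    image: mysql:5.7
    ports:
      - "3306:3306"   # host_port:container_port — MySQL becomes reachable at 127.0.0.1:3306
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_DATABASE: djangodocker_db
```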

After running docker-compose up -d, you should be able to connect to the database from your machine, using 127.0.0.1 as the host. The default password in this case is root.

$ mysql --host=127.0.0.1 --port=3306 -u root -p -D djangodocker_db
Enter password:
Your MySQL connection id is 10
Server version: 5.7.19 MySQL Community Server (GPL)
Copyright (c) 2000, 2017, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MySQL [djangodocker_db]>

Using different requirements or settings

If we look at our Dockerfile, we find these lines:

https://medium.com/media/4bca43f102e27cd5ff1310a352a19001/href
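Those lines are not inlined in this copy. Presumably they are the usual copy-and-install pair, something like (paths are assumptions):

```dockerfile
COPY requirements.txt /app/requirements.txt
RUN pip install -r /app/requirements.txt
```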

In a development environment, we usually have some dependencies that are only needed there, not in production or testing/CI environments. How can we use a different requirements file depending on the environment? Well, we can use Docker's ARG, which lets us define a variable that users can pass at build-time.

Before showing you how to use it, we need to change our requirements file. Currently it is a single file called requirements.txt with two dependencies: Django and mysqlclient. Let's say we want to install Django Debug Toolbar (I'll use it as an example, but it could be any dependency) only in development, but we don't want it in production. In order to achieve this, we can split that file into three files inside a new requirements/ directory: requirements/base.txt, requirements/development.txt and requirements/production.txt. These are the contents of each file:

https://medium.com/media/1297679b5cfa2a9255224083264af703/href
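The embedded file contents are missing in this copy. Based on the description above, they would look like this (the base file carries the shared dependencies, and the other two include it with pip's -r directive):

```text
# requirements/base.txt
Django
mysqlclient

# requirements/development.txt
-r base.txt
django-debug-toolbar

# requirements/production.txt
-r base.txt
```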

Now we can modify our Dockerfile and our docker-compose.yml to use a specific requirements file. But how?

First, by adding an ARG we can pass a value at build-time, which means it is only available while our Django application image is being built. We can also give it a default value. The Dockerfile below shows how we can use this ARG and keep everything working:

https://medium.com/media/e6fb9fafb046806652db38233acab699/href
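A sketch of those Dockerfile lines (the argument name requirements_file and the paths are assumptions; note the default points at the production file):

```dockerfile
# Build-time variable with a smart default: production requirements.
ARG requirements_file=requirements/production.txt
COPY requirements/ /app/requirements/
RUN pip install -r /app/${requirements_file}
```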

Finally, we have to rebuild the application image and restart the containers:
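The exact commands are not inlined in this copy; rebuilding the image and restarting the containers typically looks like:

```shell
docker-compose build app
docker-compose up -d
```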

Now we just need to find a way to pass the development requirements file instead of the production requirements, and we can do it by using the docker-compose.yml file. In the official documentation, we can see that there's a key called "args" that we can use. It is used at build-time by docker-compose and passed to the image being built. The new version of our docker-compose.yml file looks like:

https://medium.com/media/14cf96f41ee3981d004ea8a0a2a63024/href
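A sketch of the relevant part of the compose file (we assume the ARG in the Dockerfile is named requirements_file):

```yaml
services:
  app:
    build:
      context: .
      args:
        requirements_file: requirements/development.txt
```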

Now, we have to rebuild the application image and restart the containers again:

Now we have Django Debug Toolbar installed! But we don't have it configured yet. We don't want that library configured in production, only in development, and we can use a different settings file for that.

First, let's split our settings.py file and create the settings/base.py, settings/development.py and settings/production.py files.

mkdir djangodocker/settings/
touch djangodocker/settings/__init__.py
mv djangodocker/settings.py djangodocker/settings/base.py

The production.py file will only contain these lines:

https://medium.com/media/05084159beee9c6703a2a00412e5b05f/href
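The embedded file is missing here; as a sketch, production.py can simply re-export the base settings:

```python
# djangodocker/settings/production.py
from .base import *  # noqa: F401,F403 — production uses the base settings as-is
```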

And the development.py with Django Debug Toolbar will contain:

https://medium.com/media/26059e0fbd07b494c661ffdf169c6365/href
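This embed is also missing; a sketch of a development settings file following Django Debug Toolbar's documented setup (INTERNAL_IPS may need adjusting inside Docker, since requests arrive from the container network's gateway rather than 127.0.0.1):

```python
# djangodocker/settings/development.py
from .base import *  # noqa: F401,F403

# Enable Django Debug Toolbar only in development.
INSTALLED_APPS += ['debug_toolbar']
MIDDLEWARE += ['debug_toolbar.middleware.DebugToolbarMiddleware']

# IPs allowed to see the toolbar.
INTERNAL_IPS = ['127.0.0.1']
```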

One final change in urls.py file:

https://medium.com/media/8a4970c9d499984ee1acad75126a3f71/href
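A sketch of that change, following Django Debug Toolbar's documented URL setup of the era (the admin pattern is an assumption; the toolbar routes are only mounted when DEBUG is on):

```python
# djangodocker/urls.py (sketch)
from django.conf import settings
from django.conf.urls import include, url
from django.contrib import admin

urlpatterns = [
    url(r'^admin/', admin.site.urls),
]

if settings.DEBUG:
    import debug_toolbar
    urlpatterns += [
        url(r'^__debug__/', include(debug_toolbar.urls)),
    ]
```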

Now we can use Django Debug Toolbar in development, but how do we select the right settings for development in the Docker container? "With ARG, as we did before" you might say. But by using ARG, we only have that variable at build-time, and we need it at runtime, when the server is starting. Our solution will come from Docker's ENV and docker-compose's environment.

First, define a default value for production environments at the Dockerfile:

https://medium.com/media/91be571a9a89084bf4cbfcf8825faadc/href
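The Dockerfile line is not inlined here; it would be a single ENV instruction along these lines:

```dockerfile
# Default to production settings; docker-compose can override this at runtime.
ENV DJANGO_SETTINGS_MODULE=djangodocker.settings.production
```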

With this configuration every running container will have a DJANGO_SETTINGS_MODULE environment variable. And when we try to run a command from manage.py it will use the production settings. Great!

Now, how do we use it for development? By configuring docker-compose's environment: any variable configured there will appear (and override other variables if required) in the container at runtime. The application service would look like:

https://medium.com/media/4fa98a8be4b956160ace0956a4a88ee5/href
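A sketch of that service definition:

```yaml
services:
  app:
    environment:
      DJANGO_SETTINGS_MODULE: djangodocker.settings.development
```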

Now let's review what we have done:

By using the ARG command in the Dockerfile, we can pass variables at build-time, when the image is being built. These variables are only present at that time; once the container starts running, they are no longer available. They may also have defaults, which allows us to define smart default behaviour, in this case installing the production requirements. We can also change their values at will when building an image, which we did by using the args keyword in docker-compose.yml to install the development requirements.

After that, by using the Dockerfile's ENV command, we can define environment variables that will be present at runtime. This allows us to configure the runtime environment of our application, which we did when we set DJANGO_SETTINGS_MODULE to the production settings. These variables can also be overridden by docker-compose: making use of the environment keyword, we can define and override any variable present in the container at runtime. This allows us to use the development settings in our containers.

Detecting code changes in development (in docker)

A nice feature of Django is its ability to reload the server on code changes. According to the documentation:

The development server automatically reloads Python code for each request, as needed. You don't need to restart the server for code changes to take effect. However, some actions like adding files don't trigger a restart, so you'll have to restart the server in these cases.

But when using Docker, we are editing code on our machine while running it in a container. This prevents code changes from triggering server reloads. Let's see how to use Docker's bind mounts and docker-compose's volumes to get our server reloads back.

Docker's bind mounts allow a file or directory on the host machine to be mounted into a container. As an example, we can run this command on our machine:

https://medium.com/media/76a8ecebb8add717d34f9788f484a09d/href
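The command is not inlined in this copy; a bind-mount run of the shape described below would be (the container name is an arbitrary choice):

```shell
docker run -d \
  --name nginx-bind-test \
  -v "$(pwd)"/target:/app \
  nginx:latest
```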

This command creates a container based on the nginx:latest image. It also mounts the "target" directory of our machine ("$(pwd)"/target, which is the same as ./target) into the /app directory of that running container.

Now we can replicate the same specification with docker-compose:

https://medium.com/media/8ba5e3a01b067b761d2ac6c2fac85bc5/href
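The compose equivalent is missing in this copy; a sketch using the volumes key:

```yaml
services:
  web:
    image: nginx:latest
    volumes:
      - ./target:/app   # host path : container path
```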

With this configuration, if we add or change files in the "./target" directory, those changes will be reflected in the "/app" directory inside the container.

Now, let's do that for our application code. If we look at the Dockerfile, our application code is copied to the /app directory inside the container. Therefore, we have to mount our current host directory (i.e. pwd or ./) into the /app container directory. Our docker-compose.yml will be:

https://medium.com/media/8f284f8a056122c855136c60798a4f20/href
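The updated compose file is missing here; a sketch of the app service with the project root mounted over the code that was copied into the image:

```yaml
services:
  app:
    build:
      context: .
    volumes:
      - .:/app   # mount the project root over the code baked into the image
```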

Now, if you want to check if this is working, open a terminal and attach it to the app service's log:

docker-compose logs -f app

You should see no logs, just the info text "Attaching to djangodocker_app_1". Now, make a change in a Python file, for example by adding a space in the djangodocker/urls.py file. If you look at the logs, you will see something like:

App logs after code changes

Yay! We can detect code changes!

Conclusion

Docker and docker-compose allow us to run a development environment, but knowing how to use them well is not always obvious. Here we learnt some tips that make our development easier and also deepen our Docker knowledge. Now you can access your database when you need to, configure your build for development or production, and detect code changes for a quick development cycle.

The final version of the code can be found here.

What's next?

We still have some pending points from the previous post, three of them are:

These will be topics of upcoming posts…


Django development with Docker -A completed development cycle was originally published in devartis on Medium, where people are continuing the conversation by highlighting and responding to this story.

20 Feb 2018 9:08pm GMT

Continuous Integration and Deployment with Drone, Docker, Django, Gunicorn and Nginx - Part 3

The Introduction

This is the third and final part of a multi-part tutorial covering a simple(ish) setup of a continuous integration/deployment pipeline using Drone.io:0.5. Since Part 2, Drone.io:0.8 has become available. This new version boasts much better documentation, is considerably easier to set up than Drone.io:0.5, and even outlines how to set up your server behind NGINX.

In parts 1 and 2 of this series we:

The last remaining piece of infrastructure for a successful deployment is to allow the outside world the ability to communicate with our Django application's Docker container. This tutorial will cover:

  1. Assigning an Elastic IP to our EC2 instance
  2. Registering a domain
  3. Installing and configuring NGINX to forward traffic to our Django application's Docker container

Step 1: Assign an Elastic IP to your EC2 instance

By default, your EC2 instance will be assigned an IP address. However, if you restart your EC2 instance, it will be assigned a different IP address. This is problematic if you want to tie a domain name to an EC2 instance in a more permanent fashion. Elastic IPs address this situation.

An Elastic IP address is a static IPv4 address designed to let you easily reassign an IP address from one EC2 instance to another without a DNS change. While assigning one to your EC2 instance isn't strictly necessary, it's a good idea to point the domain you will register in step 2 at this address, so that if your EC2 instance is restarted, or you want to associate the address with a different instance, you won't have to update your DNS settings.

To assign an Elastic IP to your instance, in the EC2 Service section of the AWS console, navigate to the Elastic IPs section under Network & Security. From there, simply click on the Allocate new address button at the top of the page and you should see a new Elastic IP address appear in the list of available addresses. Right click on your new IP and click "associate address." This will open up a screen where you can tie your address to your application's EC2 instance.

Now that you have tied the address to your instance, we can now register a domain!

Step 2: Registering a domain

In order to set up NGINX properly, we will need a domain name (if you already have a domain registered and have associated it with your EC2 instance, skip to Step 3). For this tutorial, I will be using Amazon Route 53 as the domain registrar and to route traffic to our EC2 instance. Registering a domain with AWS will cost you $12 a year per domain. Feel free to use a different registrar as the steps should roughly be the same.

In your AWS console and in the services tab, click on Route 53 to go to the Route 53 dashboard. Under the Domains section, click on Registered domains and you should see a list of all currently registered domains. Register a domain with the Register Domain button and follow the displayed steps to register a domain.

After your domain is registered, click on the Hosted zones tab. For your newly registered domain we want to create a new hosted zone via the Create Hosted Zone button at the top of the dashboard. Enter in your domain name and make sure the Type of the hosted zone is set to Public Hosted Zone. Once your hosted zone is created, we need to create a couple of "A" records that should look something like this:

Name   Type   TTL   Value
*      A      1h    34.237.301.14
www    A      1h    34.237.301.14

Now that we have a DNS record associating a specific domain name with our EC2 instance, we need to install and configure NGINX in order to forward that traffic on to our Django application's Docker container.

Step 3: Installing and configuring NGINX

Now that you have an elastic IP pointed at your application's EC2 instance and your domain's DNS configuration is set up to use that IP, we need to install NGINX on your EC2 instance and configure it to correctly forward HTTPS traffic to your application's Docker container.

With the exception of having to set up a firewall (which isn't necessary thanks to AWS' security groups), this Digital Ocean tutorial outlining how to install NGINX is very good.

Once NGINX is installed on your instance, open up an editor (such as Vim or Nano) and make a new file inside of /etc/nginx/sites-enabled/your_site. While I've provided both a site configuration that allows for either HTTP or HTTPS traffic, I recommend you use the HTTPS configuration, which provides a secure/encrypted connection between your server and a client's browser. You can find plenty of good tutorials on how to install SSL certs on the web. I recommend Let's Encrypt certificates from certbot (Imaginary's LetsEncrypt installation tutorial, Digital Ocean's tutorial).

HTTP configuration (not recommended)

# Your sites' conf.

upstream web {
    server 127.0.0.1:8000;
}


server {
    listen 80;
    server_name <your_domain>;  # example.com www.example.com;

    charset utf-8;

    #Max upload size; Adjust to your preference
    client_max_body_size 75M;

    location / {
        proxy_pass          http://web;
        proxy_set_header    Host $host;
        proxy_set_header    X-Real-IP $remote_addr;
        proxy_set_header    X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

HTTPS configuration (Recommended)

upstream web {
    server 127.0.0.1:8000;
}

server {
    listen 80;
    server_name  <your_domain>;

    # Redirect non-https traffic to https
    return 301 https://$host$request_uri;

}

server {
    # SSL configuration
    listen 443 ssl http2;
    server_name  <your_domain>;

    charset utf-8;

    client_max_body_size 75M;

    location / {
           proxy_pass http://web;
           proxy_set_header Host $host;
           proxy_set_header X-Real-IP $remote_addr;
           proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    location ~ /.well-known {
           allow all;
           root /var/www/html;
    }

    ssl_certificate /etc/letsencrypt/live/<your_domain>/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/<your_domain>/privkey.pem; # managed by Certbot

}

The "web" in the upstream web block can be set to anything you want (ex: upstream app), just be sure to keep that name consistent with what is passed in the proxy_pass line (ex: proxy_pass http://app). This block allows you to specify multiple web applications to handle traffic for a specific domain (see the NGINX documentation for more information).

Save your file, test your configuration, and restart NGINX:

sudo nginx -t
sudo systemctl restart nginx

If there are any errors in your config, NGINX will complain. If the restart is successful, then simply try to visit your site via your favorite browser!

http(s)://your_domain

If you only see the NGINX splash page, then double check that your site's configuration is set up correctly ( /etc/nginx/sites-enabled/your_site ).

If you see content related to your site, congratulations! You've correctly configured NGINX to serve traffic to your dockerized Django application. With that, you now have a fully deployed dockerized Django application on an EC2 instance that automatically updates itself when new code is merged into your master branch!

20 Feb 2018 9:03pm GMT

How to Hide And Auto-populate Title Field of a Page in Wagtail CMS

Given

from django.db import models
from wagtail.wagtailcore.models import Page
from wagtail.wagtailsnippets.models import register_snippet

class CountryPage(Page):
    country = models.ForeignKey(
        'Country', blank=False, null=True, unique=True,
        on_delete=models.SET_NULL,
    )

@register_snippet
class Country(models.Model):
    name = models.CharField(max_length=128)

Task

I want to have the title field of my CountryPage to be auto-populated with a ...

Read now

20 Feb 2018 12:38pm GMT