14 Sep 2019

A Single File Asynchronous Django Application

Didn't Daphne turn into a laurel tree?

Django 3.0 alpha 1 came out this week. It introduces ASGI support thanks to lots of hard work by Andrew Godwin.

Thanks to a question from Emiliano Dalla Verde Marcozzi on the shiny new Django forums, I decided to play with it a bit. I adapted my "single file Django app" to ASGI.

Here's how to try it yourself.

First, create a new virtual environment (I'm using Python 3.7.4) and install requirements:

$ python -m venv venv
$ source venv/bin/activate
$ pip install django==3.0a1 daphne==2.3.0

Then create a new file app.py:

import html
import os
import sys

from django.conf import settings
from django.core.asgi import get_asgi_application
from django.http import HttpResponse
from django.urls import path
from django.utils.crypto import get_random_string

settings.configure(
    DEBUG=(os.environ.get("DEBUG", "") == "1"),
    # Disable host header validation
    ALLOWED_HOSTS=["*"],
    # Make this module the urlconf
    ROOT_URLCONF=__name__,
    # We aren't using any security features but Django requires this setting
    SECRET_KEY=get_random_string(50),
)


def index(request):
    name = request.GET.get("name", "World")
    return HttpResponse(f"Hello, {html.escape(name)}!")


urlpatterns = [path("", index)]

application = get_asgi_application()

if __name__ == "__main__":
    from django.core.management import execute_from_command_line

    execute_from_command_line(sys.argv)

Then run it under Daphne:

$ daphne app:application

Visit http://localhost:8000/?name=Django%20user in your browser to see "Hello, Django user!"
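Because the file hands off to execute_from_command_line when run directly, you can also run management commands against it, for example:

$ python app.py check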

Async support in Django 3.0 is the first step, and limited to the outer handler layer. Middleware, views, the ORM, and everything you're used to in Django remains synchronous. The ASGI handler achieves this by running response generation in a thread pool.
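That thread-pool hand-off is essentially what asgiref's sync_to_async provides. A rough sketch of the idea, outside Django:

import asyncio

from asgiref.sync import sync_to_async


def render_response():
    # stand-in for Django's synchronous request handling
    return "Hello, World!"


async def main():
    # the sync function runs on a worker thread, keeping the event loop free
    body = await sync_to_async(render_response)()
    print(body)


asyncio.run(main())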

If you want to use WebSockets, you'll need a second framework for now, like Channels or Starlette.

ASGI is a simple interface with a "turtles all the way down" approach. This lets us glue our Django app to another with a "middleware" application:

django_application = get_asgi_application()
websocket_application = ...  # TODO: make it (see sketch below)


async def application(scope, receive, send):
    if scope['type'] == 'http':
        await django_application(scope, receive, send)
    elif scope['type'] == 'websocket':
        await websocket_application(scope, receive, send)
    else:
        raise NotImplementedError(f"Unknown scope type {scope['type']}")
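As a placeholder for that TODO, here's a minimal sketch of a raw ASGI websocket application — a toy echo server written against the bare protocol (for anything real, you'd reach for Channels or Starlette):

async def websocket_application(scope, receive, send):
    while True:
        event = await receive()
        if event["type"] == "websocket.connect":
            await send({"type": "websocket.accept"})
        elif event["type"] == "websocket.receive":
            # echo text frames straight back to the client
            await send({"type": "websocket.send", "text": event.get("text") or ""})
        elif event["type"] == "websocket.disconnect":
            break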

ASGI support will grow in coming Django versions. DEP 9 outlines Andrew's plan for this work going forwards.

Fin

Hope this helps you get started experimenting with Django on ASGI,

-Adam

14 Sep 2019 4:00am GMT

12 Sep 2019

How I Import Python's datetime Module

Old Father Datetime

Python's datetime module risks a whole bunch of name confusion:

- The module is called datetime, and it contains a class also called datetime, so a bare datetime can refer to either.
- Its time class shares a name with the standard library time module.
- from datetime import datetime shadows the module name, so whether datetime refers to the module or the class depends on the imports at the top of the file.

For these reasons, I use this import idiom and recommend you do too:

import datetime as dt

Rather than any of:

import datetime
from datetime import datetime  # or time, timezone

Then in your code, dt.datetime, dt.time, and dt.timezone will be unambiguous.
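For example, each of these reads unambiguously at the call site:

import datetime as dt

today = dt.date.today()
now = dt.datetime.now(tz=dt.timezone.utc)
midnight = dt.time(0, 0)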

Fin

Hope this helps you save time understanding code,

-Adam

12 Sep 2019 4:00am GMT

11 Sep 2019

Learning to Love Django Tests - Lacey Williams Henschel

SHAMELESS PLUGS

11 Sep 2019 10:00pm GMT

10 Sep 2019

Django shell_plus with Pandas and Jupyter Notebook

10 Sep 2019 9:02pm GMT

Mercurial mirrors updates

Time to clean up my mercurial mirrors as Django 3.0 has just had its first alpha released. Keeping an eye on the supported versions, I did the following changes:

- Added mirror for 3.0 branch: https://bitbucket.org/orzel/django-3.0-production/
- Removed mirror for 1.8 branch
- Removed mirror for 1.9 branch
- Removed mirror for 1.10 branch

Branch 2.0 is officially […]

10 Sep 2019 8:53pm GMT

08 Sep 2019

Custom Application Metrics with Django, Prometheus, and Kubernetes

Why are custom metrics important?

Custom Application Metrics with Django, Prometheus, and Kubernetes

While there are volumes of discourse on the topic, it can't be overstated how important custom application metrics are. Unlike the core service metrics you'll want to collect for your Django application (application and web server stats, key DB and cache operational metrics), custom metrics are data points unique to your domain with bounds and thresholds known only by you. In other words, it's the fun stuff.

How might these metrics be useful? Consider:

Setting up the Django Application

Besides the obvious dependencies (looking at you, pip install Django), we'll need some additional packages for our pet project. Go ahead and pip install django-prometheus. This gives us a Python Prometheus client to play with, as well as some helpful Django hooks, including middleware and a nifty DB wrapper. Next we'll run the Django management commands to start a project and app, update our settings to use the Prometheus client, and add the Prometheus URLs to our URL conf.

Start a new project and app
For the purposes of this post, and in fitting with our agency brand, we'll be building a dog walking service. Mind you, it won't actually do much, but should suffice to serve as a teaching tool. Go ahead and execute:

django-admin.py startproject demo
python manage.py startapp walker

# settings.py

INSTALLED_APPS = [
    ...
    'walker',
    ...
]

Now, we'll add some basic models and views. For the sake of brevity, I'll only include implementation for the portions we'll be instrumenting, but if you'd like to follow along in full just grab the demo app source.

# walker/models.py
from django.db import models
from django_prometheus.models import ExportModelOperationsMixin


class Walker(ExportModelOperationsMixin('walker'), models.Model):
    name = models.CharField(max_length=127)
    email = models.CharField(max_length=127)

    def __str__(self):
        return f'{self.name} // {self.email} ({self.id})'


class Dog(ExportModelOperationsMixin('dog'), models.Model):
    SIZE_XS = 'xs'
    SIZE_SM = 'sm'
    SIZE_MD = 'md'
    SIZE_LG = 'lg'
    SIZE_XL = 'xl'
    DOG_SIZES = (
        (SIZE_XS, 'xsmall'),
        (SIZE_SM, 'small'),
        (SIZE_MD, 'medium'),
        (SIZE_LG, 'large'),
        (SIZE_XL, 'xlarge'),
    )

    size = models.CharField(max_length=31, choices=DOG_SIZES, default=SIZE_MD)
    name = models.CharField(max_length=127)
    age = models.IntegerField()

    def __str__(self):
        return f'{self.name} // {self.age}y ({self.size})'


class Walk(ExportModelOperationsMixin('walk'), models.Model):
    dog = models.ForeignKey(Dog, related_name='walks', on_delete=models.CASCADE)
    walker = models.ForeignKey(Walker, related_name='walks', on_delete=models.CASCADE)

    distance = models.IntegerField(default=0, help_text='walk distance (in meters)')

    start_time = models.DateTimeField(null=True, blank=True, default=None)
    end_time = models.DateTimeField(null=True, blank=True, default=None)

    @property
    def is_complete(self):
        return self.end_time is not None
        
    @classmethod
    def in_progress(cls):
        """ get the list of `Walk`s currently in progress """
        return cls.objects.filter(start_time__isnull=False, end_time__isnull=True)

    def __str__(self):
        return f'{self.walker.name} // {self.dog.name} @ {self.start_time} ({self.id})'

our (not yet instrumented) application models

# walker/views.py
from django.shortcuts import render, redirect
from django.views import View
from django.core.exceptions import ObjectDoesNotExist
from django.http import HttpResponseNotFound, JsonResponse, HttpResponseBadRequest, Http404
from django.urls import reverse
from django.utils.timezone import now
from walker import models, forms


class WalkDetailsView(View):
    def get_walk(self, walk_id=None):
        try:
            return models.Walk.objects.get(id=walk_id)
        except ObjectDoesNotExist:
            raise Http404(f'no walk with ID {walk_id} in progress')


class CheckWalkStatusView(WalkDetailsView):
    def get(self, request, walk_id=None, **kwargs):
        walk = self.get_walk(walk_id=walk_id)
        return JsonResponse({'complete': walk.is_complete})


class CompleteWalkView(WalkDetailsView):
    def get(self, request, walk_id=None, **kwargs):
        walk = self.get_walk(walk_id=walk_id)
        return render(request, 'index.html', context={'form': forms.CompleteWalkForm(instance=walk)})

    def post(self, request, walk_id=None, **kwargs):
        try:
            walk = models.Walk.objects.get(id=walk_id)
        except ObjectDoesNotExist:
            return HttpResponseNotFound(content=f'no walk with ID {walk_id} found')

        if walk.is_complete:
            return HttpResponseBadRequest(content=f'walk {walk.id} is already complete')

        form = forms.CompleteWalkForm(data=request.POST, instance=walk)

        if form.is_valid():
            updated_walk = form.save(commit=False)
            updated_walk.end_time = now()
            updated_walk.save()

            return redirect(f'{reverse("walk_start")}?walk={walk.id}')

        return HttpResponseBadRequest(content=f'form validation failed with errors {form.errors}')


class StartWalkView(View):
    def get(self, request):
        return render(request, 'index.html', context={'form': forms.StartWalkForm()})

    def post(self, request):
        form = forms.StartWalkForm(data=request.POST)

        if form.is_valid():
            walk = form.save(commit=False)
            walk.start_time = now()
            walk.save()

            return redirect(f'{reverse("walk_start")}?walk={walk.id}')

        return HttpResponseBadRequest(content=f'form validation failed with errors {form.errors}')

our (not yet instrumented) application views

Update app settings and add Prometheus URLs
Now that we have a Django project and app set up, it's time to add the required settings for django-prometheus. In settings.py, apply the following:

INSTALLED_APPS = [
    ...
    'django_prometheus',
    ...
]

MIDDLEWARE = [
    'django_prometheus.middleware.PrometheusBeforeMiddleware',
    ...
    'django_prometheus.middleware.PrometheusAfterMiddleware',
]

# we're assuming a Postgres DB here because, well, that's just the right choice :)
DATABASES = {
    'default': {
        'ENGINE': 'django_prometheus.db.backends.postgresql',
        'NAME': os.getenv('DB_NAME'),
        'USER': os.getenv('DB_USER'),
        'PASSWORD': os.getenv('DB_PASSWORD'),
        'HOST': os.getenv('DB_HOST'),
        'PORT': os.getenv('DB_PORT', '5432'),
    },
}

add django-prometheus settings to the relevant parts of settings.py

and add the following to your urls.py

urlpatterns = [
    ...
    path('', include('django_prometheus.urls')),
]

add the django-prometheus URLs to our application urls conf

At this point, we have a basic application configured and primed for instrumentation.


Instrument the code with Prometheus metrics

Thanks to the out-of-the-box functionality provided by django-prometheus, basic model operations like insertions and deletions are tracked immediately. You can see this in action at the /metrics endpoint, where you'll see something like:

default metrics provided by django-prometheus
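For illustration, the model mixins we added earlier produce counters along these lines (metric names as emitted by django-prometheus; the values here are invented):

django_model_inserts_total{model="dog"} 4.0
django_model_updates_total{model="walk"} 2.0
django_model_deletes_total{model="walker"} 0.0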

Let's make this a bit more interesting.

Start by adding a walker/metrics.py where we'll define some basic metrics to track.

# walker/metrics.py
from prometheus_client import Counter, Histogram


walks_started = Counter('walks_started', 'number of walks started')
walks_completed = Counter('walks_completed', 'number of walks completed')
invalid_walks = Counter('invalid_walks', 'number of walks attempted to be started, but invalid')

walk_distance = Histogram('walk_distance', 'distribution of distance walked', buckets=[0, 50, 200, 400, 800, 1600, 3200])

defining the custom application metrics we want to track

Painless, eh? The Prometheus documentation does a good job explaining what each of the metric types should be used for, but in short: we use counters for metrics that only ever increase over time, and histograms for metrics whose distribution of values we want to track. Let's start instrumenting our application code.

# walker/views.py
...
from walker import metrics
...

class CompleteWalkView(WalkDetailsView):
    ...
    def post(self, request, walk_id=None, **kwargs):
        ...
        if form.is_valid():
            updated_walk = form.save(commit=False)
            updated_walk.end_time = now()
            updated_walk.save()

            metrics.walks_completed.inc()
            metrics.walk_distance.observe(updated_walk.distance)

            return redirect(f'{reverse("walk_start")}?walk={walk.id}')

        return HttpResponseBadRequest(content=f'form validation failed with errors {form.errors}')

...

class StartWalkView(View):
    ...
    def post(self, request):
        form = forms.StartWalkForm(data=request.POST)

        if form.is_valid():
            walk = form.save(commit=False)
            walk.start_time = now()
            walk.save()

            metrics.walks_started.inc()

            return redirect(f'{reverse("walk_start")}?walk={walk.id}')

        metrics.invalid_walks.inc()

        return HttpResponseBadRequest(content=f'form validation failed with errors {form.errors}')

our application views with custom metrics instrumentation added

If we make a few sample requests, we'll be able to see the new metrics flowing through the endpoint.

peep the walk distance and created walks metrics

our metrics are now available for graphing in prometheus
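In the Prometheus expression browser, queries along these lines graph the new data (note that prometheus_client appends _total to counter names on export):

rate(walks_started_total[5m])
histogram_quantile(0.95, rate(walk_distance_bucket[5m]))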

By this point we've defined our custom metrics in code, instrumented the application to track these metrics, and verified that the metrics are updated and available at the /metrics endpoint. Let's move on to deploying our instrumented application to a Kubernetes cluster.

Deploying the application with Helm

I'll keep this part brief and limited to the configuration relevant to metric tracking and exporting, but the full Helm chart with complete deployment and service configuration can be found in the demo app. As a jumping-off point, here are some snippets of the deployment and configmap highlighting the portions significant for metric exporting.

# helm/demo/templates/nginx-conf-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ include "demo.fullname" . }}-nginx-conf
  ...
data:
  demo.conf: |
    upstream app_server {
      server 127.0.0.1:8000 fail_timeout=0;
    }

    server {
      listen 80;
      client_max_body_size 4G;

      # set the correct host(s) for your site
      server_name{{ range .Values.ingress.hosts }} {{ . }}{{- end }};

      keepalive_timeout 5;

      root /code/static;

      location / {
        # checks for static file, if not found proxy to app
        try_files $uri @proxy_to_app;
      }

      location ^~ /metrics {
        auth_basic           "Metrics";
        auth_basic_user_file /etc/nginx/secrets/.htpasswd;

        proxy_pass http://app_server;
      }

      location @proxy_to_app {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $http_host;
        # we don't want nginx trying to do something clever with
        # redirects, we set the Host: header above already.
        proxy_redirect off;
        proxy_pass http://app_server;
      }
    }

nginx configmap


# helm/demo/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
...
    spec:
      metadata:
        labels:
          app.kubernetes.io/name: {{ include "demo.name" . }}
          app.kubernetes.io/instance: {{ .Release.Name }}
          app: {{ include "demo.name" . }}
      volumes:
        ...
        - name: nginx-conf
          configMap:
            name: {{ include "demo.fullname" . }}-nginx-conf
        - name: prometheus-auth
          secret:
            secretName: prometheus-basic-auth
        ...
      containers:
        - name: {{ .Chart.Name }}-nginx
          image: "{{ .Values.nginx.image.repository }}:{{ .Values.nginx.image.tag }}"
          imagePullPolicy: IfNotPresent
          volumeMounts:
            ...
            - name: nginx-conf
              mountPath: /etc/nginx/conf.d/
            - name: prometheus-auth
              mountPath: /etc/nginx/secrets/.htpasswd
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          command: ["gunicorn", "--worker-class", "gthread", "--threads", "3", "--bind", "0.0.0.0:8000", "demo.wsgi:application"]
          env:
{{ include "demo.env" . | nindent 12 }}
          ports:
            - name: gunicorn
              containerPort: 8000
              protocol: TCP
           ...

deployment config

Nothing too magick-y here, just your good ol' YAML blob. There are only two important points I'd like to draw attention to:

  1. We put the /metrics endpoint behind basic auth via an nginx reverse proxy with an auth_basic directive set for the location block. While you'll probably want to deploy gunicorn behind a reverse proxy anyway, we get the added benefit of protecting our application metrics in doing so.
  2. We use multi-threaded gunicorn as opposed to multiple workers. While you can enable multiprocess mode for the Prometheus client, it's a more complex setup in a Kubernetes environment. Why is this important? Well, the danger in running multiple workers in a single pod is that each worker reports its own set of metric values on scrape. Since the service is grouped to the pod level in the Prometheus Kubernetes SD scrape config, these (potentially) jumping values will be incorrectly classified as counter resets, leading to inconsistent measurements. You don't need to follow all of the above; the big TL;DR is: if you don't know better, you should probably start with a single gunicorn worker, either single-threaded or multi-threaded.

Deploying Prometheus with Helm

With the help of Helm, deploying Prometheus to the cluster is a 🍰. Without further ado:

helm upgrade --install prometheus stable/prometheus

install Prometheus into the cluster

After a few minutes, you should be able to port-forward into the Prometheus pod (the default container port is 9090).
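With kubectl, assuming the release name prometheus from above (the stable/prometheus chart names its server deployment prometheus-server), that's something like:

$ kubectl port-forward deploy/prometheus-server 9090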

Configuring a Prometheus scrape target for the application

The Prometheus Helm chart has a ton of customization options, but for our purposes we just need to set the extraScrapeConfigs. To do so, start by creating a values.yaml. As in most of the post, you can skip this section and just use the demo app as a prescriptive guide if you'd like. In that file, you'll want:

extraScrapeConfigs: |
  - job_name: demo
    scrape_interval: 5s
    metrics_path: /metrics
    basic_auth:
      username: prometheus
      password: prometheus
    tls_config:
      insecure_skip_verify: true
    kubernetes_sd_configs:
      - role: endpoints
        namespaces:
          names:
            - default
    relabel_configs:
    - source_labels: [__meta_kubernetes_service_label_app]
      separator: ;
      regex: demo
      replacement: $1
      action: keep
    - source_labels: [__meta_kubernetes_endpoint_port_name]
      separator: ;
      regex: http
      replacement: $1
      action: keep
    - source_labels: [__meta_kubernetes_namespace]
      separator: ;
      regex: (.*)
      target_label: namespace
      replacement: $1
      action: replace
    - source_labels: [__meta_kubernetes_pod_name]
      separator: ;
      regex: (.*)
      target_label: pod
      replacement: $1
      action: replace
    - source_labels: [__meta_kubernetes_service_name]
      separator: ;
      regex: (.*)
      target_label: service
      replacement: $1
      action: replace
    - source_labels: [__meta_kubernetes_service_name]
      separator: ;
      regex: (.*)
      target_label: job
      replacement: ${1}
      action: replace
    - separator: ;
      regex: (.*)
      target_label: endpoint
      replacement: http
      action: replace

Prometheus chart values

After creating the file, you should be able to apply the update to your prometheus deployment from the previous step via

helm upgrade --install prometheus stable/prometheus -f values.yaml

To verify everything worked properly, open up your browser to http://localhost:9090/targets (assuming you've already port-forwarded into the running prometheus server Pod). If you see the demo app there in the target list, then that's a big 👍.

Try it yourself

I'm going to make a bold statement here: Capturing custom application metrics and setting up the corresponding reporting and monitoring is one of the most immediately gratifying tasks in software engineering. Luckily for us, it's actually really simple to integrate Prometheus metrics into your Django application, as I hope this post has shown. If you'd like to start instrumenting your own app, feel free to rip configuration and ideas from the full sample application, or just fork the repo and hack away. Happy trails 🐶

08 Sep 2019 8:59am GMT

04 Sep 2019

Django Fellow - Mariusz Felisiak

SHAMELESS PLUGS

04 Sep 2019 10:00pm GMT

31 Aug 2019

Profiling & Optimizing Bottlenecks In Django

In the previous article, we learnt where to start with performance optimization in a Django application and how to find out which APIs to optimize first. In this article, we will learn how to optimize those selected APIs.

Profiling APIs With django-silk

django-silk provides the silk_profile decorator/context manager, which can be used to profile a selected view or a snippet of code. Let's take a slow view to profile and see the results.

import time

from django.http import JsonResponse
from silk.profiling.profiler import silk_profile


@silk_profile()
def slow_api(request):
    time.sleep(2)
    return JsonResponse({'data': 'slow_response'})
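silk_profile also works as a context manager, which is handy for profiling just a snippet inside a view — a minimal sketch (partly_slow_api is a hypothetical view):

import time

from django.http import JsonResponse
from silk.profiling.profiler import silk_profile


def partly_slow_api(request):
    with silk_profile(name='expensive section'):
        time.sleep(2)  # stand-in for the genuinely slow code path
    return JsonResponse({'data': 'response'})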

We need to add the relevant silk settings to the Django settings so that the profile data files are generated and stored in the specified location.

SILKY_PYTHON_PROFILER = True
SILKY_PYTHON_PROFILER_BINARY = True
SILKY_PYTHON_PROFILER_RESULT_PATH = '/tmp/'

Once the above view is loaded, we can see the profile information on silk's profiling page.

On the profile page, silk shows a profile graph and highlights the path where the most time is taken.

It also shows cProfile stats on the same page. The profile data file can be downloaded and used with other visualization tools like snakeviz.

Looking at the above data, we can see that most of the time is spent in time.sleep in our view.

Profiling APIs With django-extensions

If you don't want to use silk, an alternative way to profile Django views is the runprofileserver command provided by the django-extensions package. Install django-extensions and then start the server with the following command.

$ ./manage.py runprofileserver --use-cprofile --nostatic --prof-path /tmp/prof/

This command starts runserver with profiling tools enabled. For each request made to the server, it saves a corresponding .prof profile data file in the /tmp/prof/ folder.

After the profile data is generated, we can use tools like snakeviz or cprofilev to visualize or browse the profile data.

Install snakeviz using pip

$ pip install snakeviz

Open the profile data file using snakeviz.

$ snakeviz /tmp/prof/api.book.list.4212ms.1566922008.prof

It shows icicles graph view and table view of profile data of that view.

These help pinpoint which line of code is slowing down the view. Once it is identified, we can take appropriate action: optimize that code, set up a cache, or move the work to a task queue if it doesn't need to happen in the request-response cycle.
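For instance, a minimal sketch of the caching option using Django's low-level cache API (compute_expensive_stats is a hypothetical stand-in for the slow code):

from django.core.cache import cache


def get_expensive_stats():
    # serve the cached value when present; recompute at most every 5 minutes
    stats = cache.get('expensive_stats')
    if stats is None:
        stats = compute_expensive_stats()  # hypothetical slow function
        cache.set('expensive_stats', stats, timeout=300)
    return stats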

31 Aug 2019 8:51pm GMT

28 Aug 2019

From Django 0.9 to Present - Russell Keith-Magee

SHAMELESS PLUGS

28 Aug 2019 9:23pm GMT

26 Aug 2019

Revisions

Mistakes happen, and that's part of a learning process. In a large project like Django, it can be hard to spot a mistake. Thanks to it being open source, anyone can see the code and fix the mistakes they see. In this post, I'll explain how I found a vulnerability in contrib.postgres.fields.JSONField that allowed SQL injections to be performed. If you're familiar with Python's DB-API, you may have used the …

26 Aug 2019 8:08am GMT

22 Aug 2019

Build a Python Jupyter Notebook Server with Docker & Heroku

Jupyter notebooks have become ...

22 Aug 2019 7:06am GMT

21 Aug 2019

MySQL and Security - Adam Johnson

SHAMELESS PLUGS

21 Aug 2019 10:00pm GMT

Django Tutorial - ManyToManyField via a Comma Separated String in Admin & Forms

The `ManyToManyField` is super...

21 Aug 2019 7:04am GMT

My Appearance on DjangoChat

Let's have a chat

A few weeks ago I had the pleasure of talking over the internet with Will Vincent and Carlton Gibson about lots of Django-related topics. They somewhat informed me it was being recorded for a podcast.

Now the day I anticipated and feared is here. I discovered through this tweet that my voice is online:

Episode 27 - MySQL & Security with Adam Johnson (@AdamChainz) is now live!

Adam is a Django core developer responsible for the popular django-mysql package. We discuss why MySQL still makes sense with Django, security, hosting on AWS, and more.

https://djangochat.com

We talked about all kinds of things including:

If you're interested in Django, head on over to the website and listen to Episode 027: MySQL & Security.

Enjoy!

-Adam

(P.S. Don't get mixed up and go to the wrong Twitter account, @DjangoChat, unless you want to see an adorable French cat.)

21 Aug 2019 4:00am GMT

20 Aug 2019

DjangoCon, Here We Come!

We're looking forward to the international gathering at DjangoCon 2019, in San Diego, CA. The six-day conference, from September 22 - 27, is focused on the Django web framework, and we're proud to attend as sponsors for the tenth year! We're also hosting the second annual Caktus Mini Golf event.

⛳ If you're attending DjangoCon, come play a round of mini golf with us. Look for our insert in your conference tote bag. It includes a free pass to Tiki Town Adventure Golf on Wednesday, September 25, at 7:00 p.m. (please RSVP online). The first round of golf is on us! And whoever shoots the lowest score will win a $100 Amazon gift card.*

Talk(s) of the Town

Among this year's talented speakers is one of our own, Erin Mullaney (pictured). Erin has been with Caktus since 2015, and has worked as a contractor for us since July 2017. On Monday, September 23, she'll share her experiences going from a full-time developer to a contractor in her talk, "Roll Your Own Tech Job: Starting a Business or Side Hustle from Scratch." The talk will cover her first two years as a consultant, including how she legally set up her business and found clients. Erin said she enjoys being her own boss and is excited to share her experiences.

Caktus Developer Jeremy Gibson, who will attend DjangoCon for the first time, is looking forward to expanding his knowledge of Django best practices surrounding queries and data modeling. He's also curious to see what other developers are doing with the framework. He's most looking forward to the sessions about datastore and Django's ORM, including:

Other talks we're looking forward to include:

See the full schedule of talks and tutorials.

Meeting and Greeting

If you'd like to meet the Caktus team during DjangoCon, join us for our second annual Mini Golf Event. Or you can schedule a specific time to chat with us one-on-one.

During the event, you can also follow us on Twitter @CaktusGroup and #DjangoCon2019 to stay tuned in. Check out DjangoCon's Slack channel for attendees, where you can introduce yourself, network, and even coordinate to ride share.

We hope to see you there!

*In the event of a tie, the winner will be selected from a random drawing from the names of those with the lowest score. Caktus employees can play, but are not eligible for prizes.

20 Aug 2019 11:13pm GMT