20 Jan 2017

Planet Python

Kushal Das: Fedora Atomic Working Group update from 2017-01-17

This is an update from the Fedora Atomic Working Group based on the IRC meeting held on 2017-01-17. Fourteen people participated in the meeting; the full log can be found here.

OverlayFS partition

We decided to have a separate Docker partition in Fedora 26. The root partition sizing will also need to be fixed. You can read the full discussion in the Pagure issue.

We also need help writing the documentation for migrating from devicemapper to OverlayFS and back.

How to compose your own Atomic tree?

Jason Brooks will update his document located at Project Atomic docs.

docker-storage-setup patches require more testing

There are pending patches which will require more testing before merging.

Goals and PRD of the working group

Josh Berkus is updating the goals and PRD documentation for the working group. Both short-term and long-term goals can be seen at this etherpad. The previous Cloud Working Group's PRD is much longer than most other groups' PRDs, so we also discussed trimming the Atomic WG PRD.

Open floor discussion + other items

I updated the working group about a recent failure of the QCOW2 image on Autocloud. It appears that if we boot the images with only one VCPU and disable the chronyd service, there is no defined time for the ssh service to be up and running after a reboot.

Misc talked about the hardware plan for FOSP, and later sent a detailed mail to the list about it.

Antonio Murdaca (runcom) brought up the discussion about testing the latest Docker (1.13) and pushing it to F25. We decided to spend more time testing it before pushing it to Fedora 25, since otherwise it may break Kubernetes/OpenShift. We will schedule a 1.13 testing week in the coming days.

20 Jan 2017 4:18am GMT

Python Diary: Encryption experiment in Python

I recently created a toy encryption tool using pure Python, and it's dead simple to implement and use. It is slow in CPython, a bit faster in Cython, and runs nicely in a compiled language like ObjectPascal.

I created this as a way to better understand how encryption works, and to give people who don't understand cryptography an easy-to-read example of the utter basics of encryption. The code can easily be expanded to strengthen it further. It uses exclusive OR (XOR) to toggle bits, which is what does the actual encryption here. It is a stream cipher, so the key and input can be of variable length. The encryption works by using a custom table, or master key as it is labeled in the code, along with an actual password/passphrase. I'd highly recommend passing in a SHA-512 digest of the password.
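
As a rough illustration of the XOR idea, here is a minimal sketch. It is my own example, not the actual KCrypt code (which also mixes in the master-key table); note how the same routine both encrypts and decrypts:

import hashlib

def xor_cipher(data, key):
    # XOR each byte of the input with the key, repeating the key as
    # needed; applying the same function twice restores the original.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# Hashing the passphrase first gives a fixed-length key with a good
# spread of byte values, as recommended above.
key = hashlib.sha512(b"my passphrase").digest()
ciphertext = xor_cipher(b"attack at dawn", key)
assert xor_cipher(ciphertext, key) == b"attack at dawn"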

My initial idea was to create a Crypto Virtual Machine, where each byte in the password/passphrase would map to a virtual op code in the VM. Each op code would do something to the current byte being encrypted, so effectively the password is a bytecode string that tells the VM how to encrypt and decrypt the data. This may make encryption slow, as the VM needs to walk through each and every byte and do something to it, and every operation needs to be reversible using the same bytecode. Essentially you would need two VMs: an encryption VM that performs the encryption of the bytes or blocks, and a decryption VM that performs the decryption. So rather than a fixed algorithm, each byte in your key drives the VM to perform an almost random transformation, making it almost impossible to decrypt without knowing the bytecode the VM needs to decrypt it.
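
To make the idea concrete, here is a hypothetical sketch (the names and the tiny op set are mine, purely illustrative): each key byte selects one of a few reversible byte operations, and decryption runs the inverse table with the same key.

ENC_OPS = [
    lambda b, k: b ^ k,          # op 0: XOR with the key byte
    lambda b, k: (b + k) % 256,  # op 1: add the key byte, modulo 256
    lambda b, k: (b - k) % 256,  # op 2: subtract the key byte, modulo 256
]
DEC_OPS = [
    lambda b, k: b ^ k,          # XOR is its own inverse
    lambda b, k: (b - k) % 256,  # inverse of add is subtract
    lambda b, k: (b + k) % 256,  # inverse of subtract is add
]

def run_vm(data, key, ops):
    # Each key byte acts as both the op code (its value mod the number
    # of ops) and the operand applied to the corresponding data byte.
    return bytes(
        ops[key[i % len(key)] % len(ops)](b, key[i % len(key)])
        for i, b in enumerate(data)
    )

secret = run_vm(b"hello", b"\x01\x02\x03", ENC_OPS)
assert run_vm(secret, b"\x01\x02\x03", DEC_OPS) == b"hello"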

This is only a theory on how more advanced computers with faster processing power could implement encryption, and technically a dedicated microcontroller or processor could implement what the bytecodes do at the hardware level. I still plan on making an example virtual machine like this to play around with the idea. For now, you can check out the encryption project I talked about above in my KCrypt source code.

20 Jan 2017 1:20am GMT

19 Jan 2017

Planet Python

Python Software Foundation: Sheila Miguez and Will Kahn-Greene and their love for the Python Community: Community Service Award Quarter 3 2016 Winners

There are two elements which make Open Source function:
  1. Technology
  2. An active community.
The primary need for a successful community is a good contributor base. Contributors are our real heroes: they work persistently, on many (if not most) occasions without any financial benefit, just for the love of the community. The Python community is blessed with many such heroes. The PSF's quarterly Community Service Award honors these heroes for their notable contributions and dedication to the Python ecosystem.


The PSF is delighted to give the 2016 Third Quarter Community Service Award to Sheila Miguez and Will Kahn-Greene:

Sheila Miguez and William Kahn-Greene for their monumental work in creating and supporting PyVideo over the years.



Community Service Award for 3rd Quarter

Will Kahn-Greene
Taken by Erik Rose, June 2016
The PSF funds a variety of conferences and workshops throughout the year, worldwide, to educate people about Python. But not everyone can attend all of these events. Two people, Sheila Miguez and Will Kahn-Greene, wanted to solve this problem for Pythonistas. Will came up with the brilliant idea of PyVideo, and Sheila later joined the mission. PyVideo serves as a warehouse of videos from Python conferences, local user groups, screencasts, and tutorials.

The Dawn of PyVideo

Back in 2010, Will started a Python video site using the Miro Community video-sharing platform. The PSF encouraged his work with an $1800 grant the following year. As Will recalls, "I was thinking there were a bunch of Python conferences putting out video, but they were hosting the videos in different places. Search engines weren't really finding it. It was hard to find things even if you knew where to look." He started with Miro Community, and later wrote a whole new codebase for generating the data and another codebase for the front end of the website. With these tools he started PyVideo.org. "This new infrastructure let me build a site closer to what I was envisioning."


When Sheila joined the project she contributed both to its technology and to its content, helping the community find Python videos more easily. Originally she intended to work only on the codebase, but found herself dedicating a lot of time to adding content to the site.


What is PyVideo?

PyVideo is a repository that indexes and links to thousands of Python videos. It also provides a website, pyvideo.org, where people can browse the collection, which has grown to more than 5,000 Python videos. The goals for PyVideo are:

  1. Help people get to Python presentations more easily and quickly
  2. Focus on education
  3. Collect and categorize data
  4. Give people an easy, enjoyable experience contributing to open source via PyVideo's GitHub repo

The Community Response

The Python community has welcomed Will and Sheila's noble endeavor enthusiastically. Pythonistas around the world never have to miss another recorded talk or tutorial. Sheila and Will worked relentlessly to give shape to their mammoth task. When I asked Will about the community's response, he said, "Many learned Python by watching videos they found on pyvideo.org. Many had ideas for different things we could do with the site and other related projects. I talked with some folks who later contributed fixes and corrections to the data."


Will and Sheila worked on pyvideo.org only in their spare time, but it became a major catalyst in the growth of the Python community worldwide. According to Will, pyvideo.org has additional, under-publicized benefits:


  • PyVideo is a primary source to survey diversity trends among Python conference speakers around the globe.
  • Since its videos are solely about Python, it is easily searchable and provides more relevant results than general search engines.
  • It offers a preview of conferences: By watching past talks people can choose if they want to go.


PyVideo: The End?

With a blog post Will and Sheila announced the end of pyvideo.org. "I'm pretty tired of working on pyvideo and I haven't had the time or energy to do much on it in a while," Will wrote.


Though they were shutting down the site, they never wanted to lose or waste the valuable data. Will says, "In February 2016 or so, Sheila and I talked about the state of things and I just felt bad about everything. So we decided to focus on extracting the data from PyVideo and make sure that even if the site didn't live on, the data did. We wrote a bunch of tools and infrastructure for a community of people to add to, improve and otherwise work on the data. We figured someone could take the data and build a static site around it." Will wrote a blog post about the status of pyvideo.org's data and invited new maintainers to replace the site.


The end of pyvideo.org broke the hearts of many Pythonistas, including Paul Logston. Paul's mornings used to begin with a talk from the site, and he couldn't give up his morning ritual, so he resolved to replace pyvideo.org. To begin, he wrote a project called "PyTube" for storing videos. Though his interest was initially personal, the project's educational outreach drove him to finish and publicize it. Sheila remembers first noticing Paul when she saw his fork of the pyvideo data repository. She was excited to see that he'd already built a static site generator based on PyVideo data. She read Paul's development philosophy and felt he was the right person to carry on the mission.


In May 2016, at PyCon US, there was a lightning talk on PyVideo and its situation. Paul met some fellow PyVideo followers who, just like him, did not want to lose the site. They decided to work on it during the sprints. Though the structure of the website was ready, a lot still needed to be done: gathering data, curating it, and designing the website. So the contributors divided the work among themselves.


Both Sheila and Will were committed to PyVideo's continued benefit for the community while passing it to new hands. They were satisfied with Paul's work and transferred the domain to his control. Paul's PyTube code became the replacement for pyvideo.org on August 13, 2016.


Emergence of the Successor: The Present Status of PyVideo

The project now has 30 contributors, with Paul serving as project lead. These contributors have kept the mission alive. Though PyVideo's aim is still the same, its technology has changed: the old Django app has been replaced with a static site generated with Pelican, and the data now lives in a separate repository as JSON files. The team's current work emphasizes making the project hassle-free to maintain.


Listen to Paul talking about PyVideo and its future on Talk Python to Me.


The Wings to Fly

Every community needs someone with a vision for its future. Will and Sheila have shown us a path to grow and help the community. It is now our responsibility to take the new PyVideo further. Paul describes its purpose beautifully: "PyVideo's deeper 'why' is the desire to make educating oneself as easy, affordable, and available as possible." Contributors: please come and join the project, and give Paul and the team a hand in moving this great endeavor forward.

19 Jan 2017 10:47pm GMT

PyCharm: Make sense of your variables at a glance with semantic highlighting

Let's say you have a really dense function or method, with lots of arguments passed in and lots of local variables. Syntax highlighting helps some, but can PyCharm do more?

In 2017.1, PyCharm ships with "semantic highlighting" available as a preference. What is it, what problem does it solve, and how do I use it? Let's take a look.

It's So Noisy

Sometimes you have really, really big functions. Not in your codebase, of course, because you are tidy. But hypothetically, you encounter this in a library:

[screenshot: 2016-noselection]

PyCharm helps, of course. Syntax highlighting sorts out the reserved words and different kinds of symbols: bold for keywords, gray for unused symbols, yellow for suggestions, green for string literals. But that doesn't help you focus on the parameter "namespaces". Clicking on a specific symbol highlights it for the rest of the file:

[screenshot: 2016-selection]

That kind of works, but not only do you have to perform an action for each symbol you want to focus on, it also moves your cursor. It's a solution to a different problem.

How can my tool help me scan this Python code without much effort or distraction?

IntelliJ Got It

As you likely know, PyCharm and our other IDEs are built atop the IntelliJ IDE platform. In November, IntelliJ landed an experimental cut of "semantic highlighting":

"Semantic Highlighting, previously introduced in KDevelop and some other IDEs, is now available in IntelliJ IDEA. It extends the standard syntax highlighting with unique colors for each parameter and local variable."

It wasn't available in the IDEs, but you could manually enable it via a developer preference. Here's a quick IntelliJ video describing the problem and how semantic highlighting helps.

With PyCharm 2017.1, the engine is now available to be turned on in preferences. Let's see it in action.

Crank Up the Signal

Blah blah blah, what does it look like?

[screenshot: 2017]

Our noisy function now has some help. PyCharm uses semantic highlighting to assign a different color to each parameter and local variable: the "namespaces" parameter is now a certain shade of green. You can then let color help you scan through the function to track the variable, with no distracting action to isolate one of them or switch focus to another.

To turn on semantic highlighting in your project, on a per-font-scheme basis, visit the Editor -> Colors & Fonts -> Language Defaults preference:

[screenshot: prefs]

Your Colors Make Me Sad

The default color scheme might not work for you. Some folks have visual issues with red and green, for example. Some might have contrast issues in their theme or workspace. Others might simply hate #114D77 (we've all been there).

If you make IDEs for long enough, you learn self-defense, and that means shipping a flexible means of customization:

[screenshot: colorpicker]

The pickers let you assign base colors and gradients to tailor a wide range of local symbols to your needs and taste.

Learn More

PyCharm's goal is to help you be a badass Python developer, and hopefully our use of semantic highlighting helps you make sense of dense code. We're still working on the idea itself as well as the implementation, so feel free to follow along in our bug tracker across all our products, since this isn't a PyCharm-specific feature.

And as usual, if you have any quick questions, drop us a note in the blog comments.

19 Jan 2017 5:14pm GMT

Python Data: Collecting / Storing Tweets with Python and MongoDB

A good amount of the work that I do involves using social media content for analyzing networks, sentiment, influencers and other various types of analysis.

In order to do this type of analysis, you first need to have some data to analyze. You can also scrape websites like Twitter or Facebook using simple web scrapers, but I've always found it easier to use the APIs that these companies/websites provide to pull down data.

The Twitter Streaming API is ideal for grabbing data in real-time and storing it for analysis. Twitter also has a Search API that lets you pull down a certain number of historical tweets (I think I read it was the last 1,000 tweets… but it's been a while since I've looked at the Search API). I'm a fan of the Streaming API because it lets me grab a much larger set of data than the Search API, but it requires you to build a script that 'listens' to the API for your required keywords and then stores those tweets somewhere for later analysis.

There are tons of ways to connect up to the Streaming API. There are also quite a few Twitter API wrappers for Python (and most of them work very well). I tend to use Tweepy more than others due to its ease of use and simple structure. Additionally, if I'm working on a small / short-term project, I tend to reach for MongoDB to store the tweets using the PyMongo module. For larger / longer-term projects I usually connect the streaming API script to MySQL instead of MongoDB, simply because MySQL fits into my ecosystem of backup scripts, etc. better than MongoDB does. MongoDB is perfectly suited to this type of work for larger projects… I just tend to swing toward MySQL for those projects.

For this post, I wanted to share my script for collecting Tweets from the Twitter API and storing them into MongoDB.

Note: This script is a mashup of many other scripts I've found on the web over the years. I don't recall where I found the pieces/parts of this script but I don't want to discount the help I had from other people / sites in building this script.

Collecting / Storing Tweets with Python and MongoDB

Let's set up our imports:

from __future__ import print_function
import tweepy
import json
from pymongo import MongoClient

Next, set up your mongoDB path:

MONGO_HOST= 'mongodb://localhost/twitterdb'  # assuming you have mongoDB installed locally
                                             # and a database called 'twitterdb'

Next, set up the words that you want to 'listen' for on Twitter. You can use words or phrases separated by commas.

WORDS = ['#bigdata', '#AI', '#datascience', '#machinelearning', '#ml', '#iot']

Here, I'm listening for words related to machine learning, data science, etc.

Next, let's set up our Twitter API Access information. You can set these up here.

CONSUMER_KEY = "KEY"
CONSUMER_SECRET = "SECRET"
ACCESS_TOKEN = "TOKEN"
ACCESS_TOKEN_SECRET = "TOKEN_SECRET"

Time to build the listener class.

class StreamListener(tweepy.StreamListener):    
    #This is a class provided by tweepy to access the Twitter Streaming API. 

    def on_connect(self):
        # Called initially to connect to the Streaming API
        print("You are now connected to the streaming API.")
 
    def on_error(self, status_code):
        # On error - if an error occurs, display the error / status code
        print('An Error has occurred: ' + repr(status_code))
        return False
 
    def on_data(self, data):
        #This is the meat of the script...it connects to your mongoDB and stores the tweet
        try:
            client = MongoClient(MONGO_HOST)
            
            # Use twitterdb database. If it doesn't exist, it will be created.
            db = client.twitterdb
    
            # Decode the JSON from Twitter
            datajson = json.loads(data)
            
            #grab the 'created_at' data from the Tweet to use for display
            created_at = datajson['created_at']

            #print out a message to the screen that we have collected a tweet
            print("Tweet collected at " + str(created_at))
            
            #insert the data into the mongoDB into a collection called twitter_search
            #if twitter_search doesn't exist, it will be created.
            db.twitter_search.insert(datajson)
        except Exception as e:
            print(e)

Now that we have the listener class, let's set everything up to start listening.

auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
auth.set_access_token(ACCESS_TOKEN, ACCESS_TOKEN_SECRET)
#Set up the listener. The 'wait_on_rate_limit=True' is needed to help with Twitter API rate limiting.
listener = StreamListener(api=tweepy.API(wait_on_rate_limit=True)) 
streamer = tweepy.Stream(auth=auth, listener=listener)
print("Tracking: " + str(WORDS))
streamer.filter(track=WORDS)

Now you are ready to go. The full script is below. You can save it as "streaming_API.py" and run it with "python streaming_API.py". Assuming you set up MongoDB and your Twitter API keys correctly, you should start collecting tweets.

The Full Script:

from __future__ import print_function
import tweepy
import json
from pymongo import MongoClient

MONGO_HOST= 'mongodb://localhost/twitterdb'  # assuming you have mongoDB installed locally
                                             # and a database called 'twitterdb'

WORDS = ['#bigdata', '#AI', '#datascience', '#machinelearning', '#ml', '#iot']

CONSUMER_KEY = "KEY"
CONSUMER_SECRET = "SECRET"
ACCESS_TOKEN = "TOKEN"
ACCESS_TOKEN_SECRET = "TOKEN_SECRET"


class StreamListener(tweepy.StreamListener):    
    #This is a class provided by tweepy to access the Twitter Streaming API. 

    def on_connect(self):
        # Called initially to connect to the Streaming API
        print("You are now connected to the streaming API.")
 
    def on_error(self, status_code):
        # On error - if an error occurs, display the error / status code
        print('An Error has occurred: ' + repr(status_code))
        return False
 
    def on_data(self, data):
        #This is the meat of the script...it connects to your mongoDB and stores the tweet
        try:
            client = MongoClient(MONGO_HOST)
            
            # Use twitterdb database. If it doesn't exist, it will be created.
            db = client.twitterdb
    
            # Decode the JSON from Twitter
            datajson = json.loads(data)
            
            #grab the 'created_at' data from the Tweet to use for display
            created_at = datajson['created_at']

            #print out a message to the screen that we have collected a tweet
            print("Tweet collected at " + str(created_at))
            
            #insert the data into the mongoDB into a collection called twitter_search
            #if twitter_search doesn't exist, it will be created.
            db.twitter_search.insert(datajson)
        except Exception as e:
            print(e)

auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
auth.set_access_token(ACCESS_TOKEN, ACCESS_TOKEN_SECRET)
#Set up the listener. The 'wait_on_rate_limit=True' is needed to help with Twitter API rate limiting.
listener = StreamListener(api=tweepy.API(wait_on_rate_limit=True)) 
streamer = tweepy.Stream(auth=auth, listener=listener)
print("Tracking: " + str(WORDS))
streamer.filter(track=WORDS)
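
Once tweets are flowing in, a quick way to sanity-check the collection is to query it from another Python session. This is just a sketch; the collection name matches the script above, and the field names come from Twitter's JSON schema:

from pymongo import MongoClient

client = MongoClient('mongodb://localhost/twitterdb')
db = client.twitterdb

# Print the five most recently stored tweets. Sorting on _id in
# descending order approximates insertion order without an extra index.
for tweet in db.twitter_search.find().sort('_id', -1).limit(5):
    print(tweet.get('created_at'), '-', tweet.get('text'))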

The post Collecting / Storing Tweets with Python and MongoDB appeared first on Python Data.

19 Jan 2017 1:17pm GMT

PyTennessee: PyTN Profiles: Deborah Hanus and Axial Healthcare



Speaker Profile: Deborah Hanus (@deborahhanus)

Deborah graduated from MIT with her Master's and Bachelor's degrees in computer science, where she developed mathematical models of human perception. Then, as a Fulbright Scholar in Cambodia, she investigated how education translates into job creation. She worked as an early software engineer at a San Francisco startup before taking a break to work on exciting data and programming-related projects as a PhD candidate in machine learning at Harvard.

Deborah will be presenting "Lights, camera, action! Scraping a great dataset to predict Oscar winners" at 11:00AM Sunday (2/5) in Room 100. Using Jupyter notebooks and scikit-learn, you'll predict whether a movie is likely to win an Oscar or be a box office hit. (http://oscarpredictor.github.io/) Together, we'll step through the creation of an effective dataset: asking a question your data can answer, writing a web scraper, and answering those questions using nothing but Python libraries and data from the Internet.

Sponsor Profile: Axial Healthcare (@axialhealthcare)

axialHealthcare is the nation's leading pain medication and pain care management company. Our cutting-edge analytics engine mines data to give insurers a comprehensive view of their pain problem and what it's costing them. axial's pain management solutions improve financial performance by engaging practitioners and patients, optimizing pain care outcomes, and reducing opioid misuse. The axialHealthcare team is comprised of some of the nation's top physicians, scientists, pharmacists, and operators in the field of pain management. Our team is mission-focused, smart, collaborative, and growing. Learn more at axialhealthcare.com.

19 Jan 2017 12:34pm GMT

PyTennessee: PyTN Profiles: Keynoter Courey Elliott and Big Apple Py


Speaker Profile: Courey Elliott (@dev_branch)

Courey Elliott is a software engineer at Emma who has a love of architecture, automation, methodology, and programming principles. They enjoy working on community projects and microcomputing in their spare time. They have a spouse, two kids, two GIANT dogs, two cats, five chickens, a lizard, and a hedgehog named Jack.

Courey will be presenting at 4:00PM Saturday (2/4) in the Auditorium.

Sponsor Profile: Big Apple Py (@bigapplepyinc)

Big Apple Py is a New York State non-profit that promotes the use and education of open source software, in particular, the Python programming language, in and around New York City. Big Apple Py proudly organizes the NYC Python (http://nycpython.org), Learn Python NYC (http://learn.nycpython.org), and Flask-NYC (http://flask-nyc.org) meetup groups, as well as PyGotham (https://pygotham.org), an annual regional Python conference.

19 Jan 2017 12:27pm GMT

Python Anywhere: New release! File sharing, and some nice little fixes and improvements

Honestly, it's strange. We'll work on a bunch of features, care about them deeply for a few days or weeks, commit them to the CI server, and then when we come to deploy them a little while later, we'll have almost forgotten about them. Take today, when Glenn and I were discussing writing the blog post for this release:

-- "Not much user-visible stuff in this one was there? Just infrastructure I think..."

-- "Let's have a look. Oh yes, we fixed the ipad text editor. And we did the disable-webapps-on-downgrade thing. Oh yeah, and change-webapp-python-version, people have been asking for that. Oh, wow, and shared files! I'd almost totally forgotten!"

So actually, dear users, lots of nice things to show you.

File sharing

People have been asking us since forever whether they could use PythonAnywhere to share code with their friends, or to show off that famous text-based guessing game we've all made early in our programming careers. And, after years of saying "I keep telling people there's no demand for it", we've finally managed to make a start.

If you open up the Editor on PythonAnywhere you'll see a new button marked Share.

screenshot of share button

You'll be able to get a link that you can share with your friends, who'll then be able to view your code, and, if they dare, copy it into their own accounts and run it.

screenshot of share menu

We're keen to know what you think, so do send feedback!

Change Python version for a web app

Another feature request, more minor this time; you'll also see a new button that'll let you change the version of Python for an existing web app. Sadly the button won't magically convert all your code from Python 2 to Python 3 though, so that's still up to you...

screenshot of change python ui

More debugging info on the "Unhandled Exception" page

When there's a bug, or your code raises an exception for whatever reason, your site will return our standard "Unhandled Exception" page. We've now enhanced it so that, if it notices you're currently logged into PythonAnywhere and are the owner of the site, it will show you some extra debugging info that's not visible to other users.

screenshot of new error page

Why not introduce some bugs into your code and see for yourself?

iPad fix, and other bits and pieces

We finally gave up on using the fully-featured syntax-highlighting editor on iPads (it seemed like it worked, but it really didn't once you tried to do anything remotely complicated) and have reverted to using a simple textarea.

If you're trying to use PythonAnywhere on a mobile device and notice any problems with the editor, do let us know, and we'll see if we can do the same for your platform.

Other than that, nothing major! A small improvement to the workflow for people who downgrade and re-upgrade their accounts, and a fix to a bug with __init__.py in Django projects.

Keep your suggestions and comments coming, thanks for being our users and customers, and speak soon!

Harry + the team.

19 Jan 2017 10:29am GMT

18 Jan 2017

Planet Python

Flavio Percoco: On communities: Trading off our values... Sometimes

Not long ago I wrote about how much emotions matter in every community. In that post I explained the importance of emotions, how they affect our work and why I believe they are relevant for pretty much everything we do. "Emotions matter" is a post focused on how we can affect other people's emotional state with our actions.

I've always considered myself an almost-thick-skinned person. Things affect me, but not in a way that would prevent me from moving forward. Most of the time, at least. I used to think this was a weakness; I used to think that letting these emotions through would slow me down. With time I came to accept it as a strength. Acknowledging this characteristic of mine has helped me be more open about the relevance of emotions in our daily interactions and to be mindful of other folks who, like me, are almost-thick-skinned or not even skinned at all. I've also come to question the real existence of the so-called thick-skinned people, and the more I interact with people, the more I'm convinced they don't really exist.

If you asked me what emotion hits me the most, I would probably say frustration. I'm often frustrated about things happening around me, especially about things that I am involved with. I don't spend time on things I can't change, but rather try to focus on those that not only directly affect me but that I can also have a direct impact on.

At this point, you may be wondering why I'm saying all this and what it has to do with communities and with this post. Bear with me for a bit; I promise you this is relevant.

Culture (as explained in this post), emotions, personality and other factors drive our interactions with other team members. For some people, working in teams is easier than for others, although everyone claims they are awesome teammates (sarcasm intended, sorry). I believe, however, that one of the most difficult things about working with others is the constant evaluation of the things we value as team members, humans, professionals, etc.

There are no perfect teams and there are no perfect teammates. We weigh the relevance of our values every day, in every interaction we have with other people, in everything we do.

But, what values am I talking about here?

Anything, really. Anything that is important to us. Anything that we stand for and that has slowly become a principle for us, our modus operandi. Our values are our methods. Our values are those beliefs that silently tell us how to react under different circumstances. Our values tell us whether we should care about other people's emotions or not. Controversially, our values are the things that will and won't make us valuable in a team and/or community. Our values are not things we possess; they are things we are and believe. In other words, the things we value are the things we consider important, and they determine our behavior, our interaction with our environment and how the events happening around us will affect us.

The constant trading off of our values is hard. It makes us question our own stances. What's even harder is putting other people's values above ours from time to time. This constant evaluation is not supposed to be easy; it's never been easy. Not for me, at least. Let's face it, we all like to be stubborn; it feels good when things go the way we like. It's easier to manage, it's easier to reason about things when they go our way.

Have you ever found yourself doing something that will eventually make someone else's work useless? If yes, did you do it without first talking with that person? How much value do you put into splitting the work and keeping other folks motivated, instead of doing most of it yourself just to get it done? Do you think going faster is more important than having a motivated team? How do you measure your success? Do you base success on achieving a common goal or on your personal performance in the process?

Note that the questions above don't try to express an opinion. Each of them has two or more possible answers depending on your point of view, and that's fine. I don't even think there's a right answer to those questions. However, they do question our beliefs. Choosing one option over the other may go in favor of, or against, what we value. This is true for many areas of our life, not only our work environment. It applies to our social life, our family life, etc.

Some values are easier to question than others, but we should all spend more time thinking about them. I believe the time we spend weighing and re-evaluating our values allows us to adapt faster to new environments and to grow as individuals and communities. Your cultural values have a great influence on this process. Whether you come from an individualist culture or a collectivist one (listen to 'Customs of the World' for more info on this) will make you prefer one option over the other.

Of course, balance is the key. Giving up our beliefs every time is not the answer, but never giving them up is definitely frustrating for everyone and makes interactions with other cultures more difficult. There are things that cannot be traded, and that's fine. That's understandable, that's human. That's how it should be. Nonetheless, there are more things that can be traded than there are things you shouldn't give up. The reason I'm sure of this is that our world is extremely diverse, and we wouldn't be where we are if we weren't able to give up some of our own beliefs from time to time.

I don't think we should give up who we are; I think we should constantly evaluate whether our values are still relevant. It's not easy, though. No one said it was.

18 Jan 2017 11:00pm GMT

PyTennessee: PyTN Profiles: Kenneth Reitz and Intellovations

Speaker Profile: Kenneth Reitz (@kennethreitz)

Kenneth Reitz is a well-known software engineer, international keynote speaker, open source advocate, street photographer, and electronic music producer.

He is the product owner of Python at Heroku and a fellow at the Python Software Foundation. He is well-known for his many open source software projects, specifically Requests: HTTP for Humans.

Kenneth will be presenting "The Reality of Developer Burnout" at 11:00AM Sunday (2/5) in the Auditorium.

Sponsor Profile: Intellovations (@ForecastWatch)

Intellovations builds intelligent and innovative software that helps you understand, communicate, and use your data to make better decisions, increase productivity, and discover new knowledge.

We specialize in large-scale data collection and analysis, Internet-based software, and scientific and educational applications. We have experience building systems that have collected over 500 million metrics per day from Internet-based hardware, have created powerful desktop Internet-search products, and have used genetic algorithms and genetic programming for optimization.

Intellovations' main product is ForecastWatch, a service that continually monitors and assesses the accuracy of weather forecasts around the world, and is in use by leaders in the weather forecasting industry such as AccuWeather, Foreca, Global Weather Corporation, MeteoGroup, Pelmorex, and The Weather Company.

18 Jan 2017 7:11pm GMT

PyTennessee: PyTN Profiles: Matthew Montgomery and Elevation Search Solutions


Speaker Profile: Matthew Montgomery (@signed8bit)

Matthew Montgomery is a Technical Leader at Cisco Systems in the OpenStack group. He has been working professionally on the web since 2000, when he joined Sun Microsystems and worked on a number of high-volume, customer-facing web properties. Moving on after the Oracle acquisition of Sun, he worked briefly in the consultant racket with Accenture and then made some meaningful contributions to clinical workflow with Vanderbilt University Medical Center. Prior to Cisco he was focusing on digital marketing applications deployed on Amazon Web Services. Through most of this, Matthew has called Nashville home and has no plans to change that in the future.

Matthew will be presenting "Test your Automation!" at 3:00PM Saturday (2/4) in Room 300. Learn how to apply the principles of unit testing to your automation code. Using Molecule and Testinfra, this tutorial will provide hands-on guidance for testing an Ansible role.
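
For those unfamiliar with Testinfra, the tests themselves are plain Python run against the provisioned host. A minimal sketch of the kind of check such a tutorial might build (the package and service names are placeholders, not from the talk):

    # test_role.py - run with Testinfra, e.g. via "molecule verify"
    def test_nginx_installed_and_running(host):
        # Testinfra injects the `host` fixture for the target machine.
        assert host.package("nginx").is_installed
        service = host.service("nginx")
        assert service.is_running
        assert service.is_enabled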

Sponsor Profile: Elevation Search Solutions (@elevationsearch)

Elevation Search Solutions is a boutique search firm specializing in team build outs for growing companies. We are exceptional at sourcing top professionals to fit unique cultures and push business to the next level.

18 Jan 2017 7:03pm GMT

DataCamp: Pandas Cheat Sheet for Data Science in Python

The Pandas library is one of the most preferred tools for data scientists to do data manipulation and analysis, next to matplotlib for data visualization and NumPy, the fundamental library for scientific computing in Python on which Pandas was built.

The fast, flexible, and expressive Pandas data structures are designed to make real-world data analysis significantly easier, but this might not be immediately the case for those who are just getting started with it, precisely because there is so much functionality built into the package that the options can be overwhelming.

That's where this Pandas cheat sheet might come in handy.

It's a quick guide through the basics of Pandas that you will need to get started on wrangling your data with Python.

As such, you can use it as a handy reference if you are just beginning your data science journey with Pandas or, if you haven't started yet, as a guide that makes it easier to learn about and use the library.

Python Pandas Cheat Sheet

The Pandas cheat sheet will guide you through the basics of the Pandas library, going from the data structures to I/O, selection, dropping indices or columns, sorting and ranking, retrieving basic information of the data structures you're working with to applying functions and data alignment.

In short, everything that you need to kickstart your data science learning with Python!
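
To give a flavour of what the cheat sheet covers, here is a tiny illustrative sketch (the data and column names are invented for the example):

    import pandas as pd

    # Data structures
    df = pd.DataFrame({"name": ["Ada", "Grace"], "score": [95, 98]})

    # I/O
    df.to_csv("scores.csv", index=False)

    # Selection
    top = df[df["score"] > 96]

    # Sorting and ranking
    df = df.sort_values("score", ascending=False)
    df["rank"] = df["score"].rank(ascending=False)

    # Basic information and applying functions
    print(df.describe())
    df["score"] = df["score"].apply(lambda s: s + 1)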

Do you want to learn more? Start the Intermediate Python For Data Science course for free now or try out our Pandas DataFrame tutorial!

Also, don't miss out on our Bokeh cheat sheet for data visualization in Python and our Python cheat sheet for data science.

18 Jan 2017 6:56pm GMT

Caktus Consulting Group: Ship It Day Q1 2017

Last Friday, Caktus set aside client projects for our regular quarterly ShipIt Day. From gerrymandered districts to RPython and meetup planning, the team started off 2017 with another great ShipIt.

Books for the Caktus Library

Liza uses Delicious Library to track books in the Caktus Library. However, the tracking of books isn't visible to the team, so Scott used the FTP export feature of Delicious Library to serve the content on our local network. He dockerized Caddy, deployed it to our local Dokku PaaS platform, and now serves the export over HTTPS, allowing the team to see the status of the Caktus Library.

Property-based testing with Hypothesis

Vinod researched property-based testing in Python. Traditionally it has been used more with functional programming languages, but Hypothesis brings the concept to Python. He also learned about new Django features, including the testing optimizations introduced with setUpTestData.
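
For a flavour of what that looks like, here is a minimal Hypothesis sketch (illustrative only, not Vinod's actual experiment): rather than hand-picking inputs, you state a property and Hypothesis searches for counterexamples:

    from hypothesis import given
    from hypothesis import strategies as st

    @given(st.lists(st.integers()))
    def test_sorting_is_idempotent(xs):
        # Property: sorting an already-sorted list changes nothing.
        once = sorted(xs)
        assert sorted(once) == once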

Caktus Wagtail Demo with Docker and AWS

David looked into migrating a Heroku-based Wagtail deployment to a container-driven deployment using Amazon Web Services (AWS) and Docker. Utilizing Tobias' AWS Container Basics isolated Elastic Container Service stack, David created a Dockerfile for Wagtail and deployed it to AWS. Down the road, he'd like to more easily debug performance issues and integrate it with GitLab CI.

Local Docker Development

During Code for Durham Hack Nights, Victor noticed that local development setup was a barrier to entry for new team members. To help mitigate this issue, he researched using Docker for local development with the Durham School Navigator project. In the end, he used Docker Compose to run a multi-container Docker application with PostgreSQL, NGINX, and Django.

Caktus Costa Rica

Daryl, Nicole, and Sarah really like the idea of opening a branch Caktus office in Costa Rica and drafted a business plan to do so! Including everything from an executive summary, to operational and financial plans, the team researched what it would take to run a team from Playa Hermosa in Central America. Primary criteria included short distances to an airport, hospital, and of course, a beach. They even found an office with our name, the Cactus House. Relocation would be voluntary!

Improving the GUI test runner: Cricket

Charlotte M. likes to use Cricket to see test results in real time and have the ability to easily re-run specific tests, which is useful for quickly verifying fixes. However, she encountered a problem causing the application to crash sometimes when tests failed. So she investigated the problem and submitted a fix via a pull request back to the project. She also looked into adding coverage support.

Color your own NC Congressional District

Erin, Mark, Basia, Neil, and Dmitriy worked on an app that visualizes and teaches you about gerrymandered districts. The team ran a mini workshop to define goals and personas, and help the team prioritize the day's tasks by using agile user story mapping. The app provides background information on gerrymandering and uses data from NC State Board of Elections to illustrate how slight changes to districts can vastly impact the election of state representatives. The site uses D3 visualizations, which is an excellent utility for rendering GeoJSON geospatial data. In the future they hope to add features to compare districts and overlay demographic data.

Releasing django_tinypng

Dmitriy worked on testing and documenting django_tinypng, a simple Django library that allows optimization of images using TinyPNG. He published the app to PyPI so it's easily installable via pip.

Learning Django: The Django Girls Tutorial

Gerald and Graham wanted to sharpen their Django skills by following the Django Girls Tutorial. Gerald learned a lot from the tutorial and enjoyed the format, including how it steps through blocks of code describing the syntax. He also learned about how the Django Admin is configured. Graham knew that following tutorials can sometimes be a rocky process, so he worked together with Gerald so they could talk through problems, and Graham was able to learn by reviewing and helping.

Planning a new meetup for Digital Project Management

When Elizabeth first entered the Digital Project Management field several years ago, there were not a lot of resources available specifically for digital project managers. Most information was related to more traditional project management, or the PMP. She attended the 2nd Digital PM Summit with her friend Jillian, and loved the general tone of openness and knowledge sharing (they also met Daryl and Ben there!). The Summit was a wonderful resource. Elizabeth wanted to bring the spirit of the Summit back to the Triangle, so during Ship It Day, she started planning for a new meetup, including potential topics and meeting locations. One goal is to allow remote attendance through Google Hangouts, to encourage openness and sharing without having to commute across the Triangle. Elizabeth and Jillian hope to hold their first meetup in February.

Kanban: Research + Talk

Charlotte F. researched Kanban to prepare for a longer talk to illustrate how Kanban works in development and how it differs from Scrum. Originally designed by Toyota to improve manufacturing plants, Kanban focuses on visualizing workflows to help reveal and address bottlenecks. Picking the right tool for the job is important, and one is not necessarily better than the other, so Charlotte focused on outlining when to use one over the other.

Identifying Code for Cleanup

Calvin created redundant, a tool for identifying technical debt. Last ShipIt he was able to locate completely identical files, but he wanted to improve on that. Now the tool can identify functions that are almost the same and/or might be generalizable. It searches for patterns and generates a report of your codebase. He's looking for codebases to test it on!
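
As a rough illustration of the general idea - this is not Calvin's implementation - one way to flag functions that are "almost the same" is to fingerprint each one's AST with identifiers normalized away:

    import ast

    def fingerprint(source):
        """Dump an AST with variable and argument names blanked out."""
        tree = ast.parse(source)
        for node in ast.walk(tree):
            if isinstance(node, ast.Name):
                node.id = "_"
            elif isinstance(node, ast.arg):
                node.arg = "_"
        return ast.dump(tree)

    # These two differ only in naming, so their fingerprints match.
    a = "def add(x, y):\n    return x + y\n"
    b = "def add(first, second):\n    return first + second\n"
    print(fingerprint(a) == fingerprint(b))  # True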

RPython Lisp Implementation, Revisited

Jeff B. continued exploring how to create a Lisp implementation in RPython, the framework behind the PyPy project. RPython is a restricted subset of the Python language. In addition to learning about RPython, he wanted to better understand how PyPy is capable of performance enhancements over CPython. Jeff also converted his parser to use Alex Gaynor's RPLY project.

Streamlined Time Tracking

At Caktus, time tracking is important, and we've used a variety of tools over the years. Currently we use Harvest, but it can be tedious to use when switching between projects a lot. Dan would like a tool to make this process more efficient. He looked into Project Hamster, but settled on building a new tool. His implementation makes it easy to switch between projects with a single click. It also allows users to sync daily entries to Harvest.

18 Jan 2017 4:39pm GMT

PyCharm: PyCharm 2017.1 EAP 3 (build 171.2455.3)

We're happy to announce the next EAP for PyCharm 2017.1, get it now from our website!

This week, we've fixed several issues and added some functionality.

Download it now from our website! To keep up to date with our EAP releases, set your update channel to Early Access Program: Settings | Appearance & Behavior | System Settings | Updates, and set "Automatically check updates for" to "Early Access Program".

-PyCharm Team
The Drive to Develop

18 Jan 2017 4:34pm GMT

GoDjango: How I Deploy Django Day-to-Day

There are a lot of ways to deploy Django, so I think it is one of those topics people are really curious about: how do other people do it? Generally, all deploys need to get the latest code, run migrations, collect static files, and restart web server processes. How you do those steps - that is the interesting part.
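
As a concrete sketch of those four steps, here is what a minimal Fabric (1.x) task might look like - the host, paths, and service name are placeholders, not anything from the video:

    # fabfile.py - a minimal deploy sketch (placeholder host/paths/service)
    from fabric.api import cd, env, run, sudo

    env.hosts = ["deploy@example.com"]

    def deploy():
        with cd("/srv/myproject"):
            run("git pull origin master")                             # latest code
            run("venv/bin/python manage.py migrate --noinput")        # migrations
            run("venv/bin/python manage.py collectstatic --noinput")  # static files
        sudo("systemctl restart gunicorn")                            # restart web processes

Running "fab deploy" then executes each step over SSH on the listed host.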

In today's video I go over how I deploy Django day to day, followed by some other ways I have done it. This is definitely a topic you can make as easy or as complicated as you want.

Here is the link again: https://www.youtube.com/watch?v=43lIXCPMw_8?vq=hd720

18 Jan 2017 4:00pm GMT

Experienced Django: Django Debug Toolbar

Django Debug Toolbar is a nifty little utility that lets you examine what's going on under the hood. It's a fairly easy install and gives quite a lot of info.

Installation

I'm not going to waste your time (or mine) with details of how to install the debug toolbar. The instructions are here.

I will, however, point out that the "tips" page starts with "The toolbar isn't displayed!", which helped me get running. My problem was a lack of <body> </body> tags in my template. (Side note: I'm wondering if something like Bootstrap would provide those surrounding tags automatically.)
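
That said, the core wiring is small enough to show for reference; a rough sketch (exact steps vary by version, so trust the official docs over this):

    # settings.py
    INSTALLED_APPS += ["debug_toolbar"]
    MIDDLEWARE += ["debug_toolbar.middleware.DebugToolbarMiddleware"]
    INTERNAL_IPS = ["127.0.0.1"]  # the toolbar only renders for these IPs

    # urls.py
    import debug_toolbar
    from django.conf.urls import include, url

    urlpatterns += [url(r"^__debug__/", include(debug_toolbar.urls))]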

Using The Toolbar

The use of the toolbar is pretty obvious. The information is clearly laid out in each of the sections.

The section I found the most interesting was the SQL tab (shown below), which not only shows which queries were done for the given page, but also how long each took.

The page I instrumented has a task which updates several fields in the database the first time it is loaded on any given date. Using this tab it was clear how much of the page load time was taken up in this update process.

Not only would this be handy for performance troubleshooting, but it's also instructive to see which Python statements turn into queries, and how.
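
For example, the SQL tab makes N+1 query patterns jump out immediately. A sketch with hypothetical models (not from the post):

    from django.db import models

    class Author(models.Model):
        name = models.CharField(max_length=100)

    class Entry(models.Model):
        author = models.ForeignKey(Author, on_delete=models.CASCADE)

    # One query for the entries, then one more per row to fetch each author:
    for entry in Entry.objects.all():
        print(entry.author.name)

    # A single JOINed query instead - the toolbar shows the count collapse:
    for entry in Entry.objects.select_related("author"):
        print(entry.author.name)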

Conclusion

As a fan of development tools, I find that Django Debug Toolbar makes me happy not only with its features, but also with its simplicity of use and design. I would definitely recommend it.

18 Jan 2017 2:03pm GMT

10 Nov 2011

feedPython Software Foundation | GSoC'11 Students

Benedict Stein: King William's Town Railway Station

Yesterday morning I had to go to the station in KWT to pick up the bus tickets we had reserved for the Christmas holidays in Cape Town. The station itself has been without train service since December for cost reasons - but Translux and co., the long-distance buses, have their offices there.



© benste CC NC SA

10 Nov 2011 10:57am GMT

09 Nov 2011

feedPython Software Foundation | GSoC'11 Students

Benedict Stein

Nobody is worried about something like that - you simply drive through by car, and in the city - near Gnobie - "no, that only gets dangerous once the fire brigade is there" - 30 minutes later, on the way back, the fire brigade was there.




© benste CC NC SA

09 Nov 2011 8:25pm GMT

08 Nov 2011

feedPython Software Foundation | GSoC'11 Students

Benedict Stein: Braai Party

Braai = a barbecue evening or something similar.

The would-be technicians, patching up their SpeakOn / jack plug splitters...

The ladies - the "mamas" of the settlement - during the official opening speech

Even though fewer people turned up than expected: loud music and lots of people ...

And of course a fire with real wood for the braai.

© benste CC NC SA

08 Nov 2011 2:30pm GMT

07 Nov 2011

feedPython Software Foundation | GSoC'11 Students

Benedict Stein: Lumanyano Primary

One of our missions was bringing Katja's Linux Server back to her room. While doing that we saw her new decoration.

Björn and Simphiwe carried the PC to Katja's school


© benste CC NC SA

07 Nov 2011 2:00pm GMT

06 Nov 2011

feedPython Software Foundation | GSoC'11 Students

Benedict Stein: Nelisa Haircut

Today I went with Björn to Needs Camp to visit Katja's host family for a special party. First of all we visited some friends of Nelisa - yeah, the one I'm working with in Quigney - Katja's host father's sister - who gave her a haircut.

African women usually get their hair done by adding extensions, not, like Europeans, by just cutting some hair.

In between she looked like this...

And then she was done - it looks amazing considering the amount of hair she had last week, doesn't it?

© benste CC NC SA

06 Nov 2011 7:45pm GMT

05 Nov 2011

feedPython Software Foundation | GSoC'11 Students

Benedict Stein: My Saturday

Somehow it struck me today that I need to restructure my blog posts a bit - if I only ever report on new places, I'd have to go on a round trip. So here are a few things from my everyday life today.

First of all: Saturday counts as a day off, at least for us volunteers.

This weekend only Rommel and I are on the farm - Katja and Björn are at their placements by now, and my housemates Kyle and Jonathan are at home in Grahamstown, as is Sipho, who lives in Dimbaza.
Robin, Rommel's wife, has been in Woodie Cape since Thursday to take care of a few things there.
Anyway, this morning we first treated ourselves to a shared Weetbix/muesli breakfast and then set off for East London. Two things were on the checklist: Vodacom and Ethienne (the estate agent), plus bringing the missing things to NeedsCamp on the way back.

Just after we had set off on the dirt road, we realised that we hadn't packed the things for NeedsCamp and Ethienne, but did have the pump for the water supply in the car.

So in East London we first went to Farmerama - no, not the online game Farmville - but a shop with all kinds of things for a farm, in Berea, a northern part of town.

At Farmerama we got advice on a quick-release coupling that should make life with the pump easier, and we also dropped off a lighter pump for repair, so that it isn't such a big effort every time the water runs out.

Fego Caffé is in the Hemmingways Mall; there we had to get the PIN and PUK for one of our data SIM cards, since unfortunately a transposed digit had slipped into the PIN entry. Anyway, shops in South Africa store data as sensitive as the PUK - which in principle gives access to a locked phone.

In the cafe Rommel then carried out a few online transactions with the 3G modem, which was working again - and which, by the way, now works perfectly in Ubuntu, my Linux system.

On the side, I went to 8ta to find out about their new deals, since we want to offer internet in some of Hilltop's centres. The picture shows the UMTS coverage in NeedsCamp, Katja's place. 8ta is a new phone provider from Telkom; after Vodafone bought Telkom's shares in Vodacom, they have to build their network up from scratch.
We decided to organise a free prepaid card to test, because who knows how accurate the coverage map above is... Before you sign even the cheapest 24-month deal, you should know whether it works.

After that we went to Checkers in Vincent, looking for two hotplates for WoodyCape - R 129.00 each, i.e. about 12€ for a two-ring hotplate.
As you can see in the background, the Christmas decorations are already up - at the beginning of November, and that in South Africa at a sunny, warm 25°C and above.

For lunch we treated ourselves to a Pakistani curry takeaway - highly recommended!
Well, and after we got back an hour or so ago, I cleaned the fridge, which I had simply put outside to defrost this morning. Now it's clean again, and without a 3m-thick layer of ice...

Tomorrow... I'll report on that separately... but probably not until Monday, because then I'll be back in Quigney (East London) and have free internet.

© benste CC NC SA

05 Nov 2011 4:33pm GMT

31 Oct 2011

feedPython Software Foundation | GSoC'11 Students

Benedict Stein: Sterkspruit Computer Center

Sterkspruit is one of Hilltop's computer centres in the far north of the Eastern Cape. On the trip to J'burg we used the opportunity to take a look at the centre.

Pupils in the big classroom


The Trainer


School in Countryside


Adult Class in the Afternoon


"Town"


© benste CC NC SA

31 Oct 2011 4:58pm GMT

Benedict Stein: Technical Issues

What do you do in an internet cafe when your ADSL and fax line have been cut off before the end of the month? Well, my idea was to sit outside and eat some ice cream.
At least it's sunny and not as rainy as it was on the weekend.


© benste CC NC SA

31 Oct 2011 3:11pm GMT

30 Oct 2011

feedPython Software Foundation | GSoC'11 Students

Benedict Stein: Nellis Restaurant

For those who are travelling through Zastron - there is a very nice restaurant which serves delicious food at reasonable prices.
In addition, they sell home-made juices, jams and honey.




interior


home made specialities - the shop in the shop


the Bar


© benste CC NC SA

30 Oct 2011 4:47pm GMT

29 Oct 2011

feedPython Software Foundation | GSoC'11 Students

Benedict Stein: The way back from J'burg

On the 10-12h trip from J'burg back to ELS I was able to take a lot of pictures, including these different roadsides

Plain Street


Orange River in its beginnings (near Lesotho)


Zastron Anglican Church


The bridge between the "Free State" and the Eastern Cape next to Zastron


my new Background ;)


If you listen to GoogleMaps you'll end up travelling 50km of gravel road - as it had just been renewed we didn't have that many problems, and we saved 1h compared to going the official way with all its construction sites




Freeway


getting dark


© benste CC NC SA

29 Oct 2011 4:23pm GMT

28 Oct 2011

feedPython Software Foundation | GSoC'11 Students

Benedict Stein: How does a road construction site actually work?

Sure, some things may be different, but a lot is the same - still, a road construction site is an everyday sight in Germany, so how does that actually work in South Africa?

First things first - NO, there are no natives digging with their hands - even though more manpower is used here, they are busily working with technology.

A perfectly normal "federal highway"


and how it is being widened


loooots of trucks


because here one side is completely closed over a long stretch, resulting in a traffic-light arrangement with - in this case - a 45-minute wait


But at least they seem to be having fun ;) - as did we, because luckily we never had to wait longer than 10 min.

© benste CC NC SA

28 Oct 2011 4:20pm GMT