01 May 2016

Planet Python

Podcast.__init__: Episode 55 - LibCloud with Anthony Shaw

Visit our site to listen to past episodes, support the show, join our community, and sign up for our mailing list.

Summary

More and more of our applications are running in the cloud and there are increasingly more providers to choose from. The LibCloud project is a Python library to help us manage the complexity of our environments from a uniform and pleasant API. In this episode Anthony Shaw joins us to explain how LibCloud works, the community that builds and supports it, and the myriad ways in which it can be used. We also got a peek at some of the plans for the future of the project.

Brief Introduction

Rollbar Logo

I'm excited to tell you about a new sponsor of the show, Rollbar.

One of the frustrating things about being a developer is dealing with errors… (sigh)

  • Relying on users to report errors
  • Digging thru log files trying to debug issues
  • A million alerts flooding your inbox ruining your day…

With Rollbar's full-stack error monitoring, you get the context, insights and control you need to find and fix bugs faster. It's easy to get started tracking the errors and exceptions in your stack. You can start tracking production errors and deployments in 8 minutes or less, and Rollbar works with all major languages and frameworks, including Ruby, Python, JavaScript, PHP, Node, iOS, Android and more. You can integrate Rollbar into your existing workflow, such as sending error alerts to Slack or HipChat, or automatically creating new issues in GitHub, JIRA, Pivotal Tracker, etc.

We have a special offer for Podcast.__init__ listeners. Go to rollbar.com/podcastinit, sign up, and get the Bootstrap Plan free for 90 days. That's 300,000 errors tracked for free. Loved by developers at awesome companies like Heroku, Twilio, Kayak, Instacart, Zendesk, Twitch and more. Help support Podcast.__init__ and give Rollbar a try today. Go to rollbar.com/podcastinit

Linode Sponsor Banner

Use the promo code podcastinit20 to get a $20 credit when you sign up!

Interview with Anthony Shaw

Keep In Touch

Picks

Links

The intro and outro music is from Requiem for a Fish The Freak Fandango Orchestra / CC BY-SA

Brief Introduction

  • Hello and welcome to Podcast.__init__, the podcast about Python and the people who make it great.
  • Subscribe on iTunes, Stitcher, TuneIn or RSS
  • Follow us on Twitter or Google+
  • Give us feedback! Leave a review on iTunes, Tweet to us, send us an email or leave us a message on Google+
  • Join our community! Visit discourse.pythonpodcast.com for your opportunity to find out about upcoming guests, suggest questions, and propose show ideas.
  • I would like to thank everyone who has donated to the show. Your contributions help us make the show sustainable. For details on how to support the show you can visit our site at pythonpodcast.com
  • Linode is sponsoring us this week. Check them out at linode.com/podcastinit and get a $20 credit to try out their fast and reliable Linux virtual servers for your next project.
  • The Open Data Science Conference in Boston is happening on May 21st and 22nd. If you use the code EP during registration you will save 20% off of the ticket price. If you decide to attend then let us know, we'll see you there!
  • Your hosts as usual are Tobias Macey and Chris Patti.
  • Today we are interviewing Anthony Shaw about the Apache LibCloud project.

Interview with Anthony Shaw

  • Introductions
  • How did you get introduced to Python? - Chris
  • What is LibCloud and how did it get started? - Tobias
  • How much overhead does using libcloud impose versus native SDKs for performance sensitive APIs like block storage? - Chris
  • What are some of the design patterns and abstractions in the library that allow for supporting such a large number of cloud providers with a mostly uniform API? - Tobias
  • Given that there are such differing services provided by the different cloud platforms, do you face any difficulties in exposing those capabilities? - Tobias
  • How does LibCloud compare to similar projects such as the Fog gem in Ruby? - Tobias
  • What inspired the choice of Python as the language for creating the LibCloud project? Would you make the same choice again? - Tobias
  • Which versions of Python are supported and what challenges has that created? - Tobias
  • What is your opinion on the state

01 May 2016 12:53am GMT


30 Apr 2016

Planet Python

PythonClub - A Brazilian collaborative blog about Python: Sites Estáticos com Lektor

Originally published at: humberto.io/2016/4/sites-estaticos-com-lektor

For at least four years I have been meaning to set up a blog, and along the way I actually built a few, but when it came time to create my own I could never manage to publish.

At first, with publishing tools like WordPress, the problem was how hard it was to customize, plus all the extra stuff bundled in that I would never use but that kept distracting me. Then, with GitHub Pages, I discovered Pelican on a recommendation from Magnun Leno and got a lot done with it, but just as I gained freedom to customize, the writing process became the same as the development process, and, as the blog's subtitle says, my scientist, pythonista and curious side kept poking me to improve the site instead of writing content.

Then, in a conversation in the Python community's Telegram group, someone mentioned Lektor, and so the adventure began.

Lektor?

Lektor is a static content management system created by Armin Ronacher (yes, the creator of Flask) that lets you build websites from text files.

Why use it?

As described on its own site, it drew from CMSs, frameworks and static site generators, arriving at what I consider a sweet spot between them, which gives us the following advantages:

Installation

Installing Lektor is quite direct:

$ curl -sf https://www.getlektor.com/install.sh | sh

This command installs it system-wide; if you prefer to install it inside a virtualenv:

$ virtualenv venv
$ . venv/bin/activate
$ pip install Lektor

This approach is discouraged by the developers because Lektor manages virtualenvs internally to install its plugins. So, if you are a developer and want more control over Lektor, install the development version and be ready to get your hands dirty when needed, and maybe even contribute to Lektor's development:

$ git clone https://github.com/lektor/lektor
$ cd lektor
$ make build-js
$ virtualenv venv
$ . venv/bin/activate
$ pip install --editable .

Note: npm must be installed in order to build the admin interface.

Creating the Site

After installation, to create your site just use the project creation command:

$ lektor quickstart

It will ask you a few questions and create a project with the name you provided.

Structure

This is the basic structure of a site generated by Lektor:

meusite
├── assets/
├── content/
├── templates/
├── models/
└── meusite.lektorproject
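The models/ and content/ folders work in pairs: each content page names a model, and the model's ini file declares the fields that the admin interface will edit. As a minimal illustrative sketch (the "Page" model and its field names are hypothetical, not from the post), a model file might look like:

```ini
; models/page.ini — a minimal, illustrative Lektor model
[model]
name = Page
label = {{ this.title }}

[fields.title]
label = Title
type = string

[fields.body]
label = Body
type = markdown
```

A content file such as content/about/contents.lr would then fill those fields in Lektor's key: value format, with `---` lines separating one field from the next.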

Running Locally

To run the site on your machine, just enter the created directory and start the local server:

$ cd meusite
$ lektor server

With the server running, go to localhost:5000 to see the result:

meusite

Accessing the Admin

To access the admin, click the pencil icon in the top right corner of the page you created, or go to localhost:5000

meusite-admin

Publishing the Site

There are two ways to deploy a site built with Lektor. The manual way is basically to run the build command and copy the files to the server by hand:

$ lektor build --output-path destino

And the automatic way, which can be done (in this case for GitHub Pages) by adding the following configuration to the meusite.lektorproject file:

[servers.production]
target = ghpages://usuario/repositorio

And then running the command:

$ lektor deploy

Note: deploying does a force push to the master or gh-pages branch, depending on the repository type, so be careful not to overwrite your repository's data. Keep the source code in a separate branch; you can take a look at my repository to get an idea.

For more detailed information you can check the Lektor documentation, and also keep an eye out for the next posts.

30 Apr 2016 3:00pm GMT


Will McGugan: Capturing standard output in Python

Just landed in inthing is a new and quite interesting feature.

Version 0.1.4 adds a capture method which will record all standard output, i.e. anything you print to the terminal. It works as a context manager. Here's an example:

from inthing import Stream
stream = Stream.new()
with stream.capture() as capture:
    import this

capture.browse()

Any print statement inside the with block will be captured and posted online when the block exits.
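The underlying mechanism can be sketched with the standard library's `contextlib.redirect_stdout` (a minimal illustration of capturing output in a `with` block, not inthing's actual implementation):

```python
import io
from contextlib import redirect_stdout

# Swap sys.stdout for an in-memory buffer for the duration of the block.
buffer = io.StringIO()
with redirect_stdout(buffer):
    print("hello, captured world")

# After the block exits, stdout is restored and the buffer holds the text.
captured = buffer.getvalue()
print(repr(captured))  # → 'hello, captured world\n'
```

inthing presumably does something similar, then posts the captured text as an event instead of just returning it.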

You can also do something similar from the command line, with the inthing capture subcommand, which posts anything you pipe into it as an event.

Let's say you wanted to post the versions of all your installed Python packages online. You could do something like the following:

pip freeze | inthing capture -b

For more information see the Inthing docs.

Inthing is still technically in beta, but these features are quite solid. Please give them a try, and let me know how it goes!

30 Apr 2016 1:40pm GMT


Python Software Foundation: We Want You to Run for the 2016 Board of Directors

You don't have to be an expert, or a Python celebrity. If you care about Python and you want to nurture our community and guide our future, we invite you to join the Board.

Nominations are open for the Python Software Foundation's Board of Directors now through the end of May 15. Nominate yourself if you are able and inspired to help the PSF fulfill its mission:

"The mission of the Python Software Foundation is to promote, protect, and advance the Python programming language, and to support and facilitate the growth of a diverse and international community of Python programmers."


If you know someone who would be an excellent director, ask if they would like you to nominate them!

What is the job? Directors do the business of the PSF, including:


Read "Expectations of Directors" for details.

There are 11 directors, elected annually for a term of one year. Directors are unpaid volunteers. They need not be residents of the US.

The deadline for nominations is the end of May 15, Anywhere on Earth ("AoE"). As long as it is May 15 somewhere, nominations are open. A simple algorithm is this: make your nominations by 11:59pm on your local clock and you are certain to meet the deadline. Ballots to vote for the board members will be sent May 20, and the election closes May 30.
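That "simple algorithm" works because of how AoE is defined. As a sketch using the standard library (assuming the conventional definition of Anywhere on Earth as UTC-12):

```python
from datetime import datetime, timedelta, timezone

# "Anywhere on Earth" conventionally means UTC-12: the deadline passes
# only once May 15 has ended in that westernmost timezone.
AOE = timezone(timedelta(hours=-12))
deadline = datetime(2016, 5, 15, 23, 59, 59, tzinfo=AOE)

# Even the earliest timezone, UTC+14, finishes its May 15 before AoE does,
# so 11:59pm local time on May 15 is always safely before the deadline.
local_cutoff = datetime(2016, 5, 15, 23, 59, 59,
                        tzinfo=timezone(timedelta(hours=14)))
print(local_cutoff <= deadline)  # True
```

Since no timezone is west of UTC-12 or east of UTC+14, a local 11:59pm cutoff on May 15 is guaranteed to land inside the window.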

If you're moved to nominate yourself or someone else, here are the instructions:

How to nominate candidates in the 2016 PSF Board Election.

While you're on that page, check if your membership makes you eligible to actually vote in the election.

For more info, see the PSF home page and the PSF membership FAQ.

30 Apr 2016 10:05am GMT


EuroPython: EuroPython 2016: Extra Hot Topics - Call for Proposals

The Program work group is happy to announce that there will be an extra Call for Proposals early in June. This call is limited to hot topics and most recent developments in software and technology.


Why is there a second call?

Planning a big conference is a challenge: on one hand, people like to know what will be on our talk schedule so they can make up their minds and make travel arrangements early. On the other hand, technology is progressing at the speed of light these days.

So what's the solution? Attend anyway - EuroPython is always a great idea!

Seriously, we have given this some thought and decided to make another extra Call for Proposals just weeks before the conference.

This CfP is strictly reserved for

Some suggestions for topics:

This call will be open for nine days only:

Saturday June 4th 0:00 to Sunday June 12th 24:00 CEST.

The program work group will select the most exciting and intriguing submissions and will notify the winners on short notice.

(Photo reference: https://www.flickr.com/photos/nasacommons/22911992493/)

With gravitational regards,
-
EuroPython 2016 Team

30 Apr 2016 9:56am GMT


Bhishan Bhandari: Review : Automate the boring stuff with python

A while ago, I enrolled in one of the best video lectures at Udemy. I have recently completed the lectures and would like to give a brief review. The course is named Automate the Boring Stuff with Python. It is an excellent video lecture that Al Sweigart has brought up. It is good to […]

The post Review : Automate the boring stuff with python appeared first on The Tara Nights.

30 Apr 2016 4:34am GMT


Stein Magnus Jodal: March and April contributions

The following is a short summary of my open source work in March and April, much like in previous months, except that I haven't been able to spend as much time on open source these last two months.

Debian

Mopidy

30 Apr 2016 12:00am GMT


29 Apr 2016

Planet Python

Yasoob Khalid: Learning Python For Data Science

For those of you who wish to begin learning Python for Data Science, here is a list of various resources that will get you up and running. Included are things like online tutorials and short interactive courses, MOOCs, newsletters, books, useful tools and more. We decided to put this together so that you can begin learning Data Science with Python right off the bat, without having to spend hours surfing the web in search of resources. Please note that while we believe the list is comprehensive, it is by no means exhaustive. We have probably missed a couple of nice resources, so feel free to mention them in the comments if you are so inclined. :)

Tutorials

Intro to Python for Data Science by DataCamp: This free and interactive tutorial focuses on Python skills and tools specifically for Data Science use. Through this course you will learn the foundations of Python as well as the most essential data science tools. This is a very easy and friendly way to introduce yourself to the Python syntax and get off to a positive start.

Python Programming by Codecademy: While this course from Codecademy doesn't teach Python in the context of data science, it's still a fantastic resource. In this free course you can get exposure to more fundamentals of Python programming and maybe pick up some web development skills along the way. Most importantly, you will be getting practice with Python syntax.

A Byte of Python: This is a collection of very friendly tutorials on all the basics of Python that can help you get started and unstuck, especially with simple tasks that you need to do when working with Python.

http://www.learnpython.org/ : Learn Python is a platform where you will find a series of programming tutorials and in-browser exercises that go along with them. This could be useful as a set of examples of how to go about completing certain programming tasks, as well as exposure to some basic Python programming topics.

Code Mentor Python Tutorials: You will be able to find a number of nice tutorials varying in type and scope on Codementor. Some articles will teach you tricks or best practices when it comes to using Python, while others might be full case studies of Python projects and applications of data science to various domains.

Learn Data Science with Python - Dataquest
Dataquest teaches you data science interactively in your browser, using Python. They advertise themselves as a company which teaches you all the skills you need to be a well-rounded data scientist or data analyst. They helps you build your portfolio with projects after teaching you the theory. Dataquest members have been hired at companies like Fitbit and 3M. Dataquest offers beginner and intermediate content on Python for free. The rest of the content is available with a monthly subscription. I have not used them personally but this seems intriguing.

Intermediate Python for Data Science by DataCamp: In this sequel to the Intro to Python for Data Science you will carry on learning the key tools for plotting and visualization, working with data, basic Python programming, and a full hands on Case Study where you use all of your new skills in consortium. In addition you receive a certificate that you can share in your social and professional networks.

Massive Open Online Courses (MOOCs)

Introduction to Python for Data Science by Microsoft: In this course you start with the true basics including variables and arithmetic, and work yourself up to working with NumPy arrays and Pandas DataFrames. Gradually you begin to cover topics central to data science including visualization using Matplotlib, and control flow. This open course comprises of video tutorials, and what sets it apart are the interactive in browser exercises that you complete as you learn.

Python for Everybody by University of Michigan: This course focuses on the behind the scene part of data science, namely retrieval and processing of data, as well as some visualization. You begin by learning basic Python, and then how to work with data structures, access data from databases and the web.

Data Analysis and Interpretation Specialization: This series of courses focuses on the analysis of data implemented with Python, and the interpretation of results. Once you become acquainted with tools for analyzing data in Python, you may participate in a capstone course testing your skills through a DrivenData.org sponcored project.

Data Analysis with Python and Pandas on Udemy: After getting some basic Python knowledge you may want to explore a specific topic in more depth, and this course is a great way to do so. If you are brand new to Python it is advised not to take this course. Nonetheless, this is a great way to learn Pandas library in greater detail.

Data Analysis with Python and Matplotlib on Udemy: Similar to the course above, this is not for complete novices. Rather this course will allow you to delve deeper into the visualization tools on Python with the Matplotlib library.

Intro to Data Science on Udacity: This course is about Data Science, not Python. Python is used as a tool and a good amount of Python experience is necessary. After learning the basics of Python programming however this is a fantastic course to take and utilize and expand your Python abilities.

Resources and Newsletters

Bite Python: Bite Python is a great newsletter to be signed-up to especially if you are always on a look out for Python tips, tutorials, and essential news. As a novice you will find this relevant right of the bat.

PyData,org: PyData is a community of Python data tools users. They organize and host conferences dedicated to Python data tools. From their web page you can learn about the all of the essential Python libraries, technologies, and tools built specifically for data analysis and data science.

Python.org/doc/: This link will lead you to the main source of Python documentation. As a data scientist it is very important to learn how to take consult the while working on a project or task. When you need explanations for certain functions or operations this is the place to go.

http://planetpython.org/: This is a site dedicated to everything Python. On the site you will see numerous articles and news about Python, and frequently posts with data science and data analysis as topics.

Python Weekly: This is one of the most popular and essential newsletters dedicated to Python. While you might not find everything in the newsletter useful or relevant from the beginning, this is a good way of stepping into the community and being up-to-date with what's going on in the Python universe.

Pycoder's Weekly: This newsletter is somewhat more advanced, and is not entirely dedicated to Data Science. Still, you might stumble upon some topic that will be relevant to what you are doing with Python particularly if you are also interested in computer science and development.

DataScienceMaster.org: It becomes easier to take advatage of all of the open online courseware with this website. The site hosts extensive lists of resources for learning data science theory, as well as technologies including Python. If you are confident in yourself-studying abilities completing the curriculum can prepare you for a career in Data Science.

Books

In case you like to learn from books as well, here are a couple of good texts dedicated to learning Python as a tool for data analysis and data science. Most of these books are comprised of examples and exercises, and some are accompanied with actual data which wich you can get your fingers dirty while reading.

Intermediate Python: This online book is free to read and contains intermediate Python concepts which are usually not taught in beginner books. This is a must read if you have already finished beginner books.

Learn Python The Hard Way: This online book is free to read, and contain a ton of examples, exercises and demonstration that will get you started and move you along most of the Python programming topics you are likely to need.

Practical Data Analysis with Python: This book is all about data analysis with Python playing the role of the data analysis tool. The coolest thing about this product is that you may purchase the book with the data used throughout which allows you to reproduce the analysis done in the text.

Python for Data Analysis: As the title suggests this book is all about Python as a data analysis tool. The text covers many of the essential topics such as … . It's always nice to read but the best way to use this book is by consulting sections as needed.

Data Science from Scratch: First Principles with Python: There is a reason why this text has the phrase "from scratch" right in the title. The book is dedicated to teaching you HOW the data science techniques work in principle, using Python. You won't see much of NumPy or Pandas, rather you will see the Python code for the essential algorithms used in data science.

Work Space

While analyzing your data and organizing data science projects with Python, you will need a work space where you will be writing your code and executing your analysis. There are a couple of good options designed for Data Science specifically.

Rodeo IDE: A product developed by Yhat, Inc. This is a relatively new IDE but it deserves a mention because it has been designed specifically for data analysis projects, rather than for general programming purposes that Python is capable of. For those familiar with the RStudio IDE for R, Rodeo is a very similar tool for working with Python.

Anaconda: Anaconda is a platform developed by Continuum Analytics, who's founders and developers are creators and contributors to some of the most popular Python based data science tool. Through Anaconda you will be able to get a package consisting of Python with the essential data analysis libraries (NumPy, SciPy, Pandas..) , Jupyter notebooks, as well as a number of other tools for visualization and analysis.


29 Apr 2016 7:03pm GMT

Yasoob Khalid: Learning Python For Data Science

For those of you who wish to begin learning Python for Data Science, here is a list of resources that will get you up and running. Included are online tutorials, short interactive courses, MOOCs, newsletters, books, useful tools and more. We put this together so that you can begin learning Data Science with Python right off the bat, without having to spend hours surfing the web in search of resources. While we have tried to be comprehensive, the list is by no means exhaustive; we have probably missed a couple of good resources, so feel free to mention them in the comments if you are so inclined. :)

Tutorials

Intro to Python for Data Science by DataCamp: This free, interactive tutorial focuses on Python skills and tools specifically for Data Science. Through this course you will learn the foundations of Python as well as the essential data science tools. This is an easy and friendly way to introduce yourself to Python syntax and get off to a positive start.

Python Programming by Codecademy: While this course from Codecademy doesn't teach Python in the context of data science, it's still a fantastic resource. In this free course you get exposure to the fundamentals of Python programming and may pick up some web development skills along the way. Most importantly, you will be getting practice with Python syntax.

A Byte of Python: This is a collection of very friendly tutorials on all the basics of Python that can help you get started and unstuck, especially with the simple tasks you need to do when working with Python.

http://www.learnpython.org/ : Learn Python is a platform where you will find a series of programming tutorials and in-browser exercises that go along with them. It is useful both as a set of examples of how to complete certain programming tasks and as exposure to basic Python programming topics.

Code Mentor Python Tutorials: On Codementor you will find a number of nice tutorials varying in type and scope. Some articles teach tricks or best practices for using Python, while others are full case studies of Python projects and applications of data science to various domains.

Learn Data Science with Python by Dataquest: Dataquest teaches you data science interactively in your browser, using Python. They advertise themselves as a company that teaches you all the skills you need to be a well-rounded data scientist or data analyst, and they help you build your portfolio with projects after teaching you the theory. Dataquest members have been hired at companies like Fitbit and 3M. Dataquest offers beginner and intermediate content on Python for free; the rest is available with a monthly subscription. I have not used them personally, but this seems intriguing.

Intermediate Python for Data Science by DataCamp: In this sequel to Intro to Python for Data Science you carry on learning the key tools for plotting and visualization, working with data, and basic Python programming, finishing with a full hands-on case study where you use all of your new skills in concert. In addition, you receive a certificate that you can share in your social and professional networks.

Massive Open Online Courses (MOOCs)

Introduction to Python for Data Science by Microsoft: In this course you start with the true basics, including variables and arithmetic, and work your way up to NumPy arrays and Pandas DataFrames. Gradually you cover topics central to data science, including visualization using Matplotlib and control flow. This open course comprises video tutorials, and what sets it apart are the interactive in-browser exercises that you complete as you learn.
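
To give a concrete taste of the topics this course covers, here is a minimal NumPy sketch (the data values are invented for illustration):

```python
import numpy as np

# A NumPy array applies arithmetic to every element at once
# ("vectorization"), unlike a plain Python list:
temps_c = np.array([18.5, 21.0, 23.4, 19.9])
temps_f = temps_c * 9 / 5 + 32   # Celsius to Fahrenheit, element-wise
print(temps_f.mean())            # average of the converted values
```

Notice there is no explicit loop; the conversion and the aggregation both operate on the whole array.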

Python for Everybody by University of Michigan: This course focuses on the behind-the-scenes part of data science, namely the retrieval and processing of data, as well as some visualization. You begin by learning basic Python, then move on to working with data structures and accessing data from databases and the web.

Data Analysis and Interpretation Specialization: This series of courses focuses on the analysis of data with Python and the interpretation of results. Once you become acquainted with tools for analyzing data in Python, you may take a capstone course that tests your skills through a DrivenData.org sponsored project.

Data Analysis with Python and Pandas on Udemy: After gaining some basic Python knowledge you may want to explore a specific topic in more depth, and this course is a great way to do so. It is not advised if you are brand new to Python; nonetheless, it is a great way to learn the Pandas library in greater detail.

Data Analysis with Python and Matplotlib on Udemy: Similar to the course above, this is not for complete novices. Rather, this course will allow you to delve deeper into the visualization tools in Python with the Matplotlib library.

Intro to Data Science on Udacity: This course is about data science, not Python: Python is used as a tool, and a good amount of Python experience is necessary. After learning the basics of Python programming, however, this is a fantastic course for applying and expanding your Python abilities.

Resources and Newsletters

Bite Python: Bite Python is a great newsletter to sign up for, especially if you are always on the lookout for Python tips, tutorials, and essential news. As a novice you will find it relevant right off the bat.

PyData.org: PyData is a community of users of Python data tools. They organize and host conferences dedicated to these tools, and from their web page you can learn about all of the essential Python libraries, technologies, and tools built specifically for data analysis and data science.

Python.org/doc/: This link leads to the main source of Python documentation. As a data scientist it is very important to learn how to consult the documentation while working on a project or task; when you need explanations of certain functions or operations, this is the place to go.
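
That documentation is also available from inside the interpreter itself, which is handy when you are in the middle of an analysis:

```python
# Every built-in carries its documentation with it; help() prints
# essentially the same text you find in the official docs:
help(sorted)                             # signature plus full description
summary = sorted.__doc__.splitlines()[0]
print(summary)                           # just the one-line summary
```

The same trick works for any module, class, or function you import.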

http://planetpython.org/: This is a site dedicated to everything Python. On the site you will find numerous articles and news about Python, with frequent posts on data science and data analysis.

Python Weekly: This is one of the most popular and essential newsletters dedicated to Python. While you might not find everything in the newsletter useful or relevant from the beginning, this is a good way of stepping into the community and being up-to-date with what's going on in the Python universe.

Pycoder's Weekly: This newsletter is somewhat more advanced, and is not entirely dedicated to Data Science. Still, you might stumble upon some topic that will be relevant to what you are doing with Python particularly if you are also interested in computer science and development.

DataScienceMaster.org: This website makes it easier to take advantage of all the open online courseware. The site hosts extensive lists of resources for learning data science theory as well as technologies including Python. If you are confident in your self-studying abilities, completing the curriculum can prepare you for a career in Data Science.

Books

In case you like to learn from books as well, here are a couple of good texts dedicated to learning Python as a tool for data analysis and data science. Most of these books comprise examples and exercises, and some come with the actual data, with which you can get your hands dirty while reading.

Intermediate Python: This online book is free to read and contains intermediate Python concepts which are usually not taught in beginner books. This is a must read if you have already finished beginner books.

Learn Python The Hard Way: This online book is free to read and contains a ton of examples, exercises and demonstrations that will get you started and carry you through most of the Python programming topics you are likely to need.

Practical Data Analysis with Python: This book is all about data analysis with Python playing the role of the data analysis tool. The coolest thing about this product is that you may purchase the book with the data used throughout which allows you to reproduce the analysis done in the text.

Python for Data Analysis: As the title suggests, this book is all about Python as a data analysis tool. The text covers many of the essential topics such as … . It's a fine read on its own, but the best way to use this book is by consulting sections as needed.

Data Science from Scratch: First Principles with Python: There is a reason this text has the phrase "from scratch" right in the title. The book is dedicated to teaching you HOW data science techniques work in principle, using Python. You won't see much NumPy or Pandas; rather, you will see the Python code for the essential algorithms used in data science.
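
To illustrate what "from scratch" means in practice, here is the flavor of plain-Python code such a book favors. This least-squares fit is our own sketch, not an excerpt from the text:

```python
def mean(xs):
    return sum(xs) / len(xs)

def least_squares_fit(xs, ys):
    """Slope and intercept minimizing squared error, computed by hand
    rather than with a library call."""
    mx, my = mean(xs), mean(ys)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

print(least_squares_fit([1, 2, 3, 4], [2, 4, 6, 8]))  # → (2.0, 0.0)
```

Writing the algorithm out this way makes every assumption visible, which is exactly the book's pedagogical point.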

Work Space

While analyzing your data and organizing data science projects with Python, you will need a workspace where you write your code and execute your analysis. There are a couple of good options designed specifically for data science.

Rodeo IDE: A product developed by Yhat, Inc. This is a relatively new IDE, but it deserves a mention because it has been designed specifically for data analysis projects rather than for the general programming purposes Python is capable of. For those familiar with the RStudio IDE for R, Rodeo is a very similar tool for working with Python.

Anaconda: Anaconda is a platform developed by Continuum Analytics, whose founders and developers are creators of and contributors to some of the most popular Python-based data science tools. Through Anaconda you get a package consisting of Python with the essential data analysis libraries (NumPy, SciPy, Pandas, ...), Jupyter notebooks, and a number of other tools for visualization and analysis.
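
As a small taste of what those bundled libraries do, here is a Pandas sketch (the data is invented for illustration):

```python
import pandas as pd

# A DataFrame holds labelled columns and supports spreadsheet-style
# operations such as group-by aggregation:
df = pd.DataFrame({"city": ["Oslo", "Lima", "Oslo"],
                   "sales": [10, 20, 30]})
totals = df.groupby("city")["sales"].sum()
print(totals["Oslo"])   # → 40
```

Three lines replace what would be a manual loop-and-dictionary exercise in plain Python.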


29 Apr 2016 7:03pm GMT

Weekly Python StackOverflow Report: (xvii) stackoverflow python report

These are the ten highest-rated questions at Stack Overflow last week.
Between brackets: [question score / answers count]
Build date: 2016-04-29 17:15:38 GMT


  1. What does Python mean by printing "[...]" for an object reference? - [35/4]
  2. Pairwise circular Python 'for' loop - [27/14]
  3. In Python, when are two objects the same? - [24/2]
  4. Dictionary comprehension with lambda functions gives wrong results - [13/2]
  5. How do I identify sequences of values in a boolean array? - [12/3]
  6. How int() object using "==" operator without __eq__() method in python2? - [9/2]
  7. Why is "import" a statement but "reload" a function? - [9/1]
  8. Python Recursive Search of Dict with Nested Keys - [7/5]
  9. How to fetch a substring from text file in python? - [7/4]
  10. Using generator send() within a for loop - [7/3]
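
As a sample of what these threads contain, question 2 ("Pairwise circular Python 'for' loop") can be solved with itertools. This is one common approach, not necessarily the accepted answer:

```python
from itertools import cycle, islice

def pairwise_circular(seq):
    """Yield (a, b) neighbour pairs, wrapping around from last to first."""
    shifted = islice(cycle(seq), 1, len(seq) + 1)  # seq rotated by one
    return zip(seq, shifted)

print(list(pairwise_circular([1, 2, 3, 4])))
# → [(1, 2), (2, 3), (3, 4), (4, 1)]
```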

29 Apr 2016 5:16pm GMT


Continuum Analytics News: Open Data Science: Bringing “Magic” to Modern Analytics

Company Blog

Posted Friday, April 29, 2016

Michele Chambers

Chief Marketing Officer & VP Product

Science fiction author Arthur C. Clarke once wrote, "any sufficiently advanced technology is indistinguishable from magic."

We're nearer than ever to that incomprehensible, magical future. Our gadgets understand our speech, driverless cars have made their debut and we'll soon be viewing virtual worlds at home.

These "magical" technologies spring from a 21st-century spirit of innovation, and not only at big companies. Thanks to the Internet and to the open source movement, companies of all sizes are able to spur advancements in science and technology.

It's no different for advanced analytics. And it's about time.

In the past, our analytics tools were proprietary, product-oriented solutions. These were necessarily limited in flexibility and they locked customers into the slow innovation cycles and whims of vendors. These closed-source solutions forced a "one size fits all" approach to analytics with monolithic tools that did not offer easy customization for different needs.

Open Data Science has changed that. It offers innovative software, free of proprietary restrictions and tailorable for all varieties of data science teams, created through the transparent collaboration that is driving today's tech boom.

The Magic 8-Ball of Automated Modeling

One of Open Data Science's most visible points of innovation is in the sphere of data science modeling.

Initially, models were created exclusively by statisticians and analysts for business professionals. But demand from the business sector for software that could do this job gave rise to automatic model fitting, often called "black box" analytics, in which analysts let software algorithmically generate predictive models that fit the data.

Such a system creates models, but much like a magic 8-ball, it offers its users answers without business explanations. Mysteries are fun for toys, but no business will bet on them. Quite understandably, no marketing manager or product manager wants to approach the CEO with predictions, only to be stumped when asked how they were arrived at. As Clarke knew, it's not really magic creating the models; it's advanced technology, and it too operates under assumptions that might or might not make sense for the business.

App Starters Mean More Transparent Modeling

Today's business professionals want faster time-to-value and are dazzled by advanced technologies like automated model fitting, but they also want to understand exactly how and why they work.

That's why Continuum Analytics is hard at work on Open Data Science solutions including Anaconda App Starters, expected to debut later this year. App Starters are solution "templates" intended to be a 60-80 percent data science solution, giving businesses an easy starting point. They serve the same purpose as the "black box" (faster time-to-value), but they are not a "black box": they allow analysts to see exactly how a model was created and to tweak models as desired.

Because App Starters are based on Open Data Science, they don't include proprietary restrictions that keep business professionals or data scientists in the dark about the analytics pipeline, including the algorithms. They still provide the value of "automagically" creating models, but the details of how they do so are transparent and accessible to the team. With App Starters, business professionals will finally have confidence in the models they're using to formulate business strategies, while getting faster time-to-value from their growing data.

Over time App Starters will get more sophisticated and will include recommendations, much as Netflix offers up movie and TV show recommendations for your watching pleasure, learning to suggest the algorithms and visualizations that best fit the data. Unlike with "black boxes," the entire narrative of why recommendations are offered will be available, so the business analyst can learn from and gain confidence in them. The analyst can choose to use a recommendation, tweak it, use the template without recommendations, or try tuning the suggested models to find a perfect fit. This type of innovation will further the advancement of sophisticated data science solutions that realize more business value, while instilling confidence in the solution.

Casting Spells with Anaconda

Although App Starters are about to shake up automated modeling, businesses require melding new ideas with tried-and-true solutions. In business analytics, for instance, tools like Microsoft Excel are a staple of the field and being able to integrate them with newer "magic" is highly desirable.

Fortunately, interoperability is one of the keystones of the Open Data Science philosophy and Anaconda provides a way to bridge the reliable old world with the magical new one. With Anaconda, analysts who are comfortable using Excel have an entry point into the world of predictive analytics from the comfort of their spreadsheets. By using the same familiar interface, analysts can access powerful Python libraries to apply cutting-edge analytics to their data. Anaconda recognizes that business analysts want to improve-not disrupt-a proven workflow.

Because Anaconda leverages the Python ecosystem, analysts using Anaconda will achieve powerful results. They might apply a formula to an Excel sheet with a million data rows to predict repeat customers or they may create beautiful, informative visualizations to show how sales have shifted to a new demographic after the company's newest marketing campaign kicked off. With Anaconda, business analysts can continue using Excel as their main interface, while harnessing the newest "magic" available in the open source community.

Open Data Science for Wizards…and Apprentices

Open Data Science is an inclusive movement. Although open source languages like Python and R dominate data science and allow for the most advanced (and therefore "magical") analytics technology available, the community is open to all levels of expertise.

Anaconda is a great way for business analysts, for example, to embark on the road toward advanced analytics, while solutions like App Starters give advanced wizards the algorithmic visibility to alter and improve models as they see fit.

Open Data Science gives us the "sufficiently advanced technology" that Arthur C. Clarke described, and it puts the power of that magic in our hands.

29 Apr 2016 1:05pm GMT

Continuum Analytics News: Open Data Science: Bringing “Magic” to Modern Analytics

Company Blog

Posted Friday, April 29, 2016

Michele Chambers

Chief Marketing Officer & VP Product

Science fiction author Arthur C. Clarke once wrote, "any sufficiently advanced technology is indistinguishable from magic."

We're nearer than ever to that incomprehensible, magical future. Our gadgets understand our speech, driverless cars have made their debut and we'll soon be viewing virtual worlds at home.

These "magical" technologies spring from a 21st-century spirit of innovation-but not only from big companies. Thanks to the Internet-and to the open source movement-companies of all sizes are able to spur advancements in science and technology.

It's no different for advanced analytics. And it's about time.

In the past, our analytics tools were proprietary, product-oriented solutions. These were necessarily limited in flexibility and they locked customers into the slow innovation cycles and whims of vendors. These closed-source solutions forced a "one size fits all" approach to analytics with monolithic tools that did not offer easy customization for different needs.

Open Data Science has changed that. It offers innovative software-free of proprietary restrictions and tailorable for all varieties of data science teams-created in the transparent collaboration that is driving today's tech boom.

The Magic 8-Ball of Automated Modeling

One of Open Data Science's most visible points of innovation is in the sphere of data science modeling.

Initially, models were created exclusively by statisticians and analysts for business professionals, but demand from the business sector for software that could do this job gave rise to automatic model fitting-often called "black box" analytics-in which analysts let software algorithmically generate models that fit data and create predictive models.

Such a system creates models, but much like a magic 8-ball, it offers its users answers without business explanations. Mysteries are fun for toys, but no business will bet on them. Quite understandably, no marketing manager or product manager wants to approach the CEO with predictions, only to be stumped when he asks how the manager arrived at them. As Clarke knew, it's not really magic creating the models, it's advanced technology and it too operates under assumptions that might or might not make sense for the business.

App Starters Means More Transparent Modeling

Today's business professionals want faster time-to-value and are dazzled by advanced technologies like automated model fitting, but they also want to understand exactly how and why the work.

That's why Continuum Analytics is hard at work on Open Data Science solutions including Anaconda App Starters, expected to debut later this year. App Starters are solution "templates" aimed to be a 60-80 percent data science solution that make it easy for businesses to have a starting point. App Starters serve the same purpose as the "black box"-faster time-to-value- but are not a "black box" in that it allows analysts to see exactly how the model was created and to tweak models as desired.

Because the App Starters are are based on Open Data Science, they don't include proprietary restrictions that keep business professionals or data scientists in the dark regarding the analytics pipeline including the algorithms. It still provides the value of "automagically" creating models, but the details of how it does so are transparent and accessible to the team. With App Starters, business professionals will finally have confidence in the models they're using to formulate business strategies, while getting faster time-to-value from their growing data.

Over time App Starters will get more sophisticated and will include recommendations-just like how Netflix offers up movie and tv show recommendations for your watching pleasure-that will learn and suggest algorithms and visualizations that best fit the data. Unlike "black boxes" the entire narrative as to why recommendations are offered will be available for the business analyst to learn and gain confidence in the recommendations. However, the business analyst can choose to use the recommendation, tweak the recommendation, use the template without recommendations or they could try tuning the suggested models to find a perfect fit. This type of innovation will further the advancement of sophisticated data science solutions that realize more business value, while instilling confidence in the solution.

Casting Spells with Anaconda

Although App Starters are about to shake up automated modeling, businesses require melding new ideas with tried-and-true solutions. In business analytics, for instance, tools like Microsoft Excel are a staple of the field and being able to integrate them with newer "magic" is highly desirable.

Fortunately, interoperability is one of the keystones of the Open Data Science philosophy and Anaconda provides a way to bridge the reliable old world with the magical new one. With Anaconda, analysts who are comfortable using Excel have an entry point into the world of predictive analytics from the comfort of their spreadsheets. By using the same familiar interface, analysts can access powerful Python libraries to apply cutting-edge analytics to their data. Anaconda recognizes that business analysts want to improve-not disrupt-a proven workflow.

Because Anaconda leverages the Python ecosystem, analysts using Anaconda will achieve powerful results. They might apply a formula to an Excel sheet with a million data rows to predict repeat customers or they may create beautiful, informative visualizations to show how sales have shifted to a new demographic after the company's newest marketing campaign kicked off. With Anaconda, business analysts can continue using Excel as their main interface, while harnessing the newest "magic" available in the open source community.

Open Data Science for Wizards…and Apprentices

Open Data Science is an inclusive movement. Although open source languages like Python and R dominate data science and allow for the most advanced-and therefore "magical"-analytics technology available, the community is open to all levels of expertise.

Anaconda is a great way for business analysts, for example, to embark on the road toward advanced analytics, while solutions like App Starters give advanced wizards the algorithmic visibility to alter and improve models as they see fit.

Open Data Science gives us the "sufficiently advanced technology" that Arthur C. Clarke mentioned-but it puts the power of that magic in our hands.

29 Apr 2016 1:05pm GMT

Nikola: Nikola v7.7.8 is out!

On behalf of the Nikola team, I am pleased to announce the immediate availability of Nikola v7.7.8. It fixes some bugs and adds (minor) new features.

What is Nikola?

Nikola is a static site and blog generator, written in Python. It can use Mako and Jinja2 templates, and input in many popular markup formats, such as reStructuredText and Markdown - and can even turn Jupyter (IPython) Notebooks into blog posts! It also supports image galleries, and is multilingual. Nikola is flexible, and page builds are extremely fast, courtesy of doit (which rebuilds only what has changed).

Find out more at the website: https://getnikola.com/

Downloads

Install using pip install Nikola or download tarballs on GitHub and PyPI.

Changes

Features

  • Template-based shortcodes now receive positional arguments too (Issue #2319)

Bugfixes

  • Use state files in nikola github_deploy and nikola status (Issue #2317)
  • Add align options for youtube, vimeo, soundcloud reST directives (Issue #2304)
  • Update FILE_METADATA_REGEXP example in docs (Issue #2296)
  • Show "tags too similar" error instead of cryptic doit crash (Issue #2325)
  • Fix crashes when tag appears multiple times in a post (Issue #2315)
  • Use binary I/O for .svg files in galleries
  • Accept .svgz extension by default
  • Don't randomly load plugins when Nikola is called with no arguments (Issue #2297)

29 Apr 2016 12:22pm GMT

hypothesis.works articles: Testing performance optimizations

Once you've flushed out the basic crashing bugs in your code, you're going to want to look for more interesting things to test.

The next easiest thing to test is code where you know what the right answer is for every input.

Obviously in theory you think you know what the right answer is - you can just run the code. That's not very helpful though, as that's the answer you're trying to verify.

But sometimes there is more than one way to get the right answer, and you choose the one you run in production not because it gives a different answer but because it gives the same answer faster.
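The idea can be sketched even without Hypothesis itself: generate random inputs and check that the fast implementation always agrees with an obviously-correct reference one. (The function names below are illustrative, not from the post; Hypothesis would generate the inputs far more cleverly.)

```python
import random

def slow_max(xs):
    # Obviously correct reference implementation: scan every element.
    best = xs[0]
    for x in xs:
        if x > best:
            best = x
    return best

def fast_max(xs):
    # The "optimized" version we actually want to ship.
    return max(xs)

# Property: on every input, the fast version matches the slow one.
for _ in range(200):
    xs = [random.randint(-1000, 1000) for _ in range(random.randint(1, 50))]
    assert fast_max(xs) == slow_max(xs)
```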

Read more...

29 Apr 2016 11:00am GMT

Vasudev Ram: Exploring sizes of data types in Python

By Vasudev Ram

I was doing some experiments in Python to see how many values of various data types could fit into the memory of my machine - things like creating successively larger lists of integers (ints), to see at what point it ran out of memory.

At one point, I got a MemoryError while trying to create a list of ints that I thought should fit into memory. Sample code:

>>> lis = range(10 ** 9)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
MemoryError

After thinking a bit, I realized that the error was to be expected, since data types in dynamic languages such as Python tend to take more space than they do in static languages such as C, due to metadata, pre-allocation (for some types) and interpreter book-keeping overhead.
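As an aside not from the post: the MemoryError can be sidestepped entirely by iterating lazily instead of materializing the list. In Python 2 that means xrange; in Python 3, range itself is lazy:

```python
import sys

# A Python 3 range object is lazy: it stores just start/stop/step,
# not a billion int objects, so it stays a constant few dozen bytes.
big = range(10 ** 9)
assert sys.getsizeof(big) < 100   # tiny, while list(range(10**9)) would not fit

# Iterating streams values one at a time instead of building a list.
assert sum(range(10)) == 45
```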

And I remembered the sys.getsizeof() function, which shows the number of bytes used by its argument. So I wrote this code to display the types and sizes of some commonly used types in Python:

from __future__ import print_function
import sys

# data_type_sizes_w_list_comp.py
# A program to show the sizes in bytes, of values of various
# Python data types.

# Author: Vasudev Ram
# Copyright 2016 Vasudev Ram - https://vasudevram.github.io

#class Foo:
class Foo(object):
    pass

def gen_func():
    yield 1

def setup_data():
    a_bool = bool(0)
    an_int = 0
    a_long = long(0)
    a_float = float(0)
    a_complex = complex(0, 0)
    a_str = ''
    a_tuple = ()
    a_list = []
    a_dict = {}
    a_set = set()
    an_iterator = iter([1, 2, 3])
    a_function = gen_func
    a_generator = gen_func()
    an_instance = Foo()

    data = (a_bool, an_int, a_long, a_float, a_complex,
            a_str, a_tuple, a_list, a_dict, a_set,
            an_iterator, a_function, a_generator, an_instance)
    return data

data = setup_data()

print("\nPython data type sizes:\n")

header = "{} {} {}".format(
    "Data".center(10), "Type".center(15), "Length".center(10))
print(header)
print('-' * 40)

rows = ["{} {} {}".format(
    repr(item).center(10), str(type(item)).center(15),
    str(sys.getsizeof(item)).center(10)) for item in data[:-4]]
print('\n'.join(rows))
print('-' * 70)

rows = ["{} {} {}".format(
    repr(item).center(10), str(type(item)).center(15),
    str(sys.getsizeof(item)).center(10)) for item in data[-4:]]
print('\n'.join(rows))
print('-' * 70)

(I broke out the last 4 objects above into a separate section/table, since the output for them is wider than for the ones above them.)

Although iterators, functions, generators and instances (of classes) are not traditionally considered as data types, I included them as well, since they are all objects (see: almost everything in Python is an object), so they are data in a sense too, at least in the sense that programs can manipulate them. And while one is not likely to create tens of thousands or more of objects of these types (except maybe class instances [1]), it's interesting to have an idea of how much space instances of them take in memory.

[1] As an aside, if you have to create thousands of class instances, the flyweight design pattern might be of help.

Here is the output of running the program:

$ python data_type_sizes_w_list_comp.py

Python data type sizes:
----------------------------------------
Data Type Length
----------------------------------------
False <type 'bool'> 12
0 <type 'int'> 12
0L <type 'long'> 12
0.0 <type 'float'> 16
0j <type 'complex'> 24
'' <type 'str'> 21
() <type 'tuple'> 28
[] <type 'list'> 36
{} <type 'dict'> 140
set([]) <type 'set'> 116
----------------------------------------------------------------------

----------------------------------------------------------------------
<listiterator object at 0x021F0FF0> <type 'listiterator'> 32
<function gen_func at 0x021EBF30> <type 'function'> 60
<generator object gen_func at 0x021F6C60> <type 'generator'> 40
<__main__.Foo object at 0x022E6290> <class '__main__.Foo'> 32
----------------------------------------------------------------------


[ When I used the old-style Python class definition for Foo (see the comment near the class keyword in the code), the output for an_instance was this instead:
<__main__.Foo instance at 0x021F6C88> <type 'instance'> 36
So old-style class instances actually take 36 bytes vs. new-style ones taking 32.
]

We can draw a few deductions from the above output.

- bool is a subclass of the int type, so takes the same space - 12 bytes.
- float takes a bit more space than long.
- complex takes even more.
- strings and the data types below them in the first table have a fair amount of overhead.
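One caveat worth adding (my note, not the author's): sys.getsizeof is shallow. For containers, it counts the container's own bookkeeping and pointer array, not the objects those pointers refer to:

```python
import sys

# getsizeof is shallow: a list's size covers its internal pointer array,
# not the elements the pointers refer to.
small = [0] * 1000                 # 1000 references to a tiny int
big = ['x' * 10000] * 1000         # 1000 references to a 10 KB string

# Same length, same shallow size, wildly different true memory footprint.
assert sys.getsizeof(small) == sys.getsizeof(big)
```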

Finally, I first wrote the program with two for loops, then changed (and slightly shortened) it by using the two list comprehensions that you see above - hence the file name data_type_sizes_w_list_comp.py :)

- Enjoy.

- Vasudev Ram - Online Python training and consulting Signup to hear about my new courses and products. My Python posts Subscribe to my blog by email My ActiveState recipes

Share |
Vasudev Ram

29 Apr 2016 12:49am GMT

28 Apr 2016

feedPlanet Python

Audrey Roy Greenfeld: Lazy Evaluation and SQL Queries in the Django Shell

In Django terms, a QuerySet is an iterable of database records. What's nice about them is that they are evaluated only when you're ready for the results.

This means that even if it takes you a few lines of code to chain multiple queries, the Django ORM combines them into a single query. Fewer queries mean your database doesn't have to work as hard, and your website runs faster.
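The same laziness can be seen with plain Python generators, which is conceptually what a QuerySet builds on. Here is a standalone sketch (not Django code) where a fake "database hit" records when it actually runs:

```python
def fetch(label, items):
    # Stand-in for a database hit; records that it actually ran.
    calls.append(label)
    for item in items:
        yield item

calls = []
rows = fetch('promos', ['Free Cone Day', '10% Off All Cones'])
filtered = (r for r in rows if 'Free' in r)   # chained, still not executed

assert calls == []            # building the pipeline ran no "query"
results = list(filtered)      # evaluation happens here, once
assert calls == ['promos']
assert results == ['Free Cone Day']
```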


Evaluating a QuerySet Repeatedly

Imagine that we work for Häagen-Dazs and have access to their Django shell. We can use this to our advantage by hunting for free ice cream promotions.

Here, we get the active Promo objects. We evaluate the results just to see what promos are available. Then we filter them on the word free.

>>> results = Promo.objects.active()

>>> results
[<Promo: Free Flavors on Your Birthday>, <Promo: 10% Off All Cones>,
<Promo: Buy 1, Get 1 Free>]

>>> results = results.filter(
...     Q(name__istartswith='free') |
...     Q(description__icontains='free')
... )

>>> results
[<Promo: Free Flavors on Your Birthday>]


The queries generated by the above are:

from django.db import connection

>>> connection.queries
[ {'sql': 'SELECT "flavors_promo"."id", "flavors_promo"."name",
"flavors_promo"."description", "flavors_promo"."status" FROM
"flavors_promo" WHERE "flavors_promo"."status" = \'active\'
LIMIT 21',
'time': '0.000'},
{'sql': 'SELECT "flavors_promo"."id", "flavors_promo"."name",
"flavors_promo"."description", "flavors_promo"."status" FROM
"flavors_promo" WHERE ("flavors_promo"."status" = \'active\'
AND ("flavors_promo"."name" LIKE \'free%\' ESCAPE \'\\\' OR
"flavors_promo"."description" LIKE \'%free%\' ESCAPE \'\\\'))
LIMIT 21',
'time': '0.001'}]


There are 2 queries because we evaluated the results twice.

The first query was from the first time we retrieved all the active promos. It's pretty short. It just selects Promo records where promo.status is active.

The second query was from the second time we evaluated results, after we filtered for "free" in the promo names and descriptions.

As a side note, there is a bit of extra work in the second query as the second query still has that WHERE 'flavors_promo'.'status' = 'active' part. One might expect filter() to simply filter on the already-retrieved results rather than hitting the database again. But that's alright because the extra time is negligible.

Before we move on, let's reset the list of queries:

>>> from django.db import reset_queries
>>> reset_queries()

Evaluating a QuerySet Once

Now, let's look at what the queries would be if we only evaluated the results QuerySet once. Let's try building the same QuerySet again. Oh wait, just for fun, let's chain another operation so that we can be really sure that lazy evaluation is happening.

>>> results = Promo.objects.active()

>>> results = results.filter(
...     Q(name__istartswith='free') |
...     Q(description__icontains='free')
... )

>>> results = results.exclude(status='melted')

>>> results
[<Promo: Free Flavors on Your Birthday>]


As you can see, there's only one query:

>>> connection.queries
[{'sql': 'SELECT "flavors_promo"."id", "flavors_promo"."name",
"flavors_promo"."description", "flavors_promo"."status" FROM
"flavors_promo" WHERE ("flavors_promo"."status" = \'active\' AND
("flavors_promo"."name" LIKE \'free%\' ESCAPE \'\\\' OR
"flavors_promo"."description" LIKE \'%free%\' ESCAPE \'\\\') AND
NOT ("flavors_promo"."status" = \'melted\')) LIMIT 21',
'time': '0.001'}]


Thanks to lazy evaluation, only one query was constructed, despite chaining multiple operations. That was nice.

Sure, the query could have been more optimal without the AND NOT melted part, but arguably that wasn't Django's fault, it was mine. But it gives me a clue about which operation I didn't need to chain in the Python code.

Next Steps

Try this on one of your projects. Open the Django shell, then try out some queries and see how they are evaluated. In particular, look at queries from one of your slower views.

You can also do similar things with Django Debug Toolbar. However, in the shell you can dissect your Python code line by line, which can be very helpful.

28 Apr 2016 11:53pm GMT

PyCon: Open Spaces — plan a day ahead this year at PyCon 2016!

What's so awesome about PyCon's Open Spaces?

Open Spaces are spontaneous, grassroots, and attendee focused. While most of the conference is scheduled months ahead of time, Open Spaces are created on-site by the participants themselves! They offer groups the ability to self-gather, self-define, and self-organize in a way that often doesn't happen anywhere else at PyCon.

Open Spaces are little one-hour meetups during the three main conference days, held in free meeting rooms that PyCon provides at the convention center. Some people reserve spaces to talk about a favorite technology, whether web frameworks, neural nets, or natural language processing. Academics and scientists plan spaces around topics like astronomy, data science, and weather forecasting. Other attendees schedule actual activities during open spaces like yoga, nail painting, and board games!

Any topic that two or more attendees are interested in, or an activity that more than two people would like to do, is a great candidate for an open space. You can find a list of sample ideas a few pages down in the Open Spaces guide on our web site:

https://us.pycon.org/2016/events/open-spaces/

If you have additional ideas, please email us at pycon-openspaces@python.org and we can add them to the list.

For 2016, an extra day to plan each Open Space!

This year we are doing things a little differently. Instead of the sign-up board for each conference day only making its first appearance that morning, we are going to go ahead and make each day's board available the previous day as well. This means that each day will feature two sign-up boards, which will be placed close to the registration area: one for the current day, and one for the following day.

This will give Open Space hosts and their attendees the ability to plan further ahead. Hosts will be able to reserve a slot one day in advance - creating a longer window for them to advertise the space and let other interested attendees know. And attendees will be able to go ahead and start planning which Open Spaces they want to attend the next day.

In fact, the very first Open Spaces board will be up on Sunday evening during the Opening Reception, the evening before the main conference even starts! This will give hosts a chance to go ahead and reserve a slot for the first day of the conference while it is still the night before.

Promote Your Open Space

We are introducing the hashtag #PyConOpenSpace this year. We encourage you to use it as you promote your Open Space and let people know about it. It's also a great idea to add your Twitter handle to the card that you pin on the Open Space schedule board, in case anyone interested in attending your open space has a question or wants to contact you about it.

If you're unsure about whether people like your open space idea or whether they would attend, we encourage you to use the new Twitter polls function and mark your tweet with the hashtag #PyConOpenSpace so those interested in Open Spaces can vote on topic ideas.

The committee is looking forward to all of the great Open Spaces that are awaiting us at PyCon US 2016!

28 Apr 2016 10:36pm GMT

Import Python: ImportPython Issue 70


Worthy Read


Create a new Python app, free with Azure App Service. Curator's Note - Select Python from the language dropdown. You have templates for django, flask, bottle. Check it out.
Sponsor

core python
Python is a little unusual regarding sorted collection types as compared with other programming languages. Three of the top five programming languages in the TIOBE Index include sorted list, sorted dict or sorted set data types. But neither Python nor C includes these. For a language heralded as "batteries included," that's a little strange.
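The standard library's closest substitute is the bisect module, which keeps a plain list in sorted order (a sketch of the behavior a dedicated sorted-list type would optimize):

```python
import bisect

# Maintain a plain list in sorted order with bisect.insort.
items = []
for value in [5, 1, 4, 2, 3]:
    bisect.insort(items, value)   # O(n) insertion, but order is maintained

assert items == [1, 2, 3, 4, 5]
# Binary search gives O(log n) position lookups on the sorted list.
assert bisect.bisect_left(items, 4) == 3
```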

django
A very handy and to-the-point checklist. If you are a Django developer, do read it.

Django Rest Framework
The Django REST Framework (DRF for short) allows Django developers to build simple yet robust standards-based REST APIs for their applications. Even seemingly simple, straightforward usage of the Django REST Framework and its nested serializers can kill the performance of your API endpoints. At its root, the problem is called the "N+1 selects problem": the database is queried once for data in a table (say, Customers), and then one or more times per customer inside a loop to get, say, customer.country.name. Using the Django ORM, this mistake is easy to make. Using DRF, it is hard not to make.
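The pattern is easy to see in miniature. Here is a language-level sketch of N+1 versus a single joined query (illustrative only, not DRF code; the SQL strings are placeholders):

```python
queries = []

def run(sql):
    # Stand-in for a database round trip; just records that one happened.
    queries.append(sql)

customers = [{'id': i, 'country_id': i % 3} for i in range(6)]

# N+1: one query for the list, then one more per row for the relation.
run('SELECT * FROM customers')
for c in customers:
    run('SELECT name FROM countries WHERE id = {}'.format(c['country_id']))
assert len(queries) == 1 + len(customers)   # 7 round trips

# What select_related (a SQL JOIN) achieves: a constant number of queries.
del queries[:]
run('SELECT * FROM customers JOIN countries ON ...')
assert len(queries) == 1
```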

core python
The select module provides access to platform-specific I/O monitoring functions. The most portable interface is the POSIX function select(), which is available on UNIX and Windows. The module also includes poll(), a UNIX-only API, and several options that only work with specific variants of UNIX.
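A minimal, self-contained use of the portable select() call, waiting (with a timeout) until a socket has data. The socket pair here is just scaffolding for the demo:

```python
import select
import socket

# Create a connected pair of sockets and make one end readable.
a, b = socket.socketpair()
b.send(b'ping')

# Block (up to 1 second) until `a` is ready to read.
readable, writable, errored = select.select([a], [], [], 1.0)
assert a in readable
assert a.recv(4) == b'ping'
a.close()
b.close()
```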

podcast
As Python developers we have all used pip to install the different libraries and projects that we need for our work, but have you ever wondered about who works on pip and how the package archive we all know and love is maintained? In this episode we interviewed Donald Stufft who is the primary maintainer of pip and the Python Package Index about how he got involved with the projects, what kind of work is involved, and what is on the roadmap. Give it a listen and then give him a big thank you for all of his hard work!

The main goal of this framework is to let you edit any workflow item on the fly. This means all elements in the workflow (states, transitions, user authorizations/permissions, group authorizations) are editable. To achieve this, all data about the workflow item is persisted to the DB, so it can be changed without touching the code or re-deploying your application.


Syncano. Database. Backend. Middleware. Real-time. Support. Start for free!
Sponsor

When people think computational geometry, in my experience, they typically think one of two things: Wow, that sounds complicated. Oh yeah, convex hull. In this post, I'd like to shed some light on computational geometry, starting with a brief overview of the subject before moving into some practical advice based on my own experiences

core python
This is part 3 in the series of articles on multiple dispatch. Part 1 introduced the problem and discussed the issues surrounding it, along with a couple of possible solutions in C++. Part 2 revisited multiple dispatch in Python, implementing a couple of variations. In this part, I'm going back to the roots of multiple dispatch - Common Lisp - one of the first mainstream programming languages to introduce multi-methods and an OOP system based on multiple dispatch at its core.

interview
Margaret Myrick is a program manager at Indeed, and a musician on the side. She has lived in Texas since she was a child.

Master Yoda, I am. Left Degobah I have, and come to messenger as a bot. Message me. Reply in my own style, I would. Curator Note - Pretty funny check it out.

core python
prompt_toolkit is a library for building powerful interactive command lines and terminal applications in Python.

core python
Hey guys, Recently, I informed my manager that I was willing to go above-and-beyond and help improve some of our team operations by writing a handful of Python scripts. He is a stickler for following the rules, so he ordered me to ask permission to install Python on my development machine (I wasn't intending on asking permission). My request has been refused. Despite providing evidence from the Python Foundation on their open licensing (especially for my purposes, which is just local machine -- not production), they are still refusing on account that Python is type of GPL and it is interpretative. Have you guys run into something like this before? It seems ridiculous to me.

video
If the title doesn't make any sense, then there's no hope that the description will be any better. This talk will be a strange dive into interpreter hacks, the pointlessness of the Python 2 vs 3 debate, and the twisted artistic drive that pushes the speaker to come up with these perversions of the Python language. Prepared to be simultaneously repulsed, intrigued, and completely bored.

Chrome stores its data locally in an SQLite database. So all we need to do here is write a consistent Python code that would make a connection to the database, query the necessary fields and extract the required data, which is the URLs visited and the corresponding total visit counts, and churn it out like a puppy.


Learn how to write code to automatically extract and analyze data from the web and social media. Join students from around the world from law enforcement, journalism, information security and more.
Sponsor


New Books

Learn to perform forensic analysis and investigations with the help of Python, and gain an advanced understanding of the various Python libraries and frameworks. Analyze Python scripts to extract metadata and investigate forensic artifacts. The writers, Dr. Michael Spreitzenbarth and Dr. Johann Uhrmann, have used their experience to craft this hands-on guide to using Python for forensic analysis and investigations

Jobs

Mumbai


San Antonio, TX, United States




Upcoming Conference / User Group Meet





Projects

gym - 635 Stars, 71 Fork
A toolkit for developing and comparing reinforcement learning algorithms.

rllab - 86 Stars, 12 Fork
rllab is a framework for developing and evaluating reinforcement learning algorithms.

otek - 45 Stars, 2 Fork
An unopinionated project builder for everyone.

LowRankPropagation - 22 Stars, 3 Fork
Propagation Technique for Solving Low Rank Matrix Completion

detux - 20 Stars, 3 Fork
The Multiplatform Linux Sandbox

doorman - 15 Stars, 1 Fork
an osquery fleet manager

ballade - 14 Stars, 1 Fork
Ballade is a light weight http proxy based on tornado and an upstream proxy switcher using SwitchyOmega rules

LearnProgrammingBot - 14 Stars, 1 Fork
Bot for /r/learnprogramming using supervised learning

falcon-api - 11 Stars, 0 Fork
Web APIs for Falcon.

flatdoc - 4 Stars, 0 Fork
Flat documentation generator

slactorbot - 4 Stars, 0 Fork
A Slack bot with hot patch!

elastic-bill - 3 Stars, 1 Fork
Elastic bill is a multi cloud platform billing management tool.

28 Apr 2016 1:40pm GMT

Import Python: ImportPython Issue 70


Worthy Read


Create a new Python app, free with Azure App Service. Curator's Note - Select Python from the language dropdown. There are templates for Django, Flask, and Bottle. Check it out.
Sponsor

core python
Python is a little unusual regarding sorted collection types compared with other programming languages. Three of the top five programming languages in the TIOBE Index include sorted list, sorted dict, or sorted set data types. But neither Python nor C includes these. For a language heralded as "batteries included", that's a little strange.
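As a stopgap, the standard library's bisect module can keep a plain list in sorted order; a minimal sketch (third-party packages such as sortedcontainers provide full sorted types):

```python
import bisect

# Maintain a sorted list by inserting each item at its sorted position.
items = []
for value in [5, 1, 4, 2, 3]:
    bisect.insort(items, value)  # O(n) insert, but the list stays sorted

print(items)  # [1, 2, 3, 4, 5]

# Binary search for membership in O(log n).
i = bisect.bisect_left(items, 3)
print(i < len(items) and items[i] == 3)  # True
```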

django
A very handy and to-the-point checklist. If you are a Django developer, do read it.

Django Rest Framework
The Django REST Framework (DRF for short) allows Django developers to build simple yet robust, standards-based REST APIs for their applications. Yet even seemingly simple, straightforward usage of DRF and its nested serializers can kill the performance of your API endpoints. At its root, the problem is the "N+1 selects problem": the database is queried once for the data in a table (say, Customers), and then queried again, one or more times per customer inside a loop, to get, say, customer.country.Name. Using the Django ORM, this mistake is easy to make. Using DRF, it is hard not to make.
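To see the shape of the problem outside Django, here is a sketch in plain sqlite3 with hypothetical customer and country tables: the loop issues one query per row, while a single JOIN, which is what Django's select_related() generates, does the same work in one round trip:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE country  (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT,
                           country_id INTEGER REFERENCES country(id));
    INSERT INTO country  VALUES (1, 'India'), (2, 'Brazil');
    INSERT INTO customer VALUES (1, 'Asha', 1), (2, 'Luiz', 2), (3, 'Ravi', 1);
""")

# N+1 shape: one query for the customers, then one more query per customer.
n_plus_1 = []
for _, cname, country_id in conn.execute(
        "SELECT id, name, country_id FROM customer ORDER BY id"):
    country = conn.execute(
        "SELECT name FROM country WHERE id = ?", (country_id,)).fetchone()[0]
    n_plus_1.append((cname, country))

# Single-query shape: one JOIN fetches everything at once.
joined = conn.execute("""
    SELECT customer.name, country.name
    FROM customer JOIN country ON customer.country_id = country.id
    ORDER BY customer.id
""").fetchall()

assert n_plus_1 == joined  # same result, 1 query instead of N+1
```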

core python
The select module provides access to platform-specific I/O monitoring functions. The most portable interface is the POSIX function select(), which is available on UNIX and Windows. The module also includes poll(), a UNIX-only API, and several options that only work with specific variants of UNIX.
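A small sketch of the portable interface, using a socket pair for the file descriptors (on Windows, select() only accepts sockets):

```python
import select
import socket

# A connected pair of sockets stands in for any two file descriptors.
a, b = socket.socketpair()

# Nothing to read yet, so select() with a zero timeout returns immediately.
readable, _, _ = select.select([b], [], [], 0)
assert readable == []

a.sendall(b"ping")

# Now b has data waiting; select() reports it without blocking.
readable, _, _ = select.select([b], [], [], 1.0)
assert readable == [b]

data = b.recv(4)
print(data)  # b'ping'
a.close()
b.close()
```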

podcast
As Python developers we have all used pip to install the different libraries and projects that we need for our work, but have you ever wondered about who works on pip and how the package archive we all know and love is maintained? In this episode we interviewed Donald Stufft who is the primary maintainer of pip and the Python Package Index about how he got involved with the projects, what kind of work is involved, and what is on the roadmap. Give it a listen and then give him a big thank you for all of his hard work!

The main goal of this framework is to make any workflow item editable on the fly. This means all workflow elements, such as states, transitions, user authorizations (permissions), and group authorizations, are editable. To achieve this, all data about the workflow item is persisted in the database, so it can be changed without touching the code or re-deploying your application.


Syncano. Database. Backend. Middleware. Real-time. Support. Start for free!
Sponsor

When people think computational geometry, in my experience, they typically think one of two things: "Wow, that sounds complicated," or "Oh yeah, convex hull." In this post, I'd like to shed some light on computational geometry, starting with a brief overview of the subject before moving into some practical advice based on my own experiences.

core python
This is part 3 in the series of articles on multiple dispatch. Part 1 introduced the problem and discussed the issues surrounding it, along with a couple of possible solutions in C++. Part 2 revisited multiple dispatch in Python, implementing a couple of variations. In this part, I'm going back to the roots of multiple dispatch - Common Lisp - one of the first mainstream programming languages to introduce multi-methods and an OOP system based on multiple dispatch at its core.
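For a taste of the idea in Python itself, here is a toy multimethod decorator (not the article's implementation; it matches argument types exactly, with no inheritance lookup): each implementation is registered under the tuple of types it handles, and calls dispatch on the types of all arguments rather than just the receiver:

```python
registry = {}

def multimethod(*types):
    """Decorator: register an implementation keyed by its argument types."""
    def register(fn):
        table = registry.setdefault(fn.__name__, {})
        table[types] = fn
        def dispatch(*args):
            impl = table.get(tuple(type(a) for a in args))
            if impl is None:
                raise TypeError("no multimethod for %r" % (args,))
            return impl(*args)
        return dispatch
    return register

class Asteroid: pass
class Ship: pass

@multimethod(Asteroid, Ship)
def collide(a, s):
    return "asteroid hits ship"

@multimethod(Ship, Ship)
def collide(s1, s2):
    return "ships bounce"

print(collide(Ship(), Ship()))      # ships bounce
print(collide(Asteroid(), Ship()))  # asteroid hits ship
```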

interview
Margaret Myrick is a program manager at Indeed, and a musician on the side. She has lived in Texas since she was a child.

Master Yoda, I am. Left Dagobah I have, and come to Messenger as a bot. Message me. Reply in my own style, I will. Curator Note - Pretty funny, check it out.

core python
prompt_toolkit is a library for building powerful interactive command lines and terminal applications in Python.

core python
Hey guys. Recently, I informed my manager that I was willing to go above and beyond and help improve some of our team operations by writing a handful of Python scripts. He is a stickler for following the rules, so he ordered me to ask permission to install Python on my development machine (I wasn't intending to ask permission). My request has been refused. Despite my providing evidence from the Python Software Foundation on its open licensing (especially for my purposes, which are local-machine only, not production), they are still refusing on the grounds that Python is a type of GPL and that it is interpreted. Have you guys run into something like this before? It seems ridiculous to me.

video
If the title doesn't make any sense, then there's no hope that the description will be any better. This talk will be a strange dive into interpreter hacks, the pointlessness of the Python 2 vs 3 debate, and the twisted artistic drive that pushes the speaker to come up with these perversions of the Python language. Prepare to be simultaneously repulsed, intrigued, and completely bored.

Chrome stores its data locally in an SQLite database. So all we need to do here is write Python code that connects to the database, queries the necessary fields, and extracts the required data: the URLs visited and their corresponding total visit counts.
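A sketch of that idea. The path to the History file varies by OS and Chrome locks it while running, so this assumes you pass the path of a copy, and that the file contains Chrome's urls table with url and visit_count columns:

```python
import sqlite3

def top_visited(history_path, limit=5):
    """Return (url, visit_count) pairs from a copy of Chrome's History file,
    most-visited first."""
    conn = sqlite3.connect(history_path)
    try:
        return conn.execute(
            "SELECT url, visit_count FROM urls "
            "ORDER BY visit_count DESC LIMIT ?",
            (limit,),
        ).fetchall()
    finally:
        conn.close()
```

Called as, say, top_visited('/tmp/History-copy'), it returns the five most-visited URLs with their counts.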


Learn how to write code to automatically extract and analyze data from the web and social media. Join students from around the world from law enforcement, journalism, information security and more.
Sponsor


New Books

Learn to perform forensic analysis and investigations with the help of Python, and gain an advanced understanding of the various Python libraries and frameworks. Analyze Python scripts to extract metadata and investigate forensic artifacts. The authors, Dr. Michael Spreitzenbarth and Dr. Johann Uhrmann, have used their experience to craft this hands-on guide to using Python for forensic analysis and investigations.

Jobs

Mumbai


San Antonio, TX, United States




Upcoming Conference / User Group Meet





Projects

gym - 635 Stars, 71 Forks
A toolkit for developing and comparing reinforcement learning algorithms.

rllab - 86 Stars, 12 Forks
rllab is a framework for developing and evaluating reinforcement learning algorithms.

otek - 45 Stars, 2 Forks
An unopinionated project builder for everyone.

LowRankPropagation - 22 Stars, 3 Forks
Propagation technique for solving low-rank matrix completion.

detux - 20 Stars, 3 Forks
The multiplatform Linux sandbox.

doorman - 15 Stars, 1 Fork
An osquery fleet manager.

ballade - 14 Stars, 1 Fork
Ballade is a lightweight HTTP proxy based on Tornado, with an upstream proxy switcher using SwitchyOmega rules.

LearnProgrammingBot - 14 Stars, 1 Fork
Bot for /r/learnprogramming using supervised learning.

falcon-api - 11 Stars, 0 Forks
Web APIs for Falcon.

flatdoc - 4 Stars, 0 Forks
Flat documentation generator.

slactorbot - 4 Stars, 0 Forks
A Slack bot with hot patching!

elastic-bill - 3 Stars, 1 Fork
Elastic Bill is a multi-cloud platform billing management tool.

28 Apr 2016 1:40pm GMT

10 Nov 2011

Python Software Foundation | GSoC'11 Students

Benedict Stein: King William's Town Station

Yesterday morning I had to go to the station in KWT to pick up our reserved bus tickets for the Christmas holidays in Cape Town. The station itself has had no train service since December for cost reasons, but Translux and co., the long-distance buses, have their offices there.


Larger map view




© benste CC NC SA

10 Nov 2011 10:57am GMT

09 Nov 2011


Benedict Stein

Nobody is worried about something like that: you just drive through by car, and in the city, near Gnobie, "no, that's only dangerous once the fire brigade is there". 30 minutes later, on the way back, the fire brigade was there.




© benste CC NC SA

09 Nov 2011 8:25pm GMT

08 Nov 2011


Benedict Stein: Brai Party

Brai = braai, a barbecue evening or the like.

She would love technicians to help patch her SpeakOn / jack plug splitters...

The ladies, the "Mamas" of the settlement, at the official opening speech

Even though fewer people came than expected: loud music and lots of people ...

And of course a fire with real wood for the braai.

© benste CC NC SA

08 Nov 2011 2:30pm GMT

07 Nov 2011


Benedict Stein: Lumanyano Primary

One of our missions was bringing Katja's Linux Server back to her room. While doing that we saw her new decoration.

Björn, Simphiwe carried the PC to Katja's school


© benste CC NC SA

07 Nov 2011 2:00pm GMT

06 Nov 2011


Benedict Stein: Nelisa Haircut

Today I went with Björn to Needs Camp to visit Katja's guest family for a special party. First of all we visited some friends of Nelisa (yes, the one I'm working with in Quigney, Katja's guest father's sister), who gave her a haircut.

African women usually get their hair done by adding extensions, rather than, like Europeans, just cutting some hair.

In between she looked like this...

And then she was done. It looks amazing considering the amount of hair she had last week, doesn't it?

© benste CC NC SA

06 Nov 2011 7:45pm GMT

05 Nov 2011


Benedict Stein: My Saturday

Somehow it occurred to me today that I need to restructure my blog posts a bit: if I only ever report on new places, I'd have to be on a permanent round trip. So here are a few things from my everyday life today.

First of all: Saturday counts as a day off, at least for us volunteers.

This weekend only Rommel and I are on the farm. Katja and Björn are at their placements by now, and my housemates Kyle and Jonathan are at home in Grahamstown, as is Sipho, who lives in Dimbaza.
Robin, Rommel's wife, has been in Woodie Cape since Thursday to take care of a few things there.
Anyway, this morning we first treated ourselves to a shared Weetbix/muesli breakfast and then set off for East London. Two things were on the checklist, Vodacom and Ethienne (the estate agent), plus dropping off the missing items at NeedsCamp on the way back.

Just after setting off down the dirt road, we discovered that we had not packed the things for NeedsCamp and Ethienne, but we did have the pump for the water supply in the car.

So in East London we first drove to Farmerama (no, not the online game Farmville, but a shop full of things for a farm) in Berea, a northern district.

At Farmerama we got advice on a quick-release coupling that should make life with the pump easier, and we also dropped off a lighter pump for repair, so that it is no longer such a big effort every time the water runs out.

Fego Caffé is in the Hemmingways Mall; there we had to get the PIN and PUK of one of our data SIM cards, because a transposed digit had crept into the PIN entry. So yes, shops in South Africa store data as sensitive as a PUK, which in principle gives access to a locked phone.

In the café Rommel then carried out a few online transactions with the 3G modem, which was working again, and which, by the way, now works perfectly under Ubuntu, my Linux system.

Meanwhile I went over to 8ta to find out about their new deals, since we want to offer internet in some of Hilltop's centres. The picture shows the UMTS coverage in NeedsCamp, Katja's village. 8ta is Telkom's new phone provider; after Vodafone bought Telkom's stake in Vodacom, they had to rebuild from scratch.
We decided to organize a free prepaid card to test, because who knows how accurate the coverage map above is... Before you sign even the cheapest 24-month deal, you should know whether it works.

Then we went to Checkers in Vincent, looking for two hotplates for WoodyCape: R 129.00 each, so about 12€ for a two-ring hotplate.
As you can see in the background, the Christmas decorations are already up, at the beginning of November, and that in South Africa at a sunny, warm 25°C or more.

For lunch we treated ourselves to a Pakistani curry takeaway, highly recommended!
Well, and after we got back an hour or so ago, I cleaned the fridge, which I had simply put outside this morning to defrost. Now it is clean again, without a 3m-thick layer of ice...

Tomorrow ... I will report on that separately ... but probably not until Monday, because then I'll be back in Quigney (East London) and have free internet.

© benste CC NC SA

05 Nov 2011 4:33pm GMT

31 Oct 2011


Benedict Stein: Sterkspruit Computer Center

Sterkspruit is one of Hilltop's computer centres, in the far north of the Eastern Cape. On the trip to J'burg we used the opportunity to take a look at the centre.

Pupils in the big classroom


The Trainer


School in Countryside


Adult Class in the Afternoon


"Town"


© benste CC NC SA

31 Oct 2011 4:58pm GMT

Benedict Stein: Technical Issues

What do you do in an internet cafe when your ADSL and fax line have been discontinued before month's end? Well, my idea was sitting outside and eating some ice cream.
At least it's sunny and not as rainy as on the weekend.


© benste CC NC SA

31 Oct 2011 3:11pm GMT

30 Oct 2011


Benedict Stein: Nellis Restaurant

For those who are traveling through Zastron: there is a very nice restaurant serving delicious food at reasonable prices.
In addition, they sell home-made juices, jams, and honey.




interior


home made specialities - the shop in the shop


the Bar


© benste CC NC SA

30 Oct 2011 4:47pm GMT

29 Oct 2011


Benedict Stein: The way back from J'burg

On the 10-12h trip from J'burg back to ELS I was able to take a lot of pictures, including these different roadsides.

Plain Street


The Orange River in its beginnings (near Lesotho)


Zastron Anglican Church


The bridge between the "Free State" and the Eastern Cape, next to Zastron


my new Background ;)


If you listen to GoogleMaps you'll end up traveling 50km of gravel road. As it had just been renewed, we didn't have that many problems, and we saved 1h compared to going the official way with all its construction sites.




Freeway


getting dark


© benste CC NC SA

29 Oct 2011 4:23pm GMT

28 Oct 2011


Benedict Stein: How does a road construction site actually work?

Sure, some things may be different and much is the same. But take a sight that is an everyday one in Germany, a road construction site: how does that actually work in South Africa?

First of all: NO, the locals are not digging by hand. Even though more manpower is used here, they are busily working with machinery.

A perfectly ordinary "Bundesstraße" (main road)


and how it is being widened


looots of trucks


because here one side is closed completely over a long stretch, resulting in a temporary traffic light with, in this case, a 45-minute wait


But at least they seem to be having fun ;) As did we, since luckily we never had to wait longer than 10 min.

© benste CC NC SA

28 Oct 2011 4:20pm GMT