18 Nov 2015


C100P Tweaks

Now with charging limits!

18 Nov 2015 5:37am GMT

13 Nov 2015


Xorg 1.18.0 enters [testing]

Xorg 1.18.0 is entering [testing] with the following changes:

Update: Nvidia drivers are now compatible with xorg-1.18.0 (ABI 20)

13 Nov 2015 12:42pm GMT

06 Nov 2015


C100P Tweaks

All the little things I do for the Asus Flip

06 Nov 2015 8:06am GMT

02 Nov 2015


Shooting and Stitching Panoramas in Hugin - Step by step

I was asked to write something about my workflow for creating panoramas. Some of you may have seen my huge panoramas on my Google+ or 23 account; if you haven't, have a look at them ;-)

To demonstrate this step-by-step guide I will use my latest panorama, which was stitched from 39 portrait photos. The final processed panorama will look like this:

The shooting

As I mentioned in my G+ post, you can see the Kinzig valley near Haslach (Black Forest, Germany). You get this view from the newly built observation tower on top of the Urenkopf; the tower is simply called the Urenkopf Tower. If you need more information about the tower, you will find it on the website of Haslach.

I had no tripod with me; most of the time I don't use a tripod for my panoramas. If you hold your camera at the same horizontal position throughout the shooting, you can take one shot after the other with a little overlap between shots. It is of course better and easier with a tripod, so decide for yourself how you want to do the shots.

For the shots I used a focal length of 28mm on my Tamron 28-75mm f/2.8 lens and an ISO setting of 100. Shutter speed was around 1/250s at f/8. With these settings I shot 39 photos:
I know that fewer photos would also work, but when shooting handheld it is better to take more shots than to miss important parts of the panorama and render the whole effort useless.

The processing

For my photo processing and my complete workflow I use open source software under Linux. My favourite distribution is Arch Linux; since I'm a member of the developer team, it's only natural that I use it. My photo management software is Digikam; photo processing and editing is done in GIMP and RawTherapee. I know that Digikam has a panorama plugin, but I prefer Hugin.

All these programs are available for Microsoft Windows, too. I don't know how well they work there, so if you try them on Windows, good luck, but I can't give you any support for that.

For a few releases now, Hugin has had a "Simple" interface mode which hides most of the expert settings from the GUI. That's fine for most of the panoramas you will create.

So I imported all 39 images into Hugin. If you start Hugin with the simple interface activated, you just have to click the "1. Load images" button, select the panorama images and press "Open".
Hugin automatically starts analyzing the images and tries to find control points for the stitching process.
After some seconds or minutes, depending on the number of shots, Hugin will present a first preview of your panorama:
In the right corner Hugin shows some information about your images, how well your shots fit together, and other potentially useful details.

Now you can change the projection of the panorama, move single images around, straighten the horizon and crop the panorama to your needs. You can also inspect the control points or the overlapping images. Hugin offers a huge set of options here; some of them are shown in the next screenshots:
Showing control points

Identify each single photo

Select the projection

Straighten the horizon

Play around with the projection settings; maybe you will find a better projection for your panorama. I used the cylindrical projection.
Most of the time it isn't necessary to change these settings, so if everything looks good, just click the "Create panorama..." button. Hugin then starts the complete process of stitching and blending all the images, which takes some time depending on your images. Hugin keeps you informed about the progress in its status window:
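If you prefer to script the stitching instead of clicking through the GUI, Hugin also ships command-line tools covering roughly the same steps. A minimal sketch (file names are placeholders and flags may vary between Hugin versions):

# create a project from the source images, then find and clean control points
pto_gen -o pano.pto *.JPG
cpfind --multirow -o pano.pto pano.pto
cpclean -o pano.pto pano.pto

# optimise geometry and photometrics, auto-crop, then remap and blend
autooptimiser -a -m -l -s -o pano.pto pano.pto
pano_modify --canvas=AUTO --crop=AUTO -o pano.pto pano.pto
nona -m TIFF_m -o pano pano.pto
enblend -o panorama.tif pano*.tif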

When the process completes, Hugin outputs the final panorama. Normally it looks like the one in the preview. Mine looks like this:

After editing the final panorama in GIMP and RawTherapee, I ended up with the panorama you have seen in my posts:

Deconstructing Featured Photo

Camera: Canon EOS 7D
Lens: Tamron 28-75mm f/2.8
Focal Length: 28mm
ISO: 100
Aperture: F8
Shutter Speed: 1/250 sec
Tripod: none used for this shot, but my new travel tripod is a Sirui T-005X
Ballhead: C-10X and plate TY-C10

I hope you liked my small step-by-step tutorial about my workflow for creating panoramas. Leave a comment, a +1, a like or tweet about it.

02 Nov 2015 3:35pm GMT

08 Oct 2015


Downtime (rsync, mail)

Update: All fixed now.

I just installed a kernel update on our rsync and mail server, and it seems we have broken hardware, so it is unable to reboot right now. Mailing lists are running on a different system; however, you need to use the lists.archlinux.org domain rather than archlinux.org. So for arch-general you'd use arch-general@lists.archlinux.org. Mail sent to the normal domain will go through once the server is up again.

The rsync master will stay unavailable for now.

I've asked the hosting provider to look into the issue, but I can't currently estimate when I'll get a reply/fix.

Sorry for the inconvenience, Florian

08 Oct 2015 8:10am GMT

05 Oct 2015


From Ghost To Nanoc

I completed my blog migration from Ghost to nanoc.

About 2 years ago I set up a blog on blog.as.it using Ghost. Its UI was very minimal and I liked the default theme (Casper) a lot.

However, I kept nanoc for my main website, until I decided to give Hakyll a try. It's not that nanoc didn't satisfy me at the time, but that I was fascinated by Haskell - I'm still fascinated by Haskell, but I don't have much time to play with it, while I play with Ruby more often.

Some days ago I thought it was time to merge my website and my blog: both could be handled by a static site generator, and since I'm more fluent in Ruby than in Haskell, I went for nanoc again.

The migration wasn't hard, because one of the main features of Ghost is that you write your posts using Markdown. I wrote a shell script to migrate my posts from Ghost to a "nanoc compatible format" like:

---
kind: article
created_at: 2015-10-06
title: My Ghost Post
tags: ['example']
---

This is a post in **Ghost**!

With that script my posts were split and ready in the content folder to be built by nanoc. Nothing more to do! Well, in truth I had to fix the paths to the linked images manually…
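For illustration, a minimal sketch of what such a migration script could look like, assuming an old-style Ghost JSON export and jq (the export file name, output directory and field handling are assumptions):

#!/bin/sh
# Split a Ghost JSON export into one nanoc-style Markdown file per post.
out=content/posts
mkdir -p "$out"

jq -r '.db[0].data.posts[] | @base64' ghost-export.json | while read -r row; do
    field() { printf '%s' "$row" | base64 -d | jq -r "$1"; }
    {
        echo '---'
        echo 'kind: article'
        echo "created_at: $(field '.published_at')"  # may be an epoch timestamp, adjust as needed
        echo "title: $(field '.title')"
        echo '---'
        echo
        field '.markdown'
    } > "$out/$(field '.slug').md"
done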

The second step was to put some redirects in place so that the old links around the web continue to work; specifically, the Ghost pattern was http://blog.as.it/my-ghost-post/ while in nanoc I went for http://www.andreascarpino.it/posts/my-ghost-post.html. I fixed this in my blog's nginx configuration:

location = / {
  rewrite ^ $scheme://www.andreascarpino.it permanent;
}

location / {
  rewrite ^(.*) $scheme://www.andreascarpino.it/posts$request_uri permanent;
}

location = /rss/ {
  rewrite ^ $scheme://www.andreascarpino.it/feed.xml permanent;
}
While in the website nginx configuration I put:

location /posts/ {
  root   /srv/http/website;
  if ($request_filename ~* ^.+.html$) {
    break;  # the file exists, serve it as-is
  }
  if ($request_uri ~* ^.+/$) {
    rewrite ^/(.*)/$ /$1.html permanent;
  }
}

And that's it!

Hope this helps someone who plans to do the same migration. If you are interested in looking at my nanoc setup, the configuration is here.

05 Oct 2015 10:00pm GMT

02 Oct 2015


Interview with Matt Reiferson, creator of NSQ

I'm a fan of the NSQ message processing system written in golang. I've studied the code, transplanted its diskqueue code into another project, and have used NSQ by itself. The code is well thought out, organized and written.

Inspired by the book coders at work and the systems live podcast, I wanted to try something I've never done before: spend an hour talking to Matt Reiferson - the main author of NSQ - about software design and Go programming patterns, and post the video online for whoever might be interested.

We talked about Matt's background, starting the NSQ project at Bitly as his first (!) Go project, (code) design patterns in NSQ and the nsqd diskqueue in particular and the new WAL (write-ahead-log) approach in terms of design and functionality.

You can watch it on youtube

Unfortunately, the video got cut a bit short. In the cut-off part I asked about the new Go convention of preventing imports of packages that live in an internal subdirectory. Matt wants to make it very clear that certain implementation details are not supported (by the NSQ team) and may change, whereas my take was that it's annoying when I want to reuse some code I find in a project. We ultimately agreed that while a bit clunky, it gets the job done, and it is probably a bit crude because there is no proper package management yet either.

I'd like to occasionally interview other programmers in a similar way and post the videos on my site later.

02 Oct 2015 8:25am GMT

01 Oct 2015


It's time for a review of my 1 1/2 years with the Renault Zoe

Renault Zoe (photo credits: Renault)

For 1 1/2 years now I have been driving my all-electric Renault Zoe, and since my very first drive I have known one thing for certain: I'll never go back to a gasoline car.

I could go on and write lots of arguments for an electric car, but I won't. There are plenty of pros and cons around the web; I think everyone must decide on their own whether they are willing (or able, given the price) to drive an electric car or not.
I will only post some details about my charging behavior and the total costs over the last 1 1/2 years, so this will be a very objective review.

So let us start with some facts:
  • daily usage (Monday to Friday) for driving to work and back (12 kilometers one way)
  • some trips during the weekends
  • 14,284 kilometers (~8,875 miles) driven
  • from October 1st, 2014 until September 30th, 2015 I charged the Zoe 89 times:
    • 37 charges at home
    • 3 charges at public charging points
    • 49 charges at public charging points with no costs (the local power company near my office provides them)
  • overall charging costs of 85€ (~ $94) between October 2014 and September 2015
  • 1 planned service appointment (90€)
  • 420€ per year for full comprehensive insurance
  • 948€ per year battery lease; the battery is rented in all Renault electric cars (this includes replacement of the battery if its capacity falls below 75% or if it fails entirely, plus free 24/7 emergency assistance)
Overall costs (incl. everything from above) for 1 year: 85€ + 90€ + 420€ + 948€ = 1,543€ (~$1,720)

The next two diagrams (you can click on each diagram to see it better) show how often I charged and how the charging costs are spread over the year:

In the next diagrams you can see each charging process for every month. I have listed the SoC (state of charge) of the battery before charging and the state it reached after charging.
I hope I was able to give you a short review and summary of my usage of the Zoe. If you have any questions, just leave a comment; I will try to answer them as well as I can.

01 Oct 2015 7:23am GMT

24 Sep 2015



Hey all

UPDATE: A rebuild has been done against an initial patch. If you already have gdal1, please uninstall it and reinstall vtk [6.1.0-10 -> 6.1.0-11] so that gdal1 is removed from the system.

I should have posted this sooner, but there is a bit of a mess with VTK and GDAL for which I am responsible. If you have vtk installed on your system along with any one of the following:


Any attempt to update your system will be prevented by the conflict between gdal and gdal1. [1] A gdal 2.x rebuild [2] was staged for a long while, and vtk was preventing it from moving forward (it is not API-compatible with the new gdal, which is a core dependency) -- more rebuilds were waiting in line behind it.

The gdal1 package is only a temporary inconvenience, as I did not want to drop vtk. Due to time constraints I could not attempt a patch in the several months that the rebuild was sitting idle, and upstream was not ready to migrate. Another developer (anatolik) has stepped up to help and may take over maintenance of the package. Anyone else is welcome to contribute as well.

Please bear with us in the meantime, thanks!

[1] https://bugs.archlinux.org/task/46346
[2] https://www.archlinux.org/todo/gdal-200/

24 Sep 2015 3:03pm GMT

20 Sep 2015


D-Bus now launches user buses

The packages systemd 226-1 plus dbus 1.10.0-3 now launch dbus-daemon once per user; all sessions of a user will share the same D-Bus "session" bus. The pam_systemd module ensures that the right DBUS_SESSION_BUS_ADDRESS is set at login.
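One quick way to check that a login session is on the shared user bus (the socket path below assumes a UID of 1000):

$ echo $DBUS_SESSION_BUS_ADDRESS
unix:path=/run/user/1000/bus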

This also permits dbus-daemon to defer to systemd for activation instead of spawning server processes itself. However, currently this is not commonly used for session services (as opposed to system services).

kdbus will only support this model, so this is also an opportunity to iron out some bugs and make a future transition to kernel buses easier. Please let us know of any issues.

20 Sep 2015 8:31pm GMT

11 Sep 2015


KDE Telepathy ThinkLight Plugin

Do you own a ThinkPad? Good! Does it have the ThinkLight? Good! Then this post might interest you!

I just wrote a KDE Telepathy plugin that blinks the ThinkLight when you get an incoming message. Sounds almost useless, doesn't it? Maybe not.

I found a good use case for it: sometimes you may be away from the keyboard but near your ThinkPad (e.g. studying); the screen goes black and sounds are off, but you see the ThinkLight blinking - you got a message!

To enable it you just have to fetch the source code, build and install as usual with CMake.

There's just one annoyance at the moment: you need write permission on /proc/acpi/ibm/light. I'm looking for a solution to this, but have found nothing short of changing that file's permissions manually. Any ideas?
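For the curious, blinking the light from a shell boils down to writing to that same file; a rough sketch, assuming the thinkpad_acpi interface is present and you have write permission:

# blink the ThinkLight three times
for i in 1 2 3; do
    echo on > /proc/acpi/ibm/light
    sleep 0.2
    echo off > /proc/acpi/ibm/light
    sleep 0.2
done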

There's also a tool, thinkalert (mirror), which allows turning the ThinkLight on and off without being root, by using suid. If you prefer this approach, you can fetch the code from the thinkalert branch instead.

Have fun!

11 Sep 2015 10:00pm GMT

10 Sep 2015


September’s TalkingArch is here

The TalkingArch team is pleased to bring you the latest iso. Based on September's Arch Linux iso, this TalkingArch features all the latest base and rescue packages, along with Linux kernel 4.1.6. Due to the steadily growing size of both the Arch Linux iso and the TalkingArch iso, this will be the first image that [...]

10 Sep 2015 2:33am GMT

08 Sep 2015


Getting Started with Zshell

Getting Started

The first thing I will say about zsh is that all the extra tools for putting more information into your everyday prompt aren't needed with zshell; it can handle all of that fanciness for you.

All of that is done in pure zsh code here. I don't have much written in there to do anything; it is mostly just configuration using zstyle.

Parameter Expansion


ZSH Parameter expansion flags

Parameter Expansion documentation

These are probably the links I still use the most. The first one is just the zsh wiki; there is a ton of good information in there to get started. The second link is the one I use the most, because I never remember what all the parameter expansion flags do.

The one I use the most is probably (f), which splits the output of a parameter on newlines. The example below takes the output of ls -l, splits it on newlines, and grabs the 4th line of the output.

└─╼ echo ${"${(f)"$(ls -l)"}"[4]}
-rw-r--r--  1 daniel daniel   1770341 Jun  2  2014 2014-06-02-182425_1918x1079_scrot.png

You can also do matching. The example below uses (M) to tell the expansion to keep only lines that start with 'drwx'.

┌─ daniel at hailey in ~
└─╼ print -l -- ${(M)${(f)"$(ls -l)"}:#drwx*}
drwxr-xr-x 18 daniel daniel      4096 Aug 24 00:06 aurinstall
drwxr-xr-x  2 daniel daniel      4096 Nov  8  2014 bin
drwxr-xr-x  3 daniel daniel      4096 Mar 10 22:17 Colossal Order
drwxr-xr-x  2 daniel daniel      4096 Apr 26 09:44 Desktop
drwxr-xr-x  2 daniel daniel      4096 Apr  1  2014 Documents
drwxr-xr-x  5 daniel daniel      4096 Jul 14 17:34 Downloads
drwxr-xr-x 41 daniel users       4096 Jul 20 18:49 github
drwxr-xr-x  2 daniel daniel      4096 Apr  1  2014 Music
drwxr-xr-x  4 daniel daniel      4096 Sep  8 14:12 Pictures
drwxr-xr-x  2 daniel daniel      4096 Apr  1  2014 Public
drwxr-xr-x  6 daniel daniel      4096 Jul 30 11:04 python-systemd
drwxr-xr-x  2 daniel daniel      4096 Apr  1  2014 Videos

Things to remember about zsh parameter expansion: as in bash, # removes a matching pattern from the beginning of the value and % removes one from the end. But, unlike in bash, you can chain several of them in a single expansion. The example below is somewhat contrived. First we use # and %% to strip each line of the array down to its group column and find out what groups are used, then we assign the array to the names variable, and finally use ${array1:|array2} to get all elements of array1 that aren't in array2.

└─╼ print -l ${${${(M)${(f)"$(ls -l)"}:#drwx*}#* * * }%% *}
┌─ daniel at hailey in ~
└─╼ names=(${${${(M)${(f)"$(ls -l)"}:#drwx*}#* * * }%% *})
┌─ daniel at hailey in ~
└─╼ groups=(daniel)
┌─ daniel at hailey in ~
└─╼ echo ${names:|groups}
users

Filename Expansion

Just like with parameter expansion, zshell has a bunch of extra flags for expanding filenames. One of my favorite examples is matching only the filenames that are directories, which can then be combined with the array-difference trick from before to find all files that aren't directories.

┌─ daniel at hailey in ~/example
└─╼ tree
├── dir1
├── dir2
│   └── file1
├── dir3
├── file1
├── file2
└── file3
3 directories, 4 files
┌─ daniel at hailey in ~/example
└─╼ print -- *(/)
dir1 dir2 dir3
┌─ daniel at hailey in ~/example
└─╼ print -- *(/F) # expand to only Full (non-empty) directories...
dir2
┌─ daniel at hailey in ~/example
└─╼ dirs=(*(/))
┌─ daniel at hailey in ~/example
└─╼ everything=(*)
┌─ daniel at hailey in ~/example
└─╼ print ${everything:|dirs}
file1 file2 file3
└─╼ print *(.)      # or just match plain files
file1 file2 file3

Another nice one is being able to recursively glob for files.

┌─ daniel at hailey in ~/example
└─╼ tree
├── dir1
│   ├── dir1
│   │   └── file3
│   ├── dir2
│   │   ├── dir1
│   │   ├── dir2
│   │   └── dir3
│   │       ├── file1
│   │       └── file2
│   └── dir3
├── dir2
├── dir3
├── file1
├── file2
└── file3

9 directories, 6 files
┌─ daniel at hailey in ~/example
└─╼ for file in **/*(.); do mv $file{,.sh}; done
┌─ daniel at hailey in ~/example
└─╼ tree
├── dir1
│   ├── dir1
│   │   └── file3.sh
│   ├── dir2
│   │   ├── dir1
│   │   ├── dir2
│   │   └── dir3
│   │       ├── file1.sh
│   │       └── file2.sh
│   └── dir3
├── dir2
├── dir3
├── file1.sh
├── file2.sh
└── file3.sh

9 directories, 6 files

And maybe you want to show each file found by the glob with its suffix removed.

┌─ daniel at hailey in ~/example
└─╼ print -- **/file*.sh(:r)
dir1/dir1/file3 dir1/dir2/dir3/file1 dir1/dir2/dir3/file2 file1 file2 file3

Or maybe you only want to show the files with the path removed, like using basename.

┌─ daniel at hailey in ~/example
└─╼ rename .sh .zip *.sh
┌─ daniel at hailey in ~/example
└─╼ ls
dir1  dir2  dir3  file1.zip  file2.zip  file3.zip
┌─ daniel at hailey in ~/example
└─╼ tree
├── dir1
│   ├── dir1
│   │   └── file3.sh
│   ├── dir2
│   │   ├── dir1
│   │   ├── dir2
│   │   └── dir3
│   │       ├── file1.sh
│   │       └── file2.sh
│   └── dir3
├── dir2
├── dir3
├── file1.zip
├── file2.zip
└── file3.zip
┌─ daniel at hailey in ~/example
└─╼ print -- **/*(.:t)
file1.sh file1.zip file2.sh file2.zip file3.sh file3.zip
┌─ daniel at hailey in ~/example
└─╼ print -- **/*(.:h)      # or head
. . . dir1/dir1 dir1/dir2/dir3 dir1/dir2/dir3

The :h modifier is great for moving into the directory of files you just opened or referenced on the command line. (!$ works just like in bash; it grabs the last argument of the previous line.)

┌─ daniel at hailey in ~/example
└─╼ ls dir1/dir2/dir3/file2.sh -l
-rw-r--r-- 1 daniel daniel 0 Sep  8 14:56 dir1/dir2/dir3/file2.sh
┌─ daniel at hailey in ~/example
└─╼ cd !:1:h        # index 1 of the previous line array, grab the head
cd dir1/dir2/dir3
┌─ daniel at hailey in ~/example/dir1/dir2/dir3
└─╼ pwd
/home/daniel/example/dir1/dir2/dir3

You can do the above with !:1 in bash too, but you would need to put it in a subshell and run it through dirname first: cd $(dirname !:1)

And there are a ton more things you can do with this that I haven't even covered.


Aliases

Aliases in zsh are for the most part similar to bash. You have your regular command replacement, but you also get extra stuff like global aliases.

┌─ daniel at hailey in ~
└─╼ alias -g AWK="|awk"
┌─ daniel at hailey in ~
└─╼ ip a AWK '/^\w/'
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
2: enp2s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000

This comes in handy from time to time, when you want to do something like

alias -g NULL=">/dev/null 2>&1"

and redirect everything to /dev/null just by appending the word NULL to the end of the line (the redirection order matters: stdout goes to /dev/null first, then stderr follows it).
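For example (the command here is arbitrary, just an illustration):

┌─ daniel at hailey in ~
└─╼ pacman -Qi linux NULL    # expands to: pacman -Qi linux >/dev/null 2>&1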

Commands and fpath!

Anyone that has ever done bash scripting has run into writing something like this:

if ! which git >/dev/null 2>&1; then
    yum install -y git
fi

In zsh, you get an array with all of the commands in it.

┌─ daniel at hailey in ~
└─╼ echo ${commands[pacman]}
/usr/bin/pacman
┌─ daniel at hailey in ~
└─╼ if [[ -n ${commands[pacman]} ]]; then echo yes; else echo no; fi
yes

You also get a new variable, $path, which is an array tied to your $PATH variable, so that you can manage it using array operations like +=.

┌─ daniel at hailey in ~
└─╼ echo $path[@]
/home/daniel/bin /usr/local/sbin /usr/local/bin /usr/bin
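A small sketch of managing it this way (typeset -U keeps the array free of duplicates; the ~/bin directory is just an example):

# keep $path unique, then append a directory; $PATH follows automatically
typeset -U path
path+=(~/bin)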

And there is a new variable called $fpath, which holds the directories where all the zsh function files live. The first thing you'll really find in the function path is your completion files, and the order of the array defines which ones take precedence.

┌─ daniel at hailey in ~
└─╼ print -l -- ${^fpath}/**/_pacman(N)

You can use this to maintain your own completion files outside of the system locations and still have them loaded automatically, as sketched below. To enable completion you have to load the compinit module in zsh, so you don't have to source stuff like you do in bash; that is another one of the big differences.
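A minimal sketch of that, assuming you keep personal completion files in a directory like ~/.config/zsh/completions:

# prepend a personal completion directory before initializing completion
fpath=(~/.config/zsh/completions $fpath)
autoload -U compinit && compinit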

autoload -U is used to load files as functions. I have a function here that I use to manage dtach sessions: the only feature of tmux or screen I ever used was detaching, since I use a tiling window manager, and tach puts the socket files in my $XDG_RUNTIME_DIR. It then gets loaded as a function, as if it had been defined like tach() {...;} in a zsh script.

┌─ daniel at hailey in ~
└─╼ which tach
tach () {
        # undefined
            builtin autoload -X
}

To load and enable completion, you do basically the same thing: load the compinit function and run it.

autoload -U compinit && compinit

Zsh then looks for #compdef tags at the top of files in your fpath to register tab completions.
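As an illustration, here is about the smallest useful completion file, saved as _mytool somewhere in your fpath (mytool being a hypothetical command):

#compdef mytool
# offer a -v flag and complete the first argument as a filename
_arguments '-v[verbose output]' '1:input file:_files'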

The other big things specified in fpath are the different prompts. First you have to autoload them; then you can choose one of the default prompts, or preview them.

hailey% autoload compinit promptinit && compinit && promptinit
hailey% prompt
Usage: prompt <options>
    -c              Show currently selected theme and parameters
    -l              List currently available prompt themes
    -p [<themes>]   Preview given themes (defaults to all)
    -h [<theme>]    Display help (for given theme)
    -s <theme>      Set and save theme
    <theme>         Switch to new theme immediately (changes not saved)

Use prompt -h <theme> for help on specific themes.
hailey% prompt redhat
[daniel@hailey ~]$ prompt zefram
[2/5.0.8]daniel@hailey:~> prompt adam1
daniel@hailey ~ % prompt -l
Currently available prompt themes:
adam1 adam2 bart bigfade clint elite2 elite fade fire off oliver pws redhat suse walters zefram

This makes it easy to share the same setup with other people, if that is what you want to do, and you can set up different prompts and switch between them.

I have my 2 prompts stored in ~/.config/zsh/themes/prompt_<name>_setup and can switch back and forth between them. (I try to put stuff in ~/.config when possible; if you set ZDOTDIR to ~/.config/zsh in your zprofile - or, on Linux, in your .pam_environment, which can be moved as well - then you should be all set.)

Prompt docs here


Zstyle

This is where zsh gets fun. Instead of having a bunch of variables to configure in your environment, zsh uses zstyles to configure things. The best example I have is in my prompt.

zstyle ':vcs_info:*' enable bzr git hg svn                      # enable different source control plugins
zstyle ':vcs_info:*' check-for-changes true                     # check for changes when in a vcs directory
zstyle ':vcs_info:*' stagedstr '%F{g}●%f'                       # if we have a staged, uncommitted file, put a green dot
zstyle ':vcs_info:*' unstagedstr '%F{y}!%f'                     # if there are unstaged files, that are tracked, put a yellow !
zstyle ':vcs_info:*' formats 'on %F{m}%b%c%u%m%F{n}'            # display the branch and color it magenta
zstyle ':vcs_info:*' actionformats "%b%c%u|%F{c}%a%f"           # display the branch/commit during an action (bisect, etc)
zstyle ':vcs_info:(sv[nk]|bzr):*' branchformat '%b|%F{c}%r%f'   # different branch information for svn and bzr
zstyle ':vcs_info:git*+set-message:*' hooks git-status          # I do not remember

Also, for this to work, you have to run the vcs_info function as part of your zsh precmd.

# Example 1
autoload -Uz vcs_info
precmd(){ vcs_info }

# Example 2
autoload -Uz add-zsh-hook vcs_info
add-zsh-hook precmd prompt_gtmanfred_precmd

Then you just need to put the vcs_info message variable, ${vcs_info_msg_0_}, into your prompt.
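For example, a minimal prompt using it might look like this (prompt_subst is required so the variable is re-expanded each time the prompt is drawn):

# single quotes defer expansion until the prompt is rendered
setopt prompt_subst
PROMPT='%n@%m %~ ${vcs_info_msg_0_} %# '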

More info about vcs_info can be found here

The other big thing that I configure is zsh completion, with some documentation here and my configuration here.


I would really strongly encourage anyone wanting to get started with zshell to dive into the docs. I really started diving in when I wanted to write some zshell completions for different tools I was using. You will get more out of it if you spend time actually learning the different ins and outs of how other people configure their setups and plugins, instead of just using stuff already out there.

If you do want a jumpstart, I really don't like oh-my-zsh: back when I would hang out in the #zsh IRC channel on freenode, the majority of the problems that came in were caused by something weird with omz. The better option is the fork that was made off of it and then basically made independent: prezto is really solid if you need help getting started with plugins.

Happy Hacking :)!


Extra links

setting up bindkeys and zle

08 Sep 2015 5:51pm GMT

02 Sep 2015


Transplanting Go packages for fun and profit

crazy Gopher scientist

A while back I read coders at work, which is a book of interviews with some great computer scientists who earned their stripes, the questions just as thoughtful as the answers. For one thing, it re-ignited my interest in functional programming, for another I got interested in literate programming but most of all, it struck me how common of a recommendation it was to read other people's code as a means to become a better programmer. (It also has a good section of Brad Fitzpatrick describing his dislike for programming languages, and dreaming about his ideal language. This must have been shortly before Go came about and he became a maintainer.)

I hadn't been doing a good job reading/studying other code out of fear that inferior patterns/style would rub off on me. But I soon realized that was an irrational, perhaps slightly absurd excuse. So I made the decision to change. Contrary to my presumption I found that by reading code that looks bad you can challenge and re-evaluate your mindset and get out with a more nuanced understanding and awareness of the pros and cons of various approaches.

I also realized if code is proving too hard to get into or is of too low quality, you can switch to another code base with negligible effort and end up spending almost all of your time reading code that is worthwhile and has plenty of learnings to offer. There is a lot of high quality Go code, easy to find through sites like Github or Golang weekly, just follow your interests and pick a project to start reading.

It gets really interesting, though, once you find bodies of code that are not only a nice learning resource but can be transplanted into your own code with minimal work to solve a problem you're having, in a different context than the one the author originally designed it for. Components often grow and mature in the context of an application without being promoted as reusable libraries, but you can often use them as if they were. I would like to share 2 such success cases below.

Nsq's diskqueue code

I've always had an interest in code that manages the same binary data both in memory and on a block device. Think filesystems, databases, etc. There are some interesting concerns, like robustness in light of failures, combined with optimizing for performance (infrequent syncs to disk, maintaining the hot subset of data in memory, etc) and for various access patterns; this can be a daunting topic to get into.

Luckily there's a use case that I see all the time in my domain (telemetry systems) and that covers just enough of the problems to be interesting and fun, but not enough to be overwhelming. And that is: for each step in a monitoring data pipeline, you want to be able to buffer data if the endpoint goes down, in memory and to disk if the amount of data gets too much. Especially to disk if you're also concerned with your software crashing or the machine power cycling.

This is such a common problem that applies to all metrics agents, relays, etc that I was longing for a library that just takes care of spooling data to disk for you without really affecting much of the rest of your software. All it needs to do is sequentially write pieces of data to disk and have a sequential reader catching up and read newer data as it finishes processing the older.

NSQ is a messaging platform from bitly, and it has diskqueue code that does exactly that. And it does so oh so elegantly. I had previously found a beautiful pattern in bitly's go code that I blogged about and again I found a nice and elegant design that builds further on this pattern, with concurrent access to data protected via a single instance of a for loop running a select block which assures only one piece of code can make changes to data at the same time (see bottom of the file), not unlike ioloops in other languages. And method calls such as Put() provide a clean external interface, though their implementation simply hooks into the internal select loop that runs the code that does the bulk of the work. Genius.

func (d *diskQueue) Put(data []byte) error {
  // some details
  d.writeChan <- data
  return <-d.writeResponseChan
}

In addition the package came with extensive tests and benchmarks out of the box.

After finding and familiarizing myself with this diskqueue code about a year ago, I had an easy time introducing disk spooling to Carbon-relay-ng by transplanting the code into it. The only change I had to make was capitalizing the Diskqueue type to export it outside of the package. It has proven a great fit, enabling a critical feature through the small effort of transplanting mature, battle-tested code into a context that the original authors probably never thought of.

Note also how the data unit here is the []byte: the queue does not deal with the higher level nsq.Message (!). The authors had the foresight to keep this generic, enabling code reuse, and rightfully shot down a PR of mine that had a side effect of making the queue aware of the Message type. In NSQ you'll find thoughtful and deliberate api design and pretty sound code all around. Also, they went pretty far in detailing some lessons learned and providing concrete advice, a very interesting read, especially around managing goroutines & synchronizing their exits, and performance optimizations. At Raintank, we had a need for a messaging solution for metrics, so we will soon be rolling out NSQ as part of the raintank stack. This is an interesting case where my past experience with the NSQ code and ideas helped to adopt the full solution.

Bosun expression package

I'm a fan of the bosun alerting system which came out of Stack Exchange. It's a full-featured alerting system that solves a few problems like no other tool I've seen does (see my linked post), and, timeseries data storage aside, comes with basically everything built into the one program. I've used it with success. However, for litmus I needed an alerting handler that integrated well into the Grafana backend. I needed the ability to do arbitrarily complex computations; Graphite's api only takes you so far. We also needed (desired) reduction functions, boolean logic, etc. This is where bosun's expression language is really strong. I found the expression package quite interesting: they basically built their own DSL for metrics processing, so it deals with expression parsing, constructing ASTs, executing them, dealing with types (potentially mixed types in the same expression), etc.

But bosun also has incident management, contacts, escalations, etc. - stuff that we either already had in place or didn't want to worry about just yet. So we could run bosun standalone and talk to it as a service via its API (which I found too loosely coupled and risky), hook all its code into our binary at once (which seemed overkill), or take the strategy I chose: gradually familiarize ourselves with and adopt pieces of Bosun on a case by case basis, making sure there's a tight fit and without ever building up so much technical debt that it would become a pain to move away from the transplanted code if it becomes clear it's not, or no longer, well suited. For the foreseeable future we only need one piece, the expression package. Ultimately we may adopt the entire thing, but without the upfront commitment and investment.

So practically, our code now simply has one line where we create a bosun expression object from a string, and another where we ask bosun to execute the expression for us, which takes care of parsing the expression, querying for the data, evaluating and processing the results and distilling everything down into a final result. We get all the language features (reduction functions, boolean logic, nested expressions, …) for free.

This transplantation was again probably not something the bosun authors expected, but for us it was tremendously liberating. We got a lot of power for free. The only thing I had to do was spend some time reading code, and learning in the process. And I knew the code was well tested so we had zero issues using it.

Much akin to the NSQ example above, there was another reason the transplantation went so smoothly: the expression package is not tangled into other stuff. It just needs a string expression and a graphite instance - to be precise, any struct instance that satisfies the graphiteContext interface that is handily defined in the bosun code. While the bosun design aims to make its various clients (graphite, opentsdb, …) applicable for other projects, it also happens to let us do the opposite: reuse some of its core code - the expression package - and pass in a custom graphite Context, such as our implementation, which has extensive instrumentation. This lets us use the bosun expression package as a "black box" and still inject our own custom logic into the part that queries data from graphite. Of course, once we want to change the logic of anything else in the black box, we will need to come up with something else, perhaps fork the package, but it doesn't seem like we'll need that any time soon.


If you want to become a better programmer I highly recommend you go read some code. There's plenty of good code out there. Pick something that deals with a topic that is of interest to you and looks mature. You typically won't know if code is good before you start reading but you'll find out really fast, and you might be pleasantly surprised, as was I, several times. You will learn a bunch, possibly pretty fast. However, don't go for the most advanced, complex code straight away. Pick projects and topics that are out of your comfort zone and do things that are new to you, but nothing too crazy. Once you truly grok those, proceed to other, possibly more advanced stuff.

Often you'll read reusable libraries that are built to be reused, or you might find ways to transplant smaller portions of code into your own projects. Either way is a great way to tinker and learn, and solve real problems. Just make sure the code actually fits in so you don't end up with the software version of Frankenstein's monster. It is also helpful to have the authors available to chat if you need help or have issues understanding something, though they might be surprised if you're using their code in a way they didn't envision and might not be very inclined to provide support to what they consider internal implementation details. So that could be a hit or miss. Luckily the people behind both nsq and bosun were supportive of my endeavors but I also made sure to try to figure out things by myself before bothering them. Another reason why it's good to pick mature, documented projects.

Gopher frankenstein

Part of the original meaning of hacking, extended into open source, is a mindset and practice of seeing how others solve a problem, discussing it, and building on top of it. We've gotten used to - and fairly good at - doing this on the project and library level, but forgot about it on the level of code, code patterns and ideas. I want to see these practices come back to life.

We also apply this at Raintank: not only are we trying to build the best open source monitoring platform by reusing (and often contributing to) existing open source tools and working with different communities, we realize it's vital to work on a more granular level, get to know the people and practice cross-pollination of ideas and code.

Next stuff I want to read and possibly implement or transplant parts of: dgryski/go-trigram, armon/go-radix, especially as used in the dgryski/carbonmem server to search through Graphite metrics. Other fun stuff by dgryski: an implementation of the ARC caching algorithm and bloom filters. (you might want to get used to reading Wikipedia pages also). And mreiferson/wal, a write ahead log by one of the nsqd authors, which looks like it'll become the successor of the beloved diskqueue code.

Go forth and transplant!

Also posted on the Raintank blog

02 Sep 2015 4:25pm GMT

22 Aug 2015


Python 3 Object-oriented Programming Second Edition

One of several reasons this blog has been so quiet this year is the time I've invested in the second edition of my first book. I am extremely proud of the end result. The first edition of Python 3 Object Oriented Programming was great and garnered 30 five star reviews on Amazon. This edition is […]

22 Aug 2015 4:43am GMT

14 Aug 2015


openssh-7.0p1 deprecates ssh-dss keys

In light of recently discovered vulnerabilities, the new openssh-7.0p1 release deprecates keys of ssh-dss type, also known as DSA keys. See the upstream announcement for details.

Before updating and restarting sshd on a remote host, make sure you do not rely on such keys for connecting to it. To enumerate DSA keys granting access to a given account, use:

    grep ssh-dss ~/.ssh/authorized_keys

If you have any, ensure you have alternative means of logging in, such as key pairs of a different type, or password authentication.
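If you need a replacement key pair, generating and deploying one of a different type looks roughly like this (Ed25519 shown; user@host is a placeholder):

    ssh-keygen -t ed25519
    ssh-copy-id -i ~/.ssh/id_ed25519.pub user@host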

Finally, since host keys of ssh-dss type are deprecated too, you might have to confirm a new fingerprint (for a host key of a different type) when connecting to a freshly updated server.

14 Aug 2015 5:10am GMT