08 Sep 2014

Planet Arch Linux

Influx-cli: a commandline interface to Influxdb.

Time for another side project: influx-cli, a commandline interface to influxdb.
Nothing groundbreaking, and it behaves pretty much as you would expect if you've ever used the mysql, psql, vsql, etc. tools before.
But I did want to highlight a few interesting features.


read more

08 Sep 2014 12:36pm GMT

06 Sep 2014

Planet Arch Linux

The September 2014 TalkingArch iso is online

The TalkingArch team is very happy to announce the availability of the newest TalkingArch iso, which sports the latest 3.16.1 Linux kernel. It can be downloaded from the usual place. This release also brings with it a new BitTorrent tracker, as the public trackers we were using all stopped working for some reason. It took [...]

06 Sep 2014 1:42am GMT

05 Sep 2014

Planet Arch Linux

Simple Reminders

Due to a rather embarrassing episode in #archlinux a couple of weeks ago, where I naively shared one of the first bash scripts I had written without first looking back over it1, and subsequently had to endure what felt like ritual code mocking but was in fact some helpful pointers on how I could make the script suck less (a lot less), I have been going through those older scripts and applying the little knowledge that I have picked up in the interim, reappraising the usefulness of the scripts as I go.

One that has proved to be of some utility for many years now is a simple wrapper script I wrote to help manage my finances. Like many useful scripts, it was written quickly and has been in constant use ever since, becoming almost transparent, so ingrained is it in my workflow.

The script allows me to manage the lag between when a company emails me an invoice and when the payment is actually due. I find that companies will typically email their invoices to me some weeks in advance, whereupon I will make a mental note and then, unsurprisingly, promptly forget all about it, thereby opening myself up to penalties for late payment. It didn't take me long (well, in my defence, a lot less time than it took for invoices to become digital) to realise that there was a better way™: a script.

The at command is purpose-built for running aperiodic commands at a later time (whereas cron is for periodic tasks). So, using at(1), once I receive an invoice I can set a reminder closer to the final payment window, thereby avoiding both the late payment penalty and the loss of interest were I to pay it on receipt. I just needed a script to make it painless to achieve.
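
As a rough illustration of the idea (the address and time spec here are placeholders), a one-off reminder can be queued straight from the shell, since at(1) reads its commands from stdin:

echo 'printf "%s\n" "Invoice due" | mutt -s "REMINDER" you@example.com' | at 09:00 21.09.14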

The main function of the script is pretty self-explanatory:

todo
aread() {
  read -p "Time of message? [HH:MM] " attime
  read -p "Date of message? [DD.MM.YY] " atdate
  read -p "Message body? " message

  # rudimentary format checks on the supplied values
  timexp='^[0-9]{2}:[0-9]{2}'
  datexp='^[0-9]{2}\.[0-9]{2}\.[0-9]{2}'

  if [[ $attime =~ $timexp && $atdate =~ $datexp ]]; then
    # queue the reminder; at(1) reads the command from the heredoc
    at "$attime" "$atdate" << EOF
printf '%s\n' "$message" | mutt -s "REMINDER" jasonwryan@gmail.com
EOF
  else
    printf '%s\n' "Incorrectly formatted values, bailing..." && exit 1
  fi
}

Now, when an invoice arrives, I open it, fire up a scratchpad, and follow the prompts. A couple of weeks later, the reminder email arrives and I log in to my bank account and dispatch payment. You could, of course, have the script trigger some other form of notification, but an email works well for me.

The rest of the script is similarly basic; just some options for listing and reading any queued jobs and some more rudimentary checking. The full script is in my bitbucket repo2.

Update 7/09/14

Not more than a couple of hours after posting this, Florian Pritz pinged me in #archlinux with some great suggestions for improving the script. I particularly liked the idea of relying on date(1) to handle the input format for the time and date values. He also suggested a readline wrapper called (appropriately enough) rlwrap and a tmpfile to better manage input validation. You can see his full diff of changes. In the end, I adopted the date suggestion but passed on rlwrap. Thanks for the great pointers, Florian.
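
For the curious, the gist of the date(1) suggestion is something like the following sketch (my paraphrase, assuming GNU date, not Florian's actual diff): let date parse free-form input and normalise it into the formats at(1) expects:

read -p "When? (e.g. 'tomorrow 09:00') " when
attime=$(date -d "$when" +%H:%M) || exit 1
atdate=$(date -d "$when" +%d.%m.%y) || exit 1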

Notes

  1. In the interests of full disclosure, the most egregious line was myterm=$(echo $TERM) which I would hope I copied blindly from somewhere else, but accept full responsibility for nonetheless.
  2. Don't poke around too much in there, I still have quite a lot of cleaning up to do…

Creative Commons image by Adelle and Justin on Flickr.

05 Sep 2014 10:59pm GMT

30 Aug 2014

Planet Arch Linux

Need your help for a photo contest

Hi there,

as some of you may know, one of my hobbies is photography.
From time to time I take part in photo contests, and I'm taking part in one again.

If you want to support me then you can vote online for my submitted photos. You can vote until August 31st (only two days left); the 20 best-voted photos will then be judged by a jury to find the best 10.

The online vote itself is done with a single click on the cup that appears when you move your mouse over the photo. After your vote you can fill out a form to win a trip to Chile, but you don't have to, so the vote stays anonymous if you want.

I am taking part with the following photos, and it would be great if you could vote for all of them:

- http://www.vaudevisions.com/contest/photos/detail/beauty-of-the-mountain/
- http://www.vaudevisions.com/contest/photos/detail/the-hiker/
- http://www.vaudevisions.com/contest/photos/detail/on-the-edge/

Feel free to forward this post and the links to anyone you know who might vote for my photos; more votes mean a better chance of being among the first 20 photos put before the jury.

MANY THANKS....Daniel

30 Aug 2014 12:32am GMT

27 Aug 2014

Planet Arch Linux

E-mail infrastructure you can blog about

The "e" in eCryptfs stands for "enterprise". Interestingly in the enterprise I'm in its uses were few and far apart. I built a lot of e-mail infrastructure this year. In fact it's almost all I've been doing, and "boring old e-mail" is nothing interesting to tell your friends about. With inclusion of eCryptfs and some other bits and pieces I think it may be something worth looking at, but first to do an infrastructure design overview.

I'm not an e-mail infrastructure architect (even if we make up that term for a moment), or in other words I'm not an expert in MS Exchange, IBM Domino and other "collaborative software", and most importantly I'm not an expert in all the laws and legal issues related to e-mail in major countries. I consult with legal departments, and so should you. Your infrastructure designs are always going to be driven by corporate e-mail policies and local law - which can, for example, require you to archive mail for a period of 7-10 years, and to do so while conforming with data protection legislation... and that makes a big difference to your infrastructure. I recommend this overview of the "Climategate" case as a good cautionary tale. With that said, I now feel comfortable describing infrastructure ideas someone may end up borrowing from one day.

E-mail is critical for most business today. Wait, that sounds like a stupid generalization. But I can say it as a fact for the types of businesses I've been working with: managed service providers and media production companies. They all operate with teams around the world, and losing their e-mail system severely degrades their ability to get work done. That is why:

The system must be highly-available and fault-tolerant


Before I go on to the pretty pictures I have to note that I am taking good network design and engineering as a given here. The network has to be redundant well in advance of the services. The network engineers I worked with were very good at their jobs and I had it easy, inheriting good infrastructure.

The first layer deployed on the network is the MX frontend. If you already have, or rent, an HA frontend that can sustain abuse traffic, it's an easy choice to pull mail through it too. But your mileage may vary, as it's not trivial to proxy SMTP for a SPAM filter: if the filter sees connections only from the LB cluster, it is impossible for it to perform well; no rate limiting, no reputation scoring... I prefer HAProxy. The people making it are great software engineers, and their software and services are superior to anything else I've used (it's true I consulted for them once as a sysadmin, but that has nothing to do with my endorsements). The HAProxy PROXY protocol, or TPROXY mode, can be used in some cases. Or, if you are a Barracuda Networks customer, you might instead have their load balancers, which are supposed to integrate with their SPAM firewalls, though I've been unable to find a single implementation detail to verify that claim. Without load balancers, a common deployment is to use the SPAM filtering cluster itself as the MX and balance across it with round-robin DNS:

Network diagram

I won't say much about the SPAM filter; obviously it's supposed to do a very good job of rating and scanning incoming mail, and everyone has their favorites. My own favorite classifier component for many years has been the crm114 discriminator, but you can't expect (many) people to train their own filters, or to accept that it takes 3-6 months to achieve >99% accuracy; Gmail has spoiled the world. The important thing in the context of the diagram above is that the SPAM filter needs to be redundant, and that it must have the capability to spool incoming mail if all the Mailstore backends fail.

The system must have backups and DR fail-over strategy


For building the backend, the "Mailstores", some of my favorites are Postfix, sometimes Qmail, and Dovecot. It's not relevant, but I guess someone would want to hear that too.

The eCryptfs (stacked) file-system runs on top of the storage file-system, and all the mailboxes and spools are stored on it. The reasons for using it are not just related to data protection legislation. There are other, faster solutions too: block-level or hardware-based full disk encryption. But, being a file-system, eCryptfs allows us to manipulate mail at the individual (mail) file or (mailbox) directory level. Because of that, encrypted mail can be transferred over the network to the remote backup backend very efficiently. If you require, or are allowed to keep, snapshots, they don't necessarily have to be done at the (fancy) file-system or volume level: common ext4/xfs and a little rsync hard-links magic work just as well (up to about 1TB on cheap slow drives).
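
That hard-links trick is the well-known rotating-snapshot pattern; a minimal sketch (the paths are illustrative):

# unchanged files are hard-linked against the previous snapshot, so each
# snapshot only costs the space of what actually changed
rsync -a --delete --link-dest=/backup/mail.1 /srv/mail/ /backup/mail.0/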

When doing backup restores or a backend fail-over, eCryptfs keys can be inserted into the kernel keyring, and the data mounted on the remote file-system to take over.
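
A minimal sketch of that fail-over step, assuming the ecryptfs-utils tools are installed; the paths, cipher and key signature here are illustrative, not from a real deployment:

# insert the mount passphrase into the kernel keyring, note the signature
# it prints, then mount the lower (encrypted) directory over the upper one
printf '%s' "$MOUNT_PASSPHRASE" | ecryptfs-add-passphrase -
mount -t ecryptfs /srv/mail/.encrypted /srv/mail \
  -o key=passphrase,ecryptfs_sig=0123456789abcdef,ecryptfs_cipher=aes,ecryptfs_key_bytes=16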

The system must be secure


Everyone has their IPS and IDS favorites, and implementations. But those, together with firewalls, application firewalls, virtual private networks, access controls, two-factor authentication and file-system encryption... still do not make your private and confidential data safe. E-mail is not confidential, as SMTP is a plain-text protocol; I personally think of it as being in the public domain. The solution to authenticating correspondents and to protecting your data and your company's intellectual property, both in transit and stored on the Mailstore, is PGP/GPG encryption. It is essential.
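
Day to day that amounts to nothing more exotic than the standard gpg invocation (recipient and file name are illustrative):

# sign and encrypt a message body for a correspondent before it ever
# touches SMTP; only the endpoints can read it
gpg --armor --sign --encrypt --recipient alice@example.com message.txt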

Even then, confidential data and attachments from mailboxes of employees will find their way onto your project management suite, bug tracker, wiki... But that is another topic entirely. Thanks for reading.

27 Aug 2014 8:42pm GMT

22 Aug 2014

Planet Arch Linux

Building from Source

One of the real strengths of Arch is its ability to be customised. Not just in terms of the packages that you choose to install, but how those packages themselves can be patched, altered or otherwise configured to suit your workflow and setup. I have posted previously about, for example, building Vim or hacking PKGBUILDS. What makes all this possible is the wonderful ABS, the Arch Build System.

Essentially a tree of all of the PKGBUILDs (and other necessary files) for the packages in the official repositories, the ABS is the means by which you can easily acquire, compile and install any of the packages on your system:

ABS is made up of a directory tree (the ABS tree) residing under /var/abs. This tree contains many subdirectories, each within a category and each named by their respective package. This tree represents (but does not contain) all official Arch software, retrievable through the SVN system.

Arch Wiki ABS

I have been using ABS since I started running Arch and it has worked well. I wrote a simple script to check for and download updates when required to help simplify the process, and have been generally content with that approach. That isn't to say that elements of this process couldn't be improved. One of the small niggles is that the ABS only syncs once a day, so there is almost always (for me down here in .nz, anyway) at least a full day's wait between the package hitting the local mirror and the updated ABS version arriving. The other issue is that you download and sync the entire tree…

That all changed when, at the start of this month, one of the Arch developers, Dave Reisner, opened a thread on the Arch boards announcing asp, the Arch Source Package management tool, a git-based alternative for abs1.

Basically a 200-line bash script, asp is an improvement over abs insofar as you get the updated PKGBUILDs immediately; you can choose between just pulling the necessary source files (as per abs), or checking out the package branch so that you can create your own development branch and, for example, keep your patch set in git as well.

You can elect to locate the local git repository in a directory of your choosing by exporting ASPROOT; there are Tab completion scripts for bash and zsh, and a succinct man page. Overall, for a utility that is only three weeks old, asp is already fulfilling the function of a drop-in replacement; a faster, more flexible tool for building Arch packages from source.
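
In practice the workflow looks something like this (the package name is illustrative):

export ASPROOT="$HOME/build/asp"   # relocate the local git repository
asp update vim                     # pull the latest packaging changes
asp export vim                     # just the PKGBUILD and friends, as per abs
asp checkout vim                   # the full package branch, ready for your own branches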

With thy sharp teeth this knot intrinsicate
Of life at once untie…

Antony and Cleopatra V.ii

Notes

  1. The package, not the entire build system…

Creative Commons image, Red Lego Brick by Brian Dill on Flickr.

22 Aug 2014 9:41pm GMT

21 Aug 2014

Planet Arch Linux

Reorganization of Vim packages

Thomas Dziedzic wrote:

The Vim suite of packages has been reorganized to better provide advanced features in the standard vim package, and to split the CLI and GUI versions; the new packages are:

21 Aug 2014 3:12am GMT

20 Aug 2014

Planet Arch Linux

How I lost my blog content...

...and, luckily, how I restored it!

Let me say this before you start reading: back up your data NOW!!!

Really, do it. I postponed this for so long and, as a result, I had a dramatic weekend.

Last Friday I had the wonderful idea to update my Ghost setup to the newer 0.5. I did this from my summer house via SSH, but the network isn't the culprit here.

You have to know that some months ago, maybe more, I switched from a package installation, through this PKGBUILD, to an installation via npm. So, as soon as I typed npm update, all my node_modules/ghost content was gone. Yep, I must be dumb.

After a few minutes, which helped me better understand the situation, I immediately shut down the BeagleBone Black.

The day after, I went home and installed Arch Linux ARM on a microSD, along with (obviously) the super TestDisk, which has had SQLite support for a while now. Cool!

This way I restored the Ghost database, BUT it was corrupted. However, a StackOverflow search pointed me to this command, which dumps what it can from the broken database, strips the trailing ROLLBACK that sqlite3 emits when it hits the corruption, appends a COMMIT, and replays the dump into a fresh database:

cat <( sqlite3 ghost.db .dump | grep "^ROLLBACK" -v ) <( echo "COMMIT;" ) | sqlite3 ghost-fixed.db  

After that, I was able to open the database and to restore 14 of 40 posts.

My second attempt was to use the Google cache. Using this method I recovered about 10 posts. Nice, I already had more than 50% of the total content! I was feeling optimistic.

The Arch Linux Planet let me recover 3 more posts, which I could have recovered anyway using Bartle Doo; I had never heard of this website before, but thanks to it I recovered some posts by searching for my first and last name.

I was almost there. About 10 posts were still missing, but how to recover them? I didn't remember their titles, and googling without specific keywords didn't help either.

I went back to the broken SQLite database; Vim can open it, so I looked through it for some data. Bingo! The missing post titles were still there!

And then I started googling again, but for specific titles, which pointed me to websites mirroring my posts' content.
At the end of this step I had 38 of 40 posts!

I can't stop now; by this point it's more than a challenge.

I went back again to the broken database, where the post content is corrupted: there's some text, then symbols, and then more text that makes no sense in combination with the first part. This looks like a tedious job. This Saturday can end here.

It's Sunday; I'm motivated, and I can't lose those 2 posts to my own laziness.
I have the missing post titles and I now remember their content, so I started looking for their phrases in the database and, to my surprise and with a lot of patience, I recovered their content!
This is mainly because Ghost keeps both the Markdown and the HTML text in the database, so the post content is duplicated, which decreases the chance of the same phrase being corrupted in both copies.
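
If you ever find yourself doing the same, fishing readable fragments out of a damaged database file is as unglamorous as it sounds; roughly this (the phrase is illustrative):

# dump the printable text from the broken file and search around a phrase
strings ghost.db | grep -i -A 5 "a phrase you remember"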

Another summer, another Linux survival experience (that I'm pleased to link to!).

20 Aug 2014 6:23pm GMT

12 Aug 2014

Planet Arch Linux

Darktable: a magnificent photo manager and editor

A post about the magnificent darktable photo manager/editor and why I'm abandoning pixie
read more

12 Aug 2014 12:36pm GMT

03 Aug 2014

Planet Arch Linux

The TalkingArch August 2014 iso is out

Announcing the TalkingArch iso for August 2014. This month's snapshot features Linux kernel 3.15.8, and fixes the problem that was reported last month where the pick-a-card script wasn't working correctly. Get it now from the usual place. Share and enjoy, and of course, keep those torrents seeding :-).

03 Aug 2014 8:08pm GMT

01 Aug 2014

Planet Arch Linux

pass{,word} manager

After posting last week about KeePassC as a password manager, a couple of people immediately commented about a utility billed as "the standard Unix password manager." This is definitely one of the reasons I continue to write up my experiences with free and open source software: as soon as you think that you have learned something, someone will either offer a correction or encourage you to explore something else that is similar, related or interesting for some other tangential reason.

So, I was off down that path… Called simply pass, it is a 600-line bash script that uses GPG encryption and some other standard tools and scripts to organize and manage your password files. I had never heard of it but, based on Cayetano and Bigby's recommendations, I thought it would be worth a look.

One of the reasons that I had not come across it before was that, after using KeePassX for so long, I had assumed that I would need to continue to use that database format; so when I was looking for an alternative, KeePassC was a natural fit (and a fine application). The question of migrating my data hadn't even occurred to me…

It turns out that the migration process to pass is extraordinarily well catered for: there are 10 migration scripts for a range of different formats, including keepassx2pass.py, which takes the exported XML KeePassX database file and creates your pass files, ordered by the schema you had used in that application. You just need to make sure you amend the shebang to python2 before running the script, otherwise it will fail with an unhelpful error message.
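
The migration run itself boils down to a couple of commands (file names are illustrative):

sed -i '1s/python$/python2/' keepassx2pass.py   # amend the shebang first
./keepassx2pass.py keepassx-export.xml          # reads the XML dump, writes your pass files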

After using KeePassX to dump my database, before I could use the script to create my pass directories, I had to export the PASSWORD_STORE_DIR environment variable to place the top level pass directory in an alternate location. This way, instead of initializing a git repository, I could have the store synced by Syncthing. The git idea is a good one, but I'm not particularly interested in version controlling these directories, and I have no intention, encrypted or not, of pushing them to someone else's server.
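
The relocation itself is a one-liner (the path and key id are illustrative):

export PASSWORD_STORE_DIR="$HOME/sync/password-store"
pass init "YOUR-GPG-KEY-ID"   # initialise the store against your GPG key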

That constitutes the basic setup. It took a grand total of five minutes. The real strength of pass, however, is in its integration with two other fantastic tools: keychain and dmenu. Together with pass, these constitute a secure, convenient and effortless workflow for managing your passwords. With your GPG key loaded into keychain, you are only prompted for your master passphrase once1, and with Chris Down's excellent passmenu script, you can use dmenu to sort through your password files, Tab complete the one you are looking for and have it copied to your clipboard with a couple of keystrokes.
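
The keychain half of that is a single line in your shell profile (the key id is illustrative):

# start the agent once per login so pass (and Mutt) stop prompting
eval $(keychain --eval --quiet --agents gpg 0xDEADBEEF)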

After using Chris' script for a couple of days, I made a few alterations to suit my setup: removed the xdotool stuff (as I don't need it), included dmenu formatting options to match my dwm statusbar and, most significantly, changed the way that the files are printed in dmenu to remove the visual clutter of the parent directories, i.e., print archwiki as opposed to internet/archwiki:

dpass

#!/usr/bin/env bash
# based on: https://github.com/cdown/passmenu

shopt -s nullglob globstar

# dmenu colours and font, matched to the dwm statusbar
nb='#121212'
nf='#696969'
sb='#121212'
sf='#914E89'
font="Dejavu Sans Mono:medium:size=7.5"
dmenucmd=( dmenu -i -fn "$font" -nb "$nb" -nf "$nf" -sb "$sb" -sf "$sf" )

# collect the password files, then strip the store prefix and .gpg suffix;
# fbase additionally drops the parent directories for a cleaner dmenu list
prefix=${PASSWORD_STORE_DIR:-~/.password-store}
files=( "$prefix"/**/*.gpg )
files=( "${files[@]#"$prefix"/}" )
files=( "${files[@]%.gpg}" )
fbase=( "${files[@]##*/}" )

word=$(printf '%s\n' "${fbase[@]}" | "${dmenucmd[@]}" "$@")

# map the selected basename back to its full path and copy the password
if [[ -n $word ]]; then
  for match in "${files[@]}"; do
    if [[ $word == ${match#*/} ]]; then
      /usr/bin/pass show -c "$match" 2>/dev/null
    fi
  done
fi

It does introduce some more complexity into the script, but it makes it a lot easier for me to identify the desired password when reading it in dmenu.

Now, when I need to enter a password, I hit my dmenu hotkey, type dpass, Enter, then the first couple of letters of the desired password filename, Tab, Enter, and the password is loaded and ready to go. There are also completion scripts for the main shells, and even one for fish2 for the iconoclasts…

While I have no complaints at all with KeePassC, I have found this pass setup to be a lot less intrusive to use, it seamlessly integrates with my workflow, and the passwords themselves are much simpler to manage. Short of someone else popping up in the comments with another compelling proposition, I'm content with the way this has worked out. Many thanks to Cayetano Santos and Bigby James for the push.

Notes

  1. There is a very annoying bug open for keychain that means if, as I do, you start keychain from your $HOME/.profile or $ZDOTDIR/.zprofile you will need to enter the same passphrase to unlock a sub-key before you can use pass (the same thing applies to Mutt). This gets really ugly if you attempt to use dmenu before unlocking your key…
  2. Finally, a command line shell for the 90s… Indeed.

Creative Commons image by Intel Free Press on Flickr.

01 Aug 2014 9:28pm GMT

python-zarafa monthly update July

It's been almost a month since my previous post on the changes in python-zarafa. This month we continued adding new features; the following git command shows the changes made since the last post.

[jelle@P9][~/projects/python-zarafa]% git log --since "JUN 29 2014" --until "AUG 1 2014" --pretty=format:"%h %ar : %s"
20a391b 2 days ago : fix partial rename in class Property
35e5c3e 6 days ago : - address pylint warnings - add Server.remove_user - Server.{get_store, get_user, get_company} now return None instead of throwing an exception - added Folder.state by merging with Server.state - rename mapifolder etc. to mapiobj
842b98d 11 days ago : Update README.md
75f9220 2 weeks ago : properties => prop rename
219351f 3 weeks ago : - Add support for unicode user names - Add support for unicode for Server.create_use - Let Store.folders return nothing if there is no IPM Subtree
746949c 3 weeks ago : - fix some issues with unicode company names - recreate exception for single-tenant setup - improve {Server, Company}.create_user
5820b83 3 weeks ago : add example code for handling tables
816982f 3 weeks ago : Fix shebang
6582594 3 weeks ago : - expose associated - reverse-sort Folder.items() on received date by default - added more folder-level tables
e3d7e4f 4 weeks ago : zarafa-stats.py: zarafa-stats in Python except --top support
0598914 4 weeks ago : Remove property tag since the generator accepts arguments
984d58d 4 weeks ago : - new class Table for MAPI tables - refactor delete - Item.tables() for recipeints and attachments - Rename property\_ and properties to prop and props

In the following sections we walk through the main new features in python-zarafa.

Table support

In python-zarafa we added a Table class which abstracts MAPI tables. It provides a few methods which make it easier to display a MAPI table in various formats, for example CSV.

for item in zarafa.Server().user('user').store.inbox:
        print item.table(PR_MESSAGE_RECIPIENTS, columns=[PR_EMAIL_ADDRESS, PR_ENTRYID]).csv(delimiter=';')
        print item.table(PR_MESSAGE_ATTACHMENTS).text()
        print item
        for table in item.tables():
                print table
                print table.csv()

Address class

The new Address class represents the sender and recipient of a MAPI message.

item = zarafa.Server().user('user').store.inbox.items().next()
print 'from:', item.sender.name, item.sender.email
for r in item.recipients():
        print 'rec:', r.name, r.email

Which prints:

from: john@localhost john@localhost
rec: jaap@localhost jaap@localhost

Associated folder support

An associated folder in MAPI is a "hidden" table of a folder, which is usually used to store configuration messages, for example quota information. In the Zarafa-Inspector this functionality is used to look into these MAPI objects. You can access the associated folder by calling the associated method on a folder.

associated = zarafa.Server().user('user').store.inbox.associated

User creation/removal

The API now also supports the addition and removal of users, which is as simple as the code example below.

server = zarafa.Server()
server.create_user('cowboy', fullname='cowboy bebop')
server.remove_user('cowboy')

Entryid access

Previously it wasn't possible to use a MAPI object's entryid to access an item directly: we had to loop through the entire inbox to reach a particular item. We can now directly access the MAPI item if we know its entryid, as you can see in the example below.

user = zarafa.Server().user('user')
entryid = user.store.inbox.items().next().entryid
print 'entryid ', entryid
# Access via store
print 'store   ', user.store.item(entryid).entryid
# Access via folder
print 'inbox   ', user.store.inbox.item(entryid).entryid

Example output

entryid  00000000C80AB3E59F3E420D984664AF5049F1A401000000050000002496BD8A547C46B881BFAC8E9392019700000000
store    00000000C80AB3E59F3E420D984664AF5049F1A401000000050000002496BD8A547C46B881BFAC8E9392019700000000
inbox    00000000C80AB3E59F3E420D984664AF5049F1A401000000050000002496BD8A547C46B881BFAC8E9392019700000000

Those were all the main new features; there are also numerous other small changes which I didn't discuss.

python-zarafa monthly update July was originally published by Jelle van der Waa at Jelly's Blog on August 01, 2014.

01 Aug 2014 8:00pm GMT

A Month of RTL-SDR

Day 10 campaign report

01 Aug 2014 1:18pm GMT

28 Jul 2014

Planet Arch Linux

xorg-server 1.16 is now available

Laurent Carlier wrote:

The new version comes with the following changes:

28 Jul 2014 9:39pm GMT

27 Jul 2014

Planet Arch Linux

Rtl Power

Making pretty pictures with the CLI-junky's waterfall

27 Jul 2014 8:09am GMT

26 Jul 2014

Planet Arch Linux

Beautiful Go patterns for concurrent access to shared resources and coordinating responses

It's a pretty common thing in backend Go programs to have multiple goroutines concurrently needing to modify a shared resource, and needing a response that tells them whether the operation succeeded and/or other auxiliary information. Something centralized manages the shared state, the changes to it, and the responses.


read more

26 Jul 2014 5:22pm GMT