24 Oct 2014

Pruning Tarsnap Archives

I started using Tarsnap about three years ago and I have been nothing but impressed with it since. It is simple to use, extremely cost-effective and, more than once, it has saved me from myself, making it easy to retrieve copies of files that I have inadvertently overwritten or otherwise done stupid things with [1]. When I first posted about it, I included a simple wrapper script, which has held up pretty well over that time.

What became apparent over the last couple of months, as I began to consciously make more regular backups, was that pruning the archives was a relatively tedious business. Given that Tarsnap de-duplicates data, keeping older archives around costs very little, but there isn't much mileage in doing so: if you do have to retrieve a file, you don't want to have to search through a large number of archives to find it. So there is a balance between making use of Tarsnap's efficient functionality and not creating a rod for your own back, if your use case is occasionally retrieving single files, or small groups of files, rather than large dumps.

I have settled on keeping five to seven archives, depending on the frequency of my backups, which is somewhere around two to three times a week. Pruning these archives was becoming tedious, so I wrote a simple script to make it less onerous. Essentially, it writes a list of all the archives to a tmpfile, runs sort(1) to order them from oldest to newest, and then deletes the oldest ones, keeping however many the keep variable is set to.

The bulk of the code is simple enough:

snapclean

# generate list
tarsnap --list-archives > "$tmpfile"

# sort by descending date, format is: host-ddmmyy_hh:mm
{
  rm "$tmpfile" && sort -k 1.11,1.10 -k 1.8,1.9 -k 1.7,1.6 > "$tmpfile"
} < "$tmpfile"

# populate the list
mapfile -t archives < "$tmpfile"

# print the full list
printf "%s\n%s\n" "${cyn}Current archives${end}:" "${archives[@]#*-}"

# identify oldest archives
remove=$(( ${#archives[@]} - keep ))
targets=( $(head -n "$remove" "$tmpfile") )

# if there is at least one to remove
if (( ${#targets[@]} >= 1 )); then
  printf "%s\n" "${red}Archives to delete${end}:"
  printf "%s\n" "${targets[@]#*-}"

  read -p "Proceed with deletion? [${red}Y${end}/N] " YN

  if [[ $YN == Y ]]; then
    for archive in "${targets[@]}"; do
      tarsnap -d --no-print-stats -f "$archive"
    done && printf "%s\n" "${yel}Archives successfully deleted...${end}"

    printf "\n%s\n" "${cyn}Remaining archives:${end}"
    tarsnap --list-archives
  else
    printf "%s\n" "${yel}Operation aborted${end}"
  fi
else
  printf "%s\n" "Nothing to do"
  exit 0
fi

You can see the rest of the script in my bitbucket repo. It even comes with colour.
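The snippet relies on a handful of variables that are set up in the rest of the script: $tmpfile, keep and the colour codes. As a rough sketch only (the names come from the snippet above; the actual preamble lives in the repo and may differ), that setup would look something like this:

#!/bin/bash
# illustrative preamble only; the real script in the repo may differ
keep=5                        # number of recent archives to retain
tmpfile=$(mktemp)             # scratch file for the archive list
trap 'rm -f "$tmpfile"' EXIT  # clean up the scratch file on exit

# colour escape codes used by the printf calls
red=$(tput setaf 1)
yel=$(tput setaf 3)
cyn=$(tput setaf 6)
end=$(tput sgr0)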

Once every couple of weeks, I run the script, review the archives marked for deletion and then blow them away. Easy. If you aren't using Tarsnap, you really should check it out; it is an excellent service and, for an almost ridiculously small investment, you get rock-solid, encrypted peace of mind. Why would you not do that?

Coda

This is the one hundredth post on this blog: a milestone that I never envisaged getting anywhere near. Looking back through the posts, nearly 60,000 words' worth, there are a couple that continue to draw traffic and are obviously seen at some level as helpful. There are also quite a few that qualify as "filler", but blogging is a discipline like any other and sometimes you just have to push something up to keep the rhythm going. In any event, this is a roundabout way of saying that, for a variety of reasons both personal and professional, I am no longer able to fulfil my own expectations of regularly pushing these posts out.

I will endeavour, from time to time when I find something that I genuinely think is worth sharing, to write about it, but I can't see that happening all that often. I'd like to thank all the people that have read these posts, especially those of you that have commented. With every post, I always looked forward to people telling me where I got something wrong, or how I could have approached a problem differently or more effectively [2]; I learned a lot from these pointers and I am grateful to the people that were generous enough to share them.

Notes

  1. The frequency with which this happens is, admittedly, low; but not low enough to confidently abandon a service like this…
  2. Leaving a complimentary note is just as welcome, don't get me wrong…

24 Oct 2014 8:38pm GMT

22 Oct 2014

SysV init on Arch Linux, and Debian

Arch Linux distributes systemd as its init daemon, and deprecated SysV init in June 2013. Debian is now doing the same, and we see panic and terror sweep through that community, especially since this time thousands of my sysadmin colleagues are affected. But, as with Arch Linux, we are witnessing irrational behavior, loud protests all the way to the BSD camp, and public threats of forking Debian. Yet all that is needed, and let's face it much simpler to achieve, is organizing a specialized user group interested in keeping SysV (or your alternative) usable in your favorite GNU/Linux distribution, with members that support one another, exactly as I wrote back then about Arch Linux.

Unfortunately I'm not aware of any such group forming in the Arch Linux community around sysvinit, and I've been running SysV init alone as my PID 1 since then. It was not a big deal, but I don't always have the time or the willpower to break my personal systems after a 60-hour work week, and the real problems are yet to come anyway, for example if (when) udev stops working without systemd as PID 1. If you had a support group, especially one with a few coding gurus among you, chances are that most of the time they would solve a difficult problem first, and everyone benefits. On other occasions an enthusiastic user would solve it first, saving the gurus from a lousy weekend.

For anyone else left standing at the cheapest part of the stadium, like me, maybe uselessd as a drop-in replacement is the way to go after major subsystems stop working in our favorite GNU/Linux distributions. I personally like what they have reduced systemd to (inspired by the suckless.org philosophy?), but chances are that, without support, the project ends inside 2 years, and we would be back here duct-taping in isolation.

22 Oct 2014 9:51pm GMT

Changes to Intel microcode updates

Microcode on Intel CPUs is no longer loaded automatically, as it needs to be loaded very early in the boot process. This requires adjustments to the bootloader configuration. If you have an Intel CPU, please follow the instructions in the wiki.
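For illustration only (the wiki has the authoritative steps): after installing the intel-ucode package, the microcode image sits at /boot/intel-ucode.img and has to be passed to the kernel as an additional, early initrd. With Syslinux, for example, the boot entry gains a second INITRD image along these lines; the kernel and root paths below are placeholders:

LABEL arch
    LINUX ../vmlinuz-linux
    INITRD ../intel-ucode.img,../initramfs-linux.img
    APPEND root=/dev/sda2 rw

Other bootloaders need an equivalent change, typically a separate initrd line for intel-ucode.img listed before the regular initramfs.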

22 Oct 2014 9:29pm GMT

18 Oct 2014

Rtl Power

Basic scripting.

18 Oct 2014 2:49pm GMT

12 Oct 2014

Java users: manual intervention required before upgrade

To circumvent a conflicting files issue, manual intervention is required only if package java-common is installed. This can be checked with the following command:

$ pacman -Q java-common
java-common ...

If so, please run the following prior to upgrading:

# archlinux-java unset
# pacman -Sydd --asdeps java-runtime-common
:: java-runtime-common and java-common are in conflict. Remove java-common? [y/N] y
# archlinux-java fix

You can then go ahead and upgrade:

# pacman -Su

Please note that the new package java-runtime-common neither uses nor supports forcing JAVA_HOME, as the former package java-common did. See the Java wiki page for more info.
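As a quick sanity check afterwards, archlinux-java can report and, if necessary, set the default Java environment; the environment name below is only an example and depends on which JRE/JDK packages you have installed:

# archlinux-java status
# archlinux-java set java-7-openjdk/jre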

12 Oct 2014 7:46pm GMT

06 Oct 2014

python-zarafa monthly update October

It's been two months since I last wrote about a python-zarafa update. It has been a busy time lately and there haven't been that many changes made to the API. In this post I will describe the new features we added.

The changes in Git were as follows.

3236e26 2 days ago : Add User.admin which checks if user is admin
34e3e54 2 days ago : Add try/except to last_logon/last_logof since a new user has no logon property
dc3e916 2 days ago : Add Store.last_login/last_logoff
bb3327b 2 days ago : dictionary comprehension is py2.7 only
a3443ab 5 days ago : - lint fixes - table.dict_rows() make an exception for PR_EC_STATSTABLE_SYSTEM New folder related functions: - folder.copy - folder.delete - folder.movie - folder.submit
9d982c7 8 days ago : Add z-barplot.py to the readme
31a4c23 8 days ago : Merge branch 'master' of github.com:zarafagroupware/python-zarafa
dc31ea6 8 days ago : Remove whitespace
a0e54ba 8 days ago : Add barplot stastics script
2c3df17 2 weeks ago : Merge pull request #9 from fbartels/patch-1
33e45fe 2 weeks ago : force default encoding
bd71906 3 weeks ago : Add documentation url
703ba1d 3 weeks ago : Add description for zarafa-spamhandler
82d73f0 3 weeks ago : Merge branch 'master' of github.com:zarafagroupware/python-zarafa
7e216f4 3 weeks ago : - added sort option to z-plot - add README section for z-plot
75935eb 3 weeks ago : Simple program that uses matplotlib to plot graphs about user data usage
22a24e1 5 weeks ago : Update delete_olditems.py
fad089a 5 weeks ago : Add more MAPI defined folders to Store class
351062e 5 weeks ago : Do not look at admin.cfg if ZARAFA_SOCKET is defined
ff83d14 5 weeks ago : Add auth_user and auth_pass options to zarafa.Server()
3beb326 5 weeks ago : Merge branch 'master' of github.com:zarafagroupware/python-zarafa
4fc55cc 6 weeks ago : - Store.folders(mail=True) to only generate folders which contain mail - fix Item.body bug - Add folder property to Item class
0f1d23d 6 weeks ago : Merge branch 'master' of github.com:zarafagroupware/python-zarafa
974dca8 6 weeks ago : Tabs to spaces
5606cf5 6 weeks ago : Add initial release of zarafa-spamhandler
5509cda 7 weeks ago : Fix remote=False to actually show results
d625941 7 weeks ago : Add list_sentitems.py
1440ee4 7 weeks ago : Add outbox
1efa35e 7 weeks ago : Add password authentication option
35826f1 7 weeks ago : Remove print
90328b1 7 weeks ago : Merge branch 'master' of github.com:zarafagroupware/python-zarafa
df0dbdb 7 weeks ago : Add description for monitor,tables and zarfa-stats.py
a3229ab 7 weeks ago : Redo command line arguments, add -U and -P for user session support
ac1c13b 8 weeks ago : update README
ab5ed57 8 weeks ago : Bump version to 0.1
daba691 8 weeks ago : Rename doc to docs
a482a3c 8 weeks ago : - Add verbose flag - Rename spam to junk
cb54d78 8 weeks ago : Make code a bit more fancy...
4273d2f 8 weeks ago : Actually restrict to folder given as parameter
6bf66ef 8 weeks ago : Addition of the spam property.
8bd5778 8 weeks ago : Rename email: to item:
d3a4c2e 8 weeks ago : Add fix for items without received time
128907b 8 weeks ago : Rename examples to scripts - Add several new examples replacing the examples.py
329aca8 9 weeks ago : Add README.md
4d186c3 9 weeks ago : Add delete_olditems.py: Allows a system administrator to delete old email for a user. e.g. python delete_olditems.py -u username 30 will delete all mails older than 30 days based on the receive date of the emails.
c6e977a 9 weeks ago : - fix subfolder bug in Folder.folders - make Folder.create_folder return a mapi folder - add store.folder(key)
a790523 9 weeks ago : fix documentation chapter
1ec3062 9 weeks ago : add sphinx documentation part
63d69f7 9 weeks ago : add sphinx Makefile to create documentation
eb303e6 9 weeks ago : - Add docstrings for classes - Add Body class, which can provide an html or plaintext representation of a MAPI message body - add folder.item(entryid)

Sphinx documentation

We've added online Sphinx documentation, which should make it easier to hack on Python-Zarafa. The documentation is still a work in progress, so expect us to add missing parts and make improvements.

User session support

The API initially had support for only a SYSTEM session, but now it also supports user sessions. User sessions can be used with the command line switches -U (username) and -P (password). In a Python program, user session support looks as follows.

import zarafa

server = zarafa.Server(auth_user='bob',auth_pass='test')

# Bob's inbox Folder
inbox = server.users().next().store.inbox

server.users() still returns a generator, but it now yields only one User object: the authenticated user.

New properties

The User, Store, Folder, etc. classes now expose more MAPI properties, so you don't need to look up the MAPI property definitions yourself.

import zarafa

server = zarafa.Server(auth_user='bob',auth_pass='test')
bob = server.users().next()
store = bob.store
print "last logoff:", store.last_logoff

# More store folder definitions
print store.sentmail
print store.junk

# User.admin to verify if a user is an admin
print "Is bob admin? ", bob.admin

Which prints:

last logoff: 2014-10-04 19:44:39
Folder(Sent Items)
Folder(Junk E-mail)
Is bob admin? False

Folder related functions

There are three new Folder-related functions: Folder.copy(), Folder.move() and Folder.delete().

import zarafa
store = zarafa.Server(auth_user='bob',auth_pass='test').users().next().store
print list(store.inbox.items())
# Copy all items to the junk folder
store.inbox.copy(store.inbox.items(),store.junk)
print list(store.junk.items())
# Move all items to the junk folder
store.inbox.move(store.inbox.items(),store.junk)
print list(store.inbox.items())
print list(store.junk.items())
# Empty junk
store.junk.delete(store.junk.items())
print list(store.junk.items())

Which prints:

# Inbox
[Item(Undelivered Mail Returned to Sender)]
# Junk
[Item(Undelivered Mail Returned to Sender)]
# Inbox
[]
# Junk
[Item(Undelivered Mail Returned to Sender), Item(Undelivered Mail Returned to Sender)]
# Junk
[]

Combined with ICS, these functions can, for example, implement a client-side rule.

import zarafa
import time

class importer:
    def __init__(self, folder, target):
        self.folder = folder
        self.target = target

    def update(self, item, flags):
        if 'spam' in item.subject:
            print 'trashing..', item
            self.folder.move(item, self.target)

    def delete(self, item, flags):
        pass

server = zarafa.Server()
store = server.user(server.options.auth_user).store
inbox, junk = store.inbox, store.junk

state = inbox.state
while True:
    state = inbox.sync(importer(inbox, junk), state)
    time.sleep(1)

These were the most interesting changes in Python-Zarafa from 1 August until 6 October.

python-zarafa monthly update October was originally published by Jelle van der Waa at Jelly's Blog on October 06, 2014.

06 Oct 2014 8:00pm GMT

05 Oct 2014

nvidia-340xx and nvidia

As NVIDIA dropped support for G8x, G9x, and GT2xx GPUs with the release of 343.22, there is now a set of nvidia-340xx packages supporting those older GPUs. According to NVIDIA, the 340xx series will receive support until the end of 2019.

Users of older GPUs should consider switching to nvidia-340xx. The nvidia-343.22 and nvidia-340xx-340.46 packages will be in testing for a few days.
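For anyone unsure what the switch involves, it should roughly amount to installing the legacy package and letting pacman resolve the conflict with the current nvidia package. Purely as an illustration (depending on your setup you may also need companion packages such as nvidia-340xx-utils, a name assumed here from the series naming):

# pacman -S nvidia-340xx
:: nvidia-340xx and nvidia are in conflict. Remove nvidia? [y/N] y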

05 Oct 2014 5:36am GMT

04 Oct 2014

October 2014: a new month, a new iso and a new home

The TalkingArch team is pleased to announce the availability of the newest TalkingArch iso for October 2014. This iso includes the latest base packages, including Linux kernel 3.16.3. It can be downloaded via BitTorrent or HTTP from the TalkingArch download page. In other news, TalkingArch, its torrent tracker and part of its supporting IRC network [...]

04 Oct 2014 3:44am GMT

02 Oct 2014

mesa updated to 10.3.0

mesa is now available with some packaging changes:

02 Oct 2014 12:44pm GMT

30 Sep 2014

A real whisper-to-InfluxDB program.

The whisper-to-influxdb migration script I posted earlier is pretty bad: a shell script, without concurrency, and with an undiagnosed performance issue. I hinted that one could write a Go program using the unofficial whisper-go bindings and the InfluxDB Go client library. That's what I have done now; it's at github.com/vimeo/whisper-to-influxdb. It uses configurable numbers of workers for both whisper fetches and InfluxDB commits, but it's still a bit naive in the sense that it commits to InfluxDB one series at a time, irrespective of how many records are in it. My series, and hence my commits, have at most 60k records, and presumably InfluxDB could handle a lot more per commit, so we might leverage better batching later. Either way, this way I can consistently commit about 100k series every 2.5 hours (or about 10/s), where each series has a few thousand points on average, with peaks up to 60k points. I usually run with 1 to 30 InfluxDB workers. Even though I've hit a few InfluxDB issues, this tool has enabled me to fill in gaps after outages and to do a restore from whisper after a complete database wipe.

30 Sep 2014 12:37pm GMT

28 Sep 2014

Shellshock and Arch Linux

I'm guessing most people have heard about the security issue that was discovered in bash earlier in the week, which has been nicknamed Shellshock. Most of the details are covered elsewhere, so I thought I would post a little about… Continue reading

28 Sep 2014 5:08am GMT

26 Sep 2014

Mailinglists maintenance

Starting 14:30 UTC today I'll move our mailing lists to a new server. Expected downtime is 2 hours.

This post will be updated once the move is done.

Update: Migration complete. Sorry for the couple of mails with the wrong List-Id; I've corrected that now. If you experience any other problems with the mailing lists, please feel free to send me an email (postmaster@archlinux.org) or ping me on IRC.

26 Sep 2014 2:03pm GMT

24 Sep 2014

InfluxDB as a graphite backend, part 2



Updated Oct 1, 2014 with a new "Disk space efficiency" section, which fixes some mistakes and adds more clarity.

The Graphite + InfluxDB series continues.

  • In part 1, "On Graphite, Whisper and InfluxDB" I described the problems of Graphite's whisper and ceres, why I disagree with common graphite clustering advice as being the right path forward, what a great timeseries storage system would mean to me, why InfluxDB, despite being the youngest project, is my main interest right now, and introduced my approach for combining both and leveraging their respective strengths: InfluxDB as an ingestion and storage backend (and at some point, realtime processing and pub-sub) and graphite for its renowned data processing-on-retrieval functionality. Furthermore, I introduced some tooling: carbon-relay-ng to easily route streams of carbon data (metrics datapoints) to storage backends, allowing me to send production data to Carbon+whisper as well as InfluxDB in parallel, and graphite-api, the simpler Graphite API server, with graphite-influxdb to fetch data from InfluxDB.
  • Not Graphite related, but I wrote influx-cli, which I introduced here. It allows you to easily interface with InfluxDB and measure the duration of operations, which will become useful for this article.
  • In the Graphite & Influxdb intermezzo I shared a script to import whisper data into InfluxDB and noted some write performance issues I was seeing, but the better part of the article described the various improvements done to carbon-relay-ng, which is becoming an increasingly versatile and useful tool.
  • In part 2, which you are reading now, I'm going to describe recent progress, share more info about my setup and testing results, the state of affairs, and ideas for future work.

24 Sep 2014 11:56am GMT

22 Sep 2014

An alternative way to install a pure Arch Linux installation

As we all know, Arch Linux isn't a distribution with an easy installation. And when I say "easy", I mean easy for most Linux users and maybe for new Linux users. I know we have a great Beginner's Guide in the wiki, which is awesome if you follow and understand each step. But for most users these steps are a show-stopper, and they won't even try Arch Linux, even if it's the distribution they were looking for. I think there are a lot of Linux users out there who want to try Arch but fail at the installation. Even those users should have a chance to try Arch.

Evo/Lution-AIS

A German Linux news site published an article about Evo/Lution. Evo (founded by Jeff Story) is an Arch Linux Live CD with a CLI installer. The installer reminds me of the old CLI installer we had some years ago, which was removed in 2012 because of the huge maintenance burden.

Since then you definitely have to know what you are doing during the installation of Arch Linux.

This huge gap is now closed by Evo. You are guided through the whole installation, each step is described in the installer, and even a choice of desktop environment is offered. After the installation you have a fully working Arch Linux system that boots directly into the graphical environment. You don't have to worry about commands and their parameters; that is all handled by AIS.

The project itself is very new; I have tried the Evo-AIS-0.3-RC2-64bit ISO image. I don't know in which direction this project is heading, but at the moment it's a very good alternative for installing a pure Arch Linux without the hassle of reading through the whole Beginner's Guide.

BUT one thing must be clear: after the installation you are on your own, and you definitely have to learn how Arch Linux works. If you need any help after the installation process, you can find all the information in the Arch Linux wiki or forums.

Conclusion: if you want to learn Arch Linux from scratch, use the Beginner's Guide from our wiki. If you are familiar with Linux, then use our own installation process. If you are looking for a fast way to try Arch Linux and don't want (or like) the actual installation process, then try Evo/Lution-AIS (at least the Evo-AIS-0.3-RC2 ISO image; the project may move forward and change its main focus, which at the moment is the Arch Linux installer).

Personally, I'm impressed by the installer, which works like a charm; but that's my personal opinion and not an official statement. ;-)

22 Sep 2014 7:37am GMT

20 Sep 2014

Graphite & Influxdb intermezzo: migrating old data and a more powerful carbon relay


20 Sep 2014 7:18pm GMT

19 Sep 2014

Managing rackspace products in SaltStack

How everything is set up now.

Currently we have been keeping all the individual drivers separate, controlling all the behavior so that you get the same experience no matter which provider you use. I would like to keep doing this even though we are adding stuff that is more provider-specific.

The goal will be to continue to only use the CloudClient interface, and just run everything through that so that we can write the code once, and have it available to all of the different cloud modules/runner/state/salt-cloud.

Getting Started

I want to start this one cleanly and not run into the problems that happened with the novaclient driver. Starting with salt.utils.openstack.pyrax, __init__.py should look something like this:

salt/utils/openstack/pyrax/__init__.py

    from __future__ import print_function, with_statement, generators

    try:
        import pyrax

        # import pyrax classes
        from salt.utils.openstack.pyrax.authentication import Auth

        __all__ = [ 'Auth' ]
        HAS_PYRAX = True
    except ImportError as err:
        HAS_PYRAX = False

This keeps everything importable when we just import salt.utils.openstack.pyrax, but if pyrax is not installed, it doesn't try to import it a bunch more times in all the other files.

Once this has begun, there will be the main class that gets authenticated and does the _get_conn just like all the other drivers do; but then we can just pass the authenticated object to all the new classes, instead of authenticating in each new one.

After these have been started, everything then needs to be referenced in salt.cloud.clouds.pyrax, which just uses the classes from the utils directory. This is the code I am writing right now, and once it is finished, it will make it really easy for anyone to just add another class in another file and then hook it into the salt-cloud pyrax driver.

What is next?

I have to write the authentication and servers classes, since those are what everything revolves around. Once those are done, we can start expanding and have everyone come in and start adding Cloud Things!

From the point that it is up, everything should be easy enough to run through the cloud.action module. Using the cloud.action module, I want to make one pyrax state that does all the things.

I am definitely going to get the first pyrax authentication class and stuff done this weekend, and am going to start with it in my gtmanfred/salt repo in the pyrax branch. Look for that on Saturday or Sunday this weekend.

19 Sep 2014 11:33pm GMT