11 Dec 2014

feedPlanet Arch Linux

ca-certificates update

The way local CA certificates are handled has changed. If you have added any locally trusted certificates:

  1. Move /usr/local/share/ca-certificates/*.crt to /etc/ca-certificates/trust-source/anchors/
  2. Do the same with all manually-added /etc/ssl/certs/*.pem files and rename them to *.crt
  3. Instead of update-ca-certificates, run trust extract-compat

Also see man 8 update-ca-trust and trust --help.
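As a minimal sketch of those three steps, assuming a single locally added certificate (the file name example-corp.crt below is only a placeholder; use your own file names), the migration would look something like this, run as root:

# mv /usr/local/share/ca-certificates/example-corp.crt /etc/ca-certificates/trust-source/anchors/
# mv /etc/ssl/certs/example-corp.pem /etc/ca-certificates/trust-source/anchors/example-corp.crt
# trust extract-compat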

11 Dec 2014 12:05pm GMT

10 Dec 2014

feedPlanet Arch Linux

The ultimate 2014 release of TalkingArch is now live

It took some time to get this one out the door, but here it is, TalkingArch for December 2014, the last of the year. Notable changes include Linux kernel 3.17.4, as well as changes in the boot loader that cause the intel-ucode package to load properly at boot time. This snapshot also includes the latest [...]

10 Dec 2014 12:37am GMT

08 Dec 2014

feedPlanet Arch Linux

GnuPG-2.1 and the pacman keyring

The upgrade to gnupg-2.1 ported the pacman keyring to a new upstream format but in the process rendered the local master key unable to sign other keys. This is only an issue if you ever intend to customize your pacman keyring. We nevertheless recommend all users fix this by generating a fresh keyring.

In addition, we recommend installing haveged, a daemon that generates system entropy; this speeds up critical operations in cryptographic programs such as gnupg (including the generation of new keyrings).

To do all the above, run as root:

pacman -Syu haveged
systemctl start haveged
systemctl enable haveged

rm -fr /etc/pacman.d/gnupg
pacman-key --init
pacman-key --populate archlinux
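As an optional sanity check afterwards (a hedged suggestion, not part of the announcement above), listing the keyring should show the freshly generated local master key alongside the Arch Linux packager keys:

# pacman-key --list-keys | less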

08 Dec 2014 2:13am GMT

06 Dec 2014

feedPlanet Arch Linux

IT-Telemetry Google group. Trying to foster more collaboration around operational insights.

The discipline of collecting infrastructure & application performance metrics, and of aggregating, storing, visualizing and alerting on them, goes by many names: telemetry, insights engineering, operational visibility. I've seen a bunch of people present their work on advancing the state of the art in this domain:
from Anton Lebedevich's statistics for monitoring series, Toufic Boubez's talks on anomaly detection and Twitter's work on detecting mean shifts, to projects such as flapjack (which aims to offload the alerting responsibility from your monitoring apps), the metrics 2.0 standardization effort, or Etsy's Kale stack, which tries to bring interesting changes in timeseries to your attention with minimal configuration.

Much of this work is shared via conference talks and blog posts, especially around anomaly and fault detection, but I couldn't find a venue for collaboration, quicker feedback and discussion of more abstract (algorithmic/mathematical) topics, or of topics that cross project boundaries. So I created the IT-telemetry Google group. If I missed an existing venue, let me know; I can shut this down and point to whatever already exists. Either way, I hope this kind of avenue proves useful to people working on these kinds of problems.

06 Dec 2014 9:01pm GMT

24 Oct 2014

feedPlanet Arch Linux

Pruning Tarsnap Archives

I started using Tarsnap about three years ago and I have been nothing but impressed with it since. It is simple to use, extremely cost effective and, more than once, it has saved me from myself, making it easy to retrieve copies of files that I have inadvertently overwritten or otherwise done stupid things with.[1] When I first posted about it, I included a simple wrapper script, which has held up pretty well over that time.

What became apparent over the last couple of months, as I began to consciously make more regular backups, was that pruning the archives was a relatively tedious business. Given that Tarsnap de-duplicates data, there isn't much mileage in keeping older archives around: if you do have to retrieve a file, you don't want to have to search through a large number of archives to find it. So there is a balance between making use of Tarsnap's efficient functionality and not creating a rod for your back if your use case is occasionally retrieving single files, or small groups of files, rather than large dumps.

I have settled on keeping five to seven archives, depending on the frequency of my backups, which is somewhere around two to three times a week. Pruning these archives was becoming tedious, so I wrote a simple script to make it less onerous. Essentially, it writes a list of all the archives to a temporary file, runs sort(1) to order them from oldest to newest, and then deletes the oldest archives, leaving only as many as the keep variable specifies.

The bulk of the code is simple enough:

snapclean

# generate list
tarsnap --list-archives > "$tmpfile"

# sort by descending date, format is: host-ddmmyy_hh:mm
{
  rm "$tmpfile" && sort -k 1.11,1.10 -k 1.8,1.9 -k 1.7,1.6 > "$tmpfile"
} < "$tmpfile"

# populate the list
mapfile -t archives < "$tmpfile"

# print the full list
printf "%s\n%s\n" "${cyn}Current archives${end}:" "${archives[@]#*-}"

# identify oldest archives
remove=$(( ${#archives[@]} - keep ))
targets=( $(head -n "$remove" "$tmpfile") )

# if there is at least one to remove
if (( ${#targets[@]} >= 1 )); then
  printf "%s\n" "${red}Archives to delete${end}:"
  printf "%s\n" "${targets[@]#*-}"

  read -p "Proceed with deletion? [${red}Y${end}/N] " YN

  if [[ $YN == Y ]]; then
    for archive in "${targets[@]}"; do
      tarsnap -d --no-print-stats -f "$archive"
    done && printf "%s\n" "${yel}Archives successfully deleted...${end}"

    printf "\n%s\n" "${cyn}Remaining archives:${end}"
    tarsnap --list-archives
  else
    printf "%s\n" "${yel}Operation aborted${end}"
  fi
else
  printf "%s\n" "Nothing to do"
  exit 0
fi

You can see the rest of the script in my bitbucket repo. It even comes with colour.

Once every couple of weeks, I run the script, review the archives marked for deletion and then blow them away. Easy. If you aren't using Tarsnap, you really should check it out; it is an excellent service and, for an almost ridiculously small investment, you get rock solid, encrypted peace of mind. Why would you not do that?

Coda

This is the one hundredth post on this blog: a milestone that I never envisaged getting anywhere near. Looking back through the posts, nearly 60,000 words worth, there are a couple there that continue to draw traffic and are obviously seen at some level as helpful. There are also quite a few that qualify as "filler", but blogging is a discipline like any other and sometimes you just have to push something up to keep the rhythm going. In any event, this is a roundabout way of saying that, for a variety of reasons both personal and professional, I am no longer able to fulfil my own expectations of regularly pushing these posts out.

I will endeavour, from time to time when I find something that I genuinely think is worth sharing, to make the effort to write about it, but I can't see that happening all that often. I'd like to thank all the people that have read these posts, especially those of you that have commented. With every post, I always looked forward to people telling me where I got something wrong, or how I could have approached a problem differently or more effectively;[2] I learned a lot from these pointers and I am grateful to the people that were generous enough to share them.

Notes

  1. The frequency with which this happens is, admittedly, low, but not low enough to confidently abandon a service like this…
  2. Leaving a complimentary note is just as welcome, don't get me wrong…

24 Oct 2014 8:38pm GMT

22 Oct 2014

feedPlanet Arch Linux

SysV init on Arch Linux, and Debian

Arch Linux distributes systemd as its init daemon and deprecated SysV init in June 2013. Debian is doing the same now, and we see panic and terror sweep through that community, especially since this time thousands of my sysadmin colleagues are affected. But as with Arch Linux, we are witnessing irrational behavior, loud protests all the way to the BSD camp, and public threats of forking Debian. Yet all that is needed, and let's face it much simpler to achieve, is to organize a specialized user group interested in keeping SysV (or your alternative) usable in your favorite GNU/Linux distribution, with members that support one another, exactly as I wrote back then about Arch Linux.

Unfortunately I'm not aware of any such group forming in the Arch Linux community around sysvinit, and I've been running SysV init alone as my PID 1 since then. It has not been a big deal, but I don't always have the time or the willpower to break my personal systems after a 60-hour work week, and the real problems are yet to come anyway, for example if (when) udev stops working without systemd as PID 1. If you had a support group, especially one with a few coding gurus among you, chances are they would solve a difficult problem first most of the time, and everyone would benefit. On other occasions an enthusiastic user would solve it first, saving the gurus from a lousy weekend.

For anyone else left standing in the cheapest part of the stadium, like me, maybe uselessd as a drop-in replacement is the way to go once major subsystems stop working in our favorite GNU/Linux distributions. I personally like what they reduced systemd to (inspired by the suckless.org philosophy?), but chances are that without support the project will end within two years, and we would be back here duct-taping in isolation.

22 Oct 2014 9:51pm GMT

Changes to Intel microcode updates

Microcode on Intel CPUs is no longer loaded automatically, as it needs to be loaded very early in the boot process. This requires adjustments in the bootloader. If you have an Intel CPU, please follow the instructions in the wiki.
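As a rough illustration of the kind of adjustment involved (the wiki remains the authoritative reference for each boot loader; the paths and root device here are placeholders), a gummiboot entry would list the microcode image as an extra initrd ahead of the regular initramfs:

# /boot/loader/entries/arch.conf (illustrative example)
title   Arch Linux
linux   /vmlinuz-linux
initrd  /intel-ucode.img
initrd  /initramfs-linux.img
options root=/dev/sda2 rw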

22 Oct 2014 9:29pm GMT

18 Oct 2014

feedPlanet Arch Linux

Rtl Power

Basic scripting.

18 Oct 2014 2:49pm GMT

12 Oct 2014

feedPlanet Arch Linux

Java users: manual intervention required before upgrade

To circumvent a conflicting files issue, manual intervention is required only if package java-common is installed. This can be checked with the following command:

$ pacman -Q java-common
java-common ...

If so, please run the following prior to upgrading:

# archlinux-java unset
# pacman -Sydd --asdeps java-runtime-common
:: java-runtime-common and java-common are in conflict. Remove java-common? [y/N] y
# archlinux-java fix

You can then go ahead and upgrade:

# pacman -Su

Please note that the new package java-runtime-common neither uses nor supports forcing JAVA_HOME as the former package java-common did. See the Java wiki page for more info.
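Instead of forcing JAVA_HOME, the default Java environment is selected with the archlinux-java helper. A brief sketch (java-7-openjdk is only an example environment name; use whatever archlinux-java status reports as installed on your system):

# archlinux-java status
# archlinux-java set java-7-openjdk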

12 Oct 2014 7:46pm GMT

06 Oct 2014

feedPlanet Arch Linux

python-zarafa monthly update October

It's been two months since I last wrote about a python-zarafa update. It has been a busy time lately and there haven't been that many changes to the API. In this post I will describe the new features we added.

The changes in Git were as follows.

3236e26 2 days ago : Add User.admin which checks if user is admin
34e3e54 2 days ago : Add try/except to last_logon/last_logof since a new user has no logon property
dc3e916 2 days ago : Add Store.last_login/last_logoff
bb3327b 2 days ago : dictionary comprehension is py2.7 only
a3443ab 5 days ago : - lint fixes - table.dict_rows() make an exception for PR_EC_STATSTABLE_SYSTEM New folder related functions: - folder.copy - folder.delete - folder.movie - folder.submit
9d982c7 8 days ago : Add z-barplot.py to the readme
31a4c23 8 days ago : Merge branch 'master' of github.com:zarafagroupware/python-zarafa
dc31ea6 8 days ago : Remove whitespace
a0e54ba 8 days ago : Add barplot stastics script
2c3df17 2 weeks ago : Merge pull request #9 from fbartels/patch-1
33e45fe 2 weeks ago : force default encoding
bd71906 3 weeks ago : Add documentation url
703ba1d 3 weeks ago : Add description for zarafa-spamhandler
82d73f0 3 weeks ago : Merge branch 'master' of github.com:zarafagroupware/python-zarafa
7e216f4 3 weeks ago : - added sort option to z-plot - add README section for z-plot
75935eb 3 weeks ago : Simple program that uses matplotlib to plot graphs about user data usage
22a24e1 5 weeks ago : Update delete_olditems.py
fad089a 5 weeks ago : Add more MAPI defined folders to Store class
351062e 5 weeks ago : Do not look at admin.cfg if ZARAFA_SOCKET is defined
ff83d14 5 weeks ago : Add auth_user and auth_pass options to zarafa.Server()
3beb326 5 weeks ago : Merge branch 'master' of github.com:zarafagroupware/python-zarafa
4fc55cc 6 weeks ago : - Store.folders(mail=True) to only generate folders which contain mail - fix Item.body bug - Add folder property to Item class
0f1d23d 6 weeks ago : Merge branch 'master' of github.com:zarafagroupware/python-zarafa
974dca8 6 weeks ago : Tabs to spaces
5606cf5 6 weeks ago : Add initial release of zarafa-spamhandler
5509cda 7 weeks ago : Fix remote=False to actually show results
d625941 7 weeks ago : Add list_sentitems.py
1440ee4 7 weeks ago : Add outbox
1efa35e 7 weeks ago : Add password authentication option
35826f1 7 weeks ago : Remove print
90328b1 7 weeks ago : Merge branch 'master' of github.com:zarafagroupware/python-zarafa
df0dbdb 7 weeks ago : Add description for monitor,tables and zarfa-stats.py
a3229ab 7 weeks ago : Redo command line arguments, add -U and -P for user session support
ac1c13b 8 weeks ago : update README
ab5ed57 8 weeks ago : Bump version to 0.1
daba691 8 weeks ago : Rename doc to docs
a482a3c 8 weeks ago : - Add verbose flag - Rename spam to junk
cb54d78 8 weeks ago : Make code a bit more fancy...
4273d2f 8 weeks ago : Actually restrict to folder given as parameter
6bf66ef 8 weeks ago : Addition of the spam property.
8bd5778 8 weeks ago : Rename email: to item:
d3a4c2e 8 weeks ago : Add fix for items without received time
128907b 8 weeks ago : Rename examples to scripts - Add several new examples replacing the examples.py
329aca8 9 weeks ago : Add README.md
4d186c3 9 weeks ago : Add delete_olditems.py: Allows a system administrator to delete old email for a user. e.g. python delete_olditems.py -u username 30 will delete all mails older than 30 days based on the receive date of the emails.
c6e977a 9 weeks ago : - fix subfolder bug in Folder.folders - make Folder.create_folder return a mapi folder - add store.folder(key)
a790523 9 weeks ago : fix documentation chapter
1ec3062 9 weeks ago : add sphinx documentation part
63d69f7 9 weeks ago : add sphinx Makefile to create documentation
eb303e6 9 weeks ago : - Add docstrings for classes - Add Body class, which can provide an html or plaintext representation of a MAPI message body - add folder.item(entryid)

Sphinx documentation

We've added online Sphinx documentation, which should make it easier to hack on Python-Zarafa. The documentation is still a work in progress, so expect missing parts to be filled in and further improvements.

User session support

The API initially supported only a SYSTEM session, but now it also supports user sessions. A user session can be selected with the command line switches -U (username) and -P (password). In a Python program, user session support looks as follows.

import zarafa

server = zarafa.Server(auth_user='bob',auth_pass='test')

# Bob's inbox Folder
inbox = server.users().next().store.inbox

server.users() still returns a generator, but it now contains only one User object.

New properties

The User, Store, Folder, etc. classes now expose more MAPI properties directly, so you don't need to look up the MAPI property definitions yourself.

import zarafa

server = zarafa.Server(auth_user='bob',auth_pass='test')
bob = server.users().next()
store = bob.store
print "last logoff:", store.last_logoff

# More Store folder properties
print store.sentmail
print store.junk

# User.admin to verify if a user is an admin
print "Is bob admin? ", bob.admin

Which prints:

last logoff: 2014-10-04 19:44:39
Folder(Sent Items)
Folder(Junk E-mail)
Is bob admin? False

Folder related functions

There are three new Folder related functions, Folder.copy(), Folder.move() and Folder.delete().

import zarafa
store = zarafa.Server(auth_user='bob',auth_pass='test').users().next().store
print list(store.inbox.items())
# Copy all items to the junk folder
store.inbox.copy(store.inbox.items(),store.junk)
print list(store.junk.items())
# Move all items to the junk folder
store.inbox.move(store.inbox.items(),store.junk)
print list(store.inbox.items())
print list(store.junk.items())
# Empty junk
store.junk.delete(store.junk.items())

Which prints:

# Inbox
[Item(Undelivered Mail Returned to Sender)]
# Junk
[Item(Undelivered Mail Returned to Sender)]
# Inbox
[]
# Junk
[Item(Undelivered Mail Returned to Sender), Item(Undelivered Mail Returned to Sender)]
# Junk
[]

These functions combined with ICS can for example create a client-side rule.

import zarafa
import time

class importer:
    def __init__(self, folder, target):
        self.folder = folder
        self.target = target

    def update(self, item, flags):
        if 'spam' in item.subject:
            print 'trashing..', item
            self.folder.move(item, self.target)

    def delete(self, item, flags):
        pass

server = zarafa.Server()
store = server.user(server.options.auth_user).store
inbox, junk = store.inbox, store.junk

state = inbox.state
while True:
    state = inbox.sync(importer(inbox, junk), state)
    time.sleep(1)

These were the most interesting changes in Python-Zarafa from 1 August to 6 October.

python-zarafa monthly update October was originally published by Jelle van der Waa at Jelly's Blog on October 06, 2014.

06 Oct 2014 6:00pm GMT

05 Oct 2014

feedPlanet Arch Linux

nvidia-340xx and nvidia

As NVIDIA dropped support for G8x, G9x, and GT2xx GPUs with the release of 343.22, there is now a set of nvidia-340xx packages supporting those older GPUs. According to NVIDIA, the 340xx branch will receive support until the end of 2019.

Users of older GPUs should consider switching to nvidia-340xx. The nvidia-343.22 and nvidia-340xx-340.46 packages will be in testing for a few days.

05 Oct 2014 5:36am GMT

04 Oct 2014

feedPlanet Arch Linux

October 2014: a new month, a new iso and a new home

The TalkingArch team is pleased to announce the availability of the newest TalkingArch iso for October 2014. This iso includes the latest base packages, including Linux kernel 3.16.3. It can be downloaded via BitTorrent or HTTP from the TalkingArch download page. In other news, TalkingArch, its torrent tracker and part of its supporting IRC network [...]

04 Oct 2014 3:44am GMT

02 Oct 2014

feedPlanet Arch Linux

mesa updated to 10.3.0

mesa is now available with some packaging changes:

02 Oct 2014 12:44pm GMT

30 Sep 2014

feedPlanet Arch Linux

A real whisper-to-InfluxDB program.

The whisper-to-influxdb migration script I posted earlier is pretty bad: a shell script, without concurrency, and with an undiagnosed performance issue. I hinted that one could write a Go program using the unofficial whisper-go bindings and the influxdb Go client library. That's what I have done now; it's at github.com/vimeo/whisper-to-influxdb. It uses configurable numbers of workers for both whisper fetches and InfluxDB commits, but it's still a bit naive in the sense that it commits to InfluxDB one series at a time, irrespective of how many records are in it. My series, and hence my commits, have at most 60k records, and presumably InfluxDB could handle a lot more per commit, so we might leverage better batching later. Either way, this way I can consistently commit about 100k series every 2.5 hours (or about 10/s), where each series has a few thousand points on average, with peaks up to 60k points. I usually play with 1 to 30 InfluxDB workers. Even though I've hit a few InfluxDB issues, this tool has enabled me to fill in gaps after outages and to do a restore from whisper after a complete database wipe.

30 Sep 2014 12:37pm GMT

28 Sep 2014

feedPlanet Arch Linux

Shellshock and Arch Linux

I'm guessing most people have heard about the security issue that was discovered in bash earlier in the week, which has been nicknamed Shellshock. Most of the details are covered elsewhere, so I thought I would post a little about [...]

28 Sep 2014 5:08am GMT

26 Sep 2014

feedPlanet Arch Linux

Mailinglists maintenance

Starting at 14:30 UTC today, I'll move our mailing lists to a new server. Expected downtime is 2 hours.

This post will be updated once the move is done.

Update: Migration complete. Sorry for the couple of mails with the wrong List-Id; I've corrected that now. If you experience any other problems with the mailing lists, please feel free to send me an email (postmaster@archlinux.org) or ping me on IRC.

26 Sep 2014 2:03pm GMT