22 Oct 2014

feedPlanet Arch Linux

SysV init on Arch Linux, and Debian

Arch Linux distributes systemd as its init daemon and deprecated SysV init in June 2013. Debian is doing the same now, and we see panic and terror sweep through that community, especially since this time thousands of my sysadmin colleagues are affected. But as with Arch Linux, we are witnessing irrational behavior: loud protests, threats of defecting all the way to the BSD camp, and public talk of forking Debian. Yet all that is needed, and let's face it much simpler to achieve, is organizing a specialized user group interested in keeping SysV (or your alternative) usable in your favorite GNU/Linux distribution, with members who support one another, exactly as I wrote back then about Arch Linux.

Unfortunately, I'm not aware of any such group forming in the Arch Linux community around sysvinit, and I've been running SysV init alone as my PID 1 since then. It was not a big deal, but I don't always have the time or willpower to break my personal systems after a 60-hour work week, and the real problems are yet to come anyway - if (when), for example, udev stops working without systemd as PID 1. If you had a support group, and especially one with a few coding gurus among you, chances are they would solve a difficult problem first, and everyone would benefit. On other occasions an enthusiastic user would solve it first, saving the gurus from a lousy weekend.

For anyone else left standing in the cheapest part of the stadium, like me, maybe uselessd as a drop-in replacement is the way to go once major subsystems stop working in our favorite GNU/Linux distributions. I personally like what they reduced systemd to (inspired by the suckless.org philosophy?), but chances are that without support the project will be dead within 2 years, and we would be back here duct-taping in isolation.

22 Oct 2014 9:51pm GMT

Changes to Intel microcode updates

Microcode on Intel CPUs is no longer loaded automatically, as it needs to be loaded very early in the boot process. This requires an adjustment to your bootloader configuration. If you have an Intel CPU, please follow the instructions in the wiki.
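For example, with the gummiboot boot manager this amounts to loading /boot/intel-ucode.img as an additional, first initrd in the boot entry - a sketch assuming the intel-ucode package is installed; paths vary, and other bootloaders differ (see the wiki):

/boot/loader/entries/arch.conf
title   Arch Linux
linux   /vmlinuz-linux
initrd  /intel-ucode.img
initrd  /initramfs-linux.img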

22 Oct 2014 9:29pm GMT

18 Oct 2014

feedPlanet Arch Linux

Rtl Power

Basic scripting.

18 Oct 2014 2:49pm GMT

12 Oct 2014

feedPlanet Arch Linux

Java users: manual intervention required before upgrade

To circumvent a conflicting-files issue, manual intervention is required only if the java-common package is installed. This can be checked with the following command:

$ pacman -Q java-common
java-common ...

If so, please run the following prior to upgrading:

# archlinux-java unset
# pacman -Sydd --asdeps java-runtime-common
:: java-runtime-common and java-common are in conflict. Remove java-common? [y/N] y
# archlinux-java fix

You can then go ahead and upgrade:

# pacman -Su

Please note that the new java-runtime-common package neither uses nor supports forcing JAVA_HOME, as the former java-common package did. See the Java wiki page for more info.
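After the upgrade, archlinux-java can also be used to inspect or change the default Java environment - a quick sketch (the environment name below is illustrative; use whatever archlinux-java status lists on your system):

$ archlinux-java status
# archlinux-java set java-7-openjdk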

12 Oct 2014 7:46pm GMT

06 Oct 2014

feedPlanet Arch Linux

python-zarafa monthly update October

It's been two months since I last wrote about a python-zarafa update. It has been a busy time lately, and there haven't been that many changes to the API. In this post I will describe the new features we added.

The changes in Git were as follows:

3236e26 2 days ago : Add User.admin which checks if user is admin
34e3e54 2 days ago : Add try/except to last_logon/last_logof since a new user has no logon property
dc3e916 2 days ago : Add Store.last_login/last_logoff
bb3327b 2 days ago : dictionary comprehension is py2.7 only
a3443ab 5 days ago : - lint fixes - table.dict_rows() make an exception for PR_EC_STATSTABLE_SYSTEM - New folder related functions: - folder.copy - folder.delete - folder.movie - folder.submit
9d982c7 8 days ago : Add z-barplot.py to the readme
31a4c23 8 days ago : Merge branch 'master' of github.com:zarafagroupware/python-zarafa
dc31ea6 8 days ago : Remove whitespace
a0e54ba 8 days ago : Add barplot stastics script
2c3df17 2 weeks ago : Merge pull request #9 from fbartels/patch-1
33e45fe 2 weeks ago : force default encoding
bd71906 3 weeks ago : Add documentation url
703ba1d 3 weeks ago : Add description for zarafa-spamhandler
82d73f0 3 weeks ago : Merge branch 'master' of github.com:zarafagroupware/python-zarafa
7e216f4 3 weeks ago : - added sort option to z-plot - add README section for z-plot
75935eb 3 weeks ago : Simple program that uses matplotlib to plot graphs about user data usage
22a24e1 5 weeks ago : Update delete_olditems.py
fad089a 5 weeks ago : Add more MAPI defined folders to Store class
351062e 5 weeks ago : Do not look at admin.cfg if ZARAFA_SOCKET is defined
ff83d14 5 weeks ago : Add auth_user and auth_pass options to zarafa.Server()
3beb326 5 weeks ago : Merge branch 'master' of github.com:zarafagroupware/python-zarafa
4fc55cc 6 weeks ago : - Store.folders(mail=True) to only generate folders which contain mail - fix Item.body bug - Add folder property to Item class
0f1d23d 6 weeks ago : Merge branch 'master' of github.com:zarafagroupware/python-zarafa
974dca8 6 weeks ago : Tabs to spaces
5606cf5 6 weeks ago : Add initial release of zarafa-spamhandler
5509cda 7 weeks ago : Fix remote=False to actually show results
d625941 7 weeks ago : Add list_sentitems.py
1440ee4 7 weeks ago : Add outbox
1efa35e 7 weeks ago : Add password authentication option
35826f1 7 weeks ago : Remove print
90328b1 7 weeks ago : Merge branch 'master' of github.com:zarafagroupware/python-zarafa
df0dbdb 7 weeks ago : Add description for monitor,tables and zarfa-stats.py
a3229ab 7 weeks ago : Redo command line arguments, add -U and -P for user session support
ac1c13b 8 weeks ago : update README
ab5ed57 8 weeks ago : Bump version to 0.1
daba691 8 weeks ago : Rename doc to docs
a482a3c 8 weeks ago : - Add verbose flag - Rename spam to junk
cb54d78 8 weeks ago : Make code a bit more fancy...
4273d2f 8 weeks ago : Actually restrict to folder given as parameter
6bf66ef 8 weeks ago : Addition of the spam property.
8bd5778 8 weeks ago : Rename email: to item:
d3a4c2e 8 weeks ago : Add fix for items without received time
128907b 8 weeks ago : Rename examples to scripts - Add several new examples replacing the examples.py
329aca8 9 weeks ago : Add README.md
4d186c3 9 weeks ago : Add delete_olditems.py: Allows a system administrator to delete old email for a user. e.g. python delete_olditems.py -u username 30 will delete all mails older than 30 days based on the receive date of the emails.
c6e977a 9 weeks ago : - fix subfolder bug in Folder.folders - make Folder.create_folder return a mapi folder - add store.folder(key)
a790523 9 weeks ago : fix documentation chapter
1ec3062 9 weeks ago : add sphinx documentation part
63d69f7 9 weeks ago : add sphinx Makefile to create documentation
eb303e6 9 weeks ago : - Add docstrings for classes - Add Body class, which can provide an html or plaintext representation of a MAPI message body - add folder.item(entryid)

Sphinx documentation

We've added online Sphinx documentation, which should make it easier to hack on Python-Zarafa. The documentation is still a work in progress, so expect missing parts and further improvements to be added over time.

User session support

The API had initial support for only a SYSTEM session, but now it also supports user sessions. User sessions can be used with the command-line switches -U (username) and -P (password). In a Python program, user session support looks as follows.

import zarafa

server = zarafa.Server(auth_user='bob',auth_pass='test')

# Bob's inbox Folder
inbox = server.users().next().store.inbox

server.users() still returns a generator, but with a user session it yields only a single User object.

New properties

User, Store, Folder, etc. now expose more MAPI properties directly, so you don't need to look up the MAPI property definitions yourself.

import zarafa

server = zarafa.Server(auth_user='bob',auth_pass='test')
bob = server.users().next()
store = bob.store
print "last logoff:", store.last_logoff

# More store folder definitions
print store.sentmail
print store.junk

# User.admin to verify if a user is an admin
print "Is bob admin? ", bob.admin

Which prints:

last logoff: 2014-10-04 19:44:39
Folder(Sent Items)
Folder(Junk E-mail)
Is bob admin? False

Folder related functions

There are three new Folder-related functions: Folder.copy(), Folder.move() and Folder.delete().

import zarafa
store = zarafa.Server(auth_user='bob',auth_pass='test').users().next().store
print list(store.inbox.items())
# Copy all items to the junk folder
store.inbox.copy(store.inbox.items(),store.junk)
print list(store.junk.items())
# Move all items to the junk folder
store.inbox.move(store.inbox.items(),store.junk)
print list(store.inbox.items())
print list(store.junk.items())
# Empty junk
store.junk.delete(store.junk.items())
print list(store.junk.items())

Which prints:

# Inbox
[Item(Undelivered Mail Returned to Sender)]
# Junk
[Item(Undelivered Mail Returned to Sender)]
# Inbox
[]
# Junk
[Item(Undelivered Mail Returned to Sender), Item(Undelivered Mail Returned to Sender)]
# Junk
[]

Combined with ICS (incremental change synchronization), these functions can, for example, be used to build a client-side rule:

import zarafa
import time

class importer:
    def __init__(self, folder, target):
        self.folder = folder
        self.target = target

    def update(self, item, flags):
        # called by sync() for every new or changed item
        if 'spam' in item.subject:
            print 'trashing..', item
            self.folder.move(item, self.target)

    def delete(self, item, flags):
        # called for deletions; nothing to do here
        pass

server = zarafa.Server()
store = server.user(server.options.auth_user).store
inbox, junk = store.inbox, store.junk

state = inbox.state
while True:
    # process changes since the last known state, then poll again
    state = inbox.sync(importer(inbox, junk), state)
    time.sleep(1)

These were the most interesting changes in Python-Zarafa from 1 August till 6 October.

python-zarafa monthly update October was originally published by Jelle van der Waa at Jelly's Blog on October 06, 2014.

06 Oct 2014 8:00pm GMT

05 Oct 2014

feedPlanet Arch Linux

nvidia-340xx and nvidia

As NVIDIA dropped support for G8x, G9x, and GT2xx GPUs with the release of 343.22, there is now a set of nvidia-340xx packages supporting those older GPUs. According to NVIDIA, the 340xx series will receive support until the end of 2019.

Users of older GPUs should consider switching to nvidia-340xx. The nvidia-343.22 and nvidia-340xx-340.46 packages will be in testing for a few days.
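A sketch of the switch itself (pacman should prompt to remove the conflicting nvidia package; pick the matching -utils and lib32 packages as applicable):

# pacman -S nvidia-340xx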

05 Oct 2014 5:36am GMT

04 Oct 2014

feedPlanet Arch Linux

October 2014: a new month, a new iso and a new home

The TalkingArch team is pleased to announce the availability of the newest TalkingArch iso for October 2014. This iso includes the latest base packages, including Linux kernel 3.16.3. It can be downloaded via BitTorrent or HTTP from the TalkingArch download page. In other news, TalkingArch, its torrent tracker and part of its supporting IRC network [...]

04 Oct 2014 3:44am GMT

02 Oct 2014

feedPlanet Arch Linux

mesa updated to 10.3.0

mesa 10.3.0 is now available, with some packaging changes.

02 Oct 2014 12:44pm GMT

30 Sep 2014

feedPlanet Arch Linux

A real whisper-to-InfluxDB program.

The whisper-to-influxdb migration script I posted earlier is pretty bad: a shell script, without concurrency, and with an undiagnosed performance issue. I hinted that one could write a Go program using the unofficial whisper-go bindings and the InfluxDB Go client library. That's what I've now done; it's at github.com/vimeo/whisper-to-influxdb. It uses configurable numbers of workers for both whisper fetches and InfluxDB commits, but it's still a bit naive in the sense that it commits to InfluxDB one series at a time, irrespective of how many records are in it. My series, and hence my commits, have at most 60k records, and presumably InfluxDB could handle a lot more per commit, so we might leverage better batching later. Either way, this way I can consistently commit about 100k series every 2.5 hours (or 10/s), where each series has a few thousand points on average, with peaks up to 60k points. I usually run with 1 to 30 InfluxDB workers. Even though I've hit a few InfluxDB issues, this tool has enabled me to fill in gaps after outages and to do a restore from whisper after a complete database wipe.
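For illustration, here is a minimal Python sketch of that two-stage worker-pool shape (the real tool is Go; read_whisper and influx_commit are hypothetical stand-ins, and the worker counts just mirror the numbers above):

import threading
import Queue  # Python 2, matching the other examples in this feed

def read_whisper(series):
    # hypothetical stand-in for a whisper fetch
    return [(1412000000, 0.0)]

def influx_commit(series, points):
    # hypothetical stand-in for an InfluxDB write; note it is one series
    # per commit - the naive part that better batching would improve
    pass

fetch_q = Queue.Queue()
commit_q = Queue.Queue()

def fetch_worker():
    while True:
        series = fetch_q.get()
        commit_q.put((series, read_whisper(series)))
        fetch_q.task_done()

def commit_worker():
    while True:
        series, points = commit_q.get()
        influx_commit(series, points)
        commit_q.task_done()

for _ in range(10):  # whisper fetch workers
    t = threading.Thread(target=fetch_worker)
    t.daemon = True
    t.start()
for _ in range(30):  # InfluxDB commit workers (1 to 30 in practice)
    t = threading.Thread(target=commit_worker)
    t.daemon = True
    t.start()

for series in ('servers.host1.cpu', 'servers.host1.mem'):  # illustrative names
    fetch_q.put(series)
fetch_q.join()
commit_q.join()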

30 Sep 2014 12:37pm GMT

28 Sep 2014

feedPlanet Arch Linux

Shellshock and Arch Linux

I'm guessing most people have heard about the security issue that was discovered in bash earlier in the week, which has been nicknamed Shellshock. Most of the details are covered elsewhere, so I thought I would post a little about [...] Continue reading

28 Sep 2014 5:08am GMT

26 Sep 2014

feedPlanet Arch Linux

Mailinglists maintenance

Starting at 14:30 UTC today, I'll move our mailing lists to a new server. Expected downtime is 2 hours.

This post will be updated once the move is done.

Update: Migration complete. Sorry for the couple of mails with the wrong List-Id; I've corrected that now. If you experience any other problems with the mailing lists, please feel free to send me an email (postmaster@archlinux.org) or ping me on IRC.

26 Sep 2014 2:03pm GMT

24 Sep 2014

feedPlanet Arch Linux

InfluxDB as a graphite backend, part 2



Updated Oct 1, 2014 with a new "Disk space efficiency" section, which fixes some mistakes and adds more clarity.

The Graphite + InfluxDB series continues.

  • In part 1, "On Graphite, Whisper and InfluxDB", I described the problems of Graphite's whisper and ceres, why I disagree with common graphite clustering advice as being the right path forward, what a great timeseries storage system would mean to me, why InfluxDB - despite being the youngest project - is my main interest right now, and introduced my approach for combining both and leveraging their respective strengths: InfluxDB as an ingestion and storage backend (and at some point, realtime processing and pub-sub) and graphite for its renowned data processing-on-retrieval functionality. Furthermore, I introduced some tooling: carbon-relay-ng to easily route streams of carbon data (metrics datapoints) to storage backends, allowing me to send production data to Carbon+whisper as well as InfluxDB in parallel; and graphite-api, the simpler Graphite API server, with graphite-influxdb to fetch data from InfluxDB.
  • Not Graphite related, but I wrote influx-cli, which I introduced here. It makes it easy to interface with InfluxDB and measure the duration of operations, which will become useful for this article.
  • In the Graphite & Influxdb intermezzo I shared a script to import whisper data into InfluxDB and noted some write performance issues I was seeing, but the better part of the article described the various improvements done to carbon-relay-ng, which is becoming an increasingly versatile and useful tool.
  • In part 2, which you are reading now, I'm going to describe recent progress, share more info about my setup, testing results, the state of affairs, and ideas for future work.

read more

24 Sep 2014 11:56am GMT

22 Sep 2014

feedPlanet Arch Linux

An alternative way to install a pure Arch Linux installation

As we all know, Arch Linux isn't a distribution with an easy installation - and by "easy" I mean easy for most Linux users, and maybe new Linux users. I know we have a great Beginner's Guide in the wiki, which is awesome if you follow the steps and understand each one. But for most users these steps are a show stopper, and they won't even try Arch Linux, even if it's the distribution they were looking for. I think there are a lot of Linux users out there who want to try Arch but fail at the installation. Even those users should have a chance to try Arch.

Evo/Lution-AIS

A German Linux news site published an article about Evo/Lution. Evo (founded by Jeff Story) is an Arch Linux live CD with a CLI installer. The installer reminds me of the old CLI installer we had some years ago, which was removed in 2012 because of the huge maintenance burden.

Since then you definitely have to know what you are doing during the installation of Arch Linux.

This huge gap is now closed by Evo. You are guided through the whole installation, each step is described in the installer, and even a choice of desktop environment is provided. So after the installation you have a fully working Arch Linux system that boots directly into the graphical environment. You don't have to worry about commands and their parameters; this is all done by AIS.

The project itself is very new, and I have tried the Evo-AIS-0.3-RC2-64bit ISO image. I don't know in which direction this project is going, but at the moment it's a very good alternative for installing a pure Arch Linux system without the hassle of reading through the whole Beginner's Guide.

BUT one thing must be clear: after the installation you are on your own, and you definitely have to learn how Arch Linux works. If you need any help after the installation process, you can find all the information in the Arch Linux wiki or forums.

Conclusion: If you want to learn Arch Linux from scratch, use the Beginner's Guide from our wiki. If you are familiar with Linux, then use our own installation process. If you are looking for a fast way to try Arch Linux and don't want (or like) the actual installation process, then try Evo/Lution-AIS (at least the Evo-AIS-0.3-RC2 ISO image; the project may move forward and change its main focus, which at the moment is the Arch Linux installer).

Personally, I'm impressed by the installer, which works like a charm - but that's my personal opinion and not an official statement. ;-)

22 Sep 2014 7:37am GMT

20 Sep 2014

feedPlanet Arch Linux

Graphite & Influxdb intermezzo: migrating old data and a more powerful carbon relay


read more

20 Sep 2014 7:18pm GMT

19 Sep 2014

feedPlanet Arch Linux

Managing rackspace products in SaltStack

How everything is set up now.

Currently we have been keeping all the individual drivers separate, controlling all the behavior so that you get the same experience no matter which provider you use. I would like to keep doing this even as we add features that are more provider-specific.

The goal will be to continue to use only the CloudClient interface, and run everything through that, so that we can write the code once and have it available to all of the different cloud modules/runners/states/salt-cloud.

Getting Started

I want to start this one cleanly and not run into the problems that happened with the novaclient driver. Starting with salt.utils.openstack.pyrax, __init__.py should look something like this:

salt/utils/openstack/pyrax/__init__.py

    from __future__ import print_function, with_statement, generators

    try:
        import pyrax

        # import the pyrax wrapper classes
        from salt.utils.openstack.pyrax.authentication import Auth

        __all__ = ['Auth']
        HAS_PYRAX = True
    except ImportError:
        HAS_PYRAX = False

This keeps everything available from a single import of salt.utils.openstack.pyrax, and if pyrax is not installed, we don't try to import it again and again in all the other files.

Once this is in place, there will be a main class that authenticates and implements _get_conn just like all the other drivers do; from then on, we can pass the authenticated object to all the new classes instead of authenticating in each one.
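As a sketch of that pattern (class and attribute names here are illustrative, not the final salt API):

    import pyrax

    class Auth(object):
        '''Authenticate once and hold on to the pyrax connection.'''
        def __init__(self, username, api_key):
            pyrax.set_setting('identity_type', 'rackspace')
            pyrax.set_credentials(username, api_key)
            self.conn = pyrax

    class Servers(object):
        '''Reuse the already-authenticated connection instead of logging in again.'''
        def __init__(self, auth):
            self.auth = auth

        def list(self):
            return self.auth.conn.cloudservers.servers.list()

    auth = Auth('myuser', 'myapikey')  # one authentication round-trip
    servers = Servers(auth).list()     # no re-authentication here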

After these have been started, everything needs to be referenced in salt.cloud.clouds.pyrax, which just uses the classes from the utils directory. This is the code I am writing right now, and once it is finished, it will make it really easy for anyone to add another class in another file and then hook it into the salt-cloud pyrax driver.

What is next?

I have to write the authentication and servers classes, since those are what everything revolves around. Once those are done, we can start expanding and have everyone come in and start adding Cloud Things!

From the point that it is up, everything should be easy enough to run through the cloud.action module. Using the cloud.action module, I want to make one pyrax state that does all the things.

I am definitely going to get the first pyrax authentication class and stuff done this weekend, and am going to start with it in my gtmanfred/salt repo in the pyrax branch. Look for that on Saturday or Sunday this weekend.

19 Sep 2014 11:33pm GMT

Multi-arch Packages in AUR

One of the easiest ways to contribute to Arch is to maintain a package, or packages, in the AUR: the repository of user-contributed PKGBUILDs that extends the range of packages available for Arch by some magnitude. Given that PKGBUILDs are just shell scripts, the barrier to entry is relatively low, and investing the small amount of effort required to clear that barrier will not only give you a much better understanding of how packaging works in Arch, but will scratch your own itch for a particular package and hopefully assuage someone else's similar desire at the same time.

Now that I have a Raspberry Pi1, I am naturally much more interested in packages that can be built for the ARMv6 architecture; especially those that are available in the AUR. It is worth a brief digression to note that Arch Linux ARM is an entirely separate distribution and, while they share features with Arch, support for each is restricted to their respective communities. It is with this consideration in mind that I had begun to think about multi-arch support in PKGBUILDs, particularly in the packages that I maintain in the AUR.

I have previously posted about using Syncthing across my network, including on a Pi as one of the nodes. As the Syncthing developer pushes out a release at least weekly, I have been maintaining my own PKGBUILD and, after Syncthing was pulled into [community], I uploaded it to the AUR as syncthing-bin.

Syncthing is a cross-platform application, so it runs on a wide range of architectures, including ARM (both v6 and v7). Initially, when I wrote the PKGBUILD, I would run updpkgsums on my x86_64 machine, build the package and then, on the Pi, have to regenerate the integrity checks. This was manageable enough for my own use across two architectures, but wasn't really going to work for people using other architectures (especially if they are using AUR helpers).

Naturally enough, this started me thinking about how I could more effectively manage the process of updating the PKGBUILD for each new release, and have it work across the four architectures - without having to manually copy and paste or do anything similarly tedious. Managing multiple architectures in the PKGBUILD itself is not particularly problematic; a case statement is sufficient:

PKGBUILD
case "$CARCH" in</p>

<pre><code>armv6h) _pkgarch="armv6"
        sha1sums+=('a94e5d00cec32956eb27bc12dbbc4964b68913f9')
       ;;
armv7h) _pkgarch="armv7"
        sha1sums+=('9b782abf95668a906bfe76ad5ceb4cda17ec2289')
       ;;
i686) _pkgarch="386"
      sha1sums+=('b2e1961594a931201799246f5cf61cb1e1700ff9')
       ;;
x86_64) _pkgarch="amd64"
        sha1sums+=('035730c09ca5383c90fdd9898baf66b90acdef24')
       ;;
</code></pre>

<p>esac

The real challenge, for me, was to be able to script the replacement of each of the respective sha1sums, and then to update the PKGBUILD with the new arrays. Each release of Syncthing is accompanied by a text file containing all of the sha1sums, each on its own line in a conveniently ordered format, like so:

sha1sums.txt.asc
b2e1961594a931201799246f5cf61cb1e1700ff9    syncthing-linux-386-v0.9.16.tar.gz
035730c09ca5383c90fdd9898baf66b90acdef24    syncthing-linux-amd64-v0.9.16.tar.gz
d743b64204f0ac7884e4b42d9b1865b2436f5ecb    syncthing-linux-armv5-v0.9.16.tar.gz

This seemed a perfect job for Awk, or more particularly, gawk's switch statement, and an admittedly rather convoluted printf incantation.

{
  switch ($2) {
    case /armv6/:
      arm6 = $1
      break
    case /armv7/:
      arm7 = $1
      break
    case /linux-386/:
      i386 = $1
      break
    case /linux-amd64/:
      x86 = $1
      break
  }
}
END {
  printf "case \"$CARCH\" in\n\t"\
         "armv6h) _pkgarch=\"armv6\"\n\t\tsha1sums+=(\047%s\047)\n\t\t;;\n\t"\
         "armv7h) _pkgarch=\"armv7\"\n\t\tsha1sums+=(\047%s\047)\n\t\t;;\n\t"\
         "i686) _pkgarch=\"386\"\n\t\tsha1sums+=(\047%s\047)\n\t\t;;\n\t"\
         "x86_64) _pkgarch=\"amd64\"\n\t\tsha1sums+=(\047%s\047)\n\t\t;;\n"\
         "esac\n",
         arm6, arm7, i386, x86
}

The remaining step was to update the PKGBUILD with the new sha1sums. Fortunately, Dave Reisner had already written the code for this in his updpkgsums utility; I had only to adapt it slightly:

excerpt from updpkgsums
{
  rm "$buildfile"
  exec awk -v newsums="$newsums" '
    /^case/,/^esac$/ {
      if (!w) { print newsums; w++ }
      next
    }; 1
    END { if (!w) print newsums }
  ' > "$buildfile"
} < "$buildfile"

Combining these two tasks means that I have a script that, when run, will download the current Syncthing release's sha1sum.txt.asc file, extract the relevant sums into the replacement case statement and then write it into the PKGBUILD. I can then run makepkg -ci && mkaurball, upload the new tarball to the AUR and the two other people that are using the PKGBUILD can download it and not have to generate new sums before installing their shiny, new version of Syncthing. You can see the full version of the script in my bitbucket repo.
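Roughly, the whole script chains together like this (a sketch; the awk script name, $sumsurl and the file names are illustrative, not the actual script):

#!/bin/bash
# fetch the release's checksum list
curl -sL "$sumsurl" -o sha1sums.txt.asc
# build the replacement case statement with the gawk script shown above
newsums=$(gawk -f extract-sums.awk sha1sums.txt.asc)
# splice it into the PKGBUILD between 'case' and 'esac', updpkgsums-style
{
  rm PKGBUILD
  awk -v newsums="$newsums" '
    /^case/,/^esac$/ { if (!w) { print newsums; w++ }; next }; 1
  ' > PKGBUILD
} < PKGBUILD
# build, install and create the tarball for the AUR
makepkg -ci && mkaurball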

Notes

  1. See my other posts about the Pi

Creative Commons image of the Mosque at Agra, by yours truly.

19 Sep 2014 9:16pm GMT