30 Aug 2014

Planet Arch Linux

Need your help for a photo contest

Hi there,

as some of you may know, one of my hobbies is photography.
From time to time I take part in photo contests, and I'm taking part in one again.

If you want to support me, you can vote online for my submitted photos. You can vote until August 31st (only two days left); the 20 photos with the most votes will then be judged by a jury to find the best 10.

The online vote itself is done with a single click on the cup that appears when you move your mouse over a photo. After your vote you can fill out a form to win a trip to Chile, but you don't have to, so the vote can stay anonymous if you want.

I'm taking part with the following photos and it would be great if you could vote for all of them:

- http://www.vaudevisions.com/contest/photos/detail/beauty-of-the-mountain/
- http://www.vaudevisions.com/contest/photos/detail/the-hiker/
- http://www.vaudevisions.com/contest/photos/detail/on-the-edge/

Feel free to forward this post and the links to anyone you know who would vote for my photos; more votes mean a better chance of being among the top 20 photos put before the jury.

MANY THANKS....Daniel

30 Aug 2014 12:32am GMT

27 Aug 2014

Planet Arch Linux

E-mail infrastructure you can blog about

The "e" in eCryptfs stands for "enterprise". Interestingly in the enterprise I'm in its uses were few and far apart. I built a lot of e-mail infrastructure this year. In fact it's almost all I've been doing, and "boring old e-mail" is nothing interesting to tell your friends about. With inclusion of eCryptfs and some other bits and pieces I think it may be something worth looking at, but first to do an infrastructure design overview.

I'm not an e-mail infrastructure architect (even if we make that term up for a moment); in other words, I'm not an expert in MS Exchange, IBM Domino and other "collaborative software", and most importantly I'm not an expert in all the laws and legal issues related to e-mail in major countries. I consult with legal departments, and so should you. Your infrastructure designs are always going to be driven by corporate e-mail policies and local law - which can, for example, require you to archive mail for a period of 7-10 years, and to do so while conforming with data protection legislation... and that makes a big difference to your infrastructure. I recommend this overview of the "Climategate" case as a good cautionary tale. With that said, I now feel comfortable describing infrastructure ideas someone may end up borrowing from one day.

E-mail is critical for most businesses today. Wait, that sounds like a sweeping generalization. I can, however, state it as a fact for the types of businesses I've been working with: managed service providers and media production companies. They all operate with teams around the world, and losing their e-mail system severely degrades their ability to get work done. That is why:

The system must be highly-available and fault-tolerant


Before I go on to the pretty pictures I have to note that I am taking good network design and engineering as a given here. The network has to be redundant well in advance of services. The network engineers I worked with were very good at their jobs and I had it easy, inheriting good infrastructure.

The first layer deployed on the network is the MX frontend. If you already have, or rent, an HA frontend that can sustain abuse traffic, it's an easy choice to pull mail through it too. But your mileage may vary, as it's not trivial to proxy SMTP for a SPAM filter. If the filter sees connections only from the LB cluster it is impossible for it to perform well; no rate limiting, no reputation scoring... I prefer HAProxy. The people making it are great software engineers and their software and services are superior to anything else I've used (it's true I consulted for them once as a sysadmin, but that has nothing to do with my endorsement). The HAProxy PROXY protocol, or TPROXY mode, can be used in some cases. Or, if you are a Barracuda Networks customer, you might instead have their load balancers, which are supposed to integrate with their SPAM firewalls, but I've been unable to find a single implementation detail to verify that claim. Without load balancers, using the SPAM filtering cluster itself as the MX and load balancing across it with round-robin DNS is a common deployment:

Network diagram

I won't say much about the SPAM filter; obviously it's supposed to do a very good job at rating and scanning incoming mail, and everyone has their favorites. My own favorite classifier component for many years has been the crm114 discriminator, but you can't expect (many) people to train their own filters, or to accept that it takes 3-6 months to achieve >99% accuracy; Gmail has spoiled the world. The important thing in the context of the diagram above is that the SPAM filter needs to be redundant, and that it must have the capability to spool incoming mail if all the Mailstore backends fail.

The system must have backups and DR fail-over strategy


For building the backend, the "Mailstores", some of my favorites are Postfix, sometimes Qmail, and Dovecot. It's not relevant, but I guess someone would want to hear that too.

eCryptfs is a stacked file-system that runs on top of the storage file-system, and all the mailboxes and spools are stored on it. The reasons for using it are not just related to data protection legislation. There are other, faster solutions too: block-level or hardware-based full-disk encryption. But, being a file-system, eCryptfs allows us to manipulate mail at the individual (mail) file or (mailbox) directory level. Because of that, encrypted mail can be transferred over the network to the remote backup backend very efficiently. If you require, or are allowed to do, snapshots, they don't necessarily have to be done at the (fancy) file-system or volume level. Common ext4/xfs and a little rsync hard-links magic work just as well (up to about 1TB on cheap slow drives).
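To make the rsync hard-links remark concrete, here is a minimal sketch of a daily snapshot job; the paths and layout are made up for illustration, not taken from any real deployment:

#!/bin/bash
# Daily snapshot of the mail store using rsync and hard links.
# Files unchanged since yesterday are hard-linked into today's
# snapshot, so each snapshot costs only the space of changed mail.
set -e

SRC=/srv/mail/            # hypothetical mail store (eCryptfs mount)
DST=/backup/mail          # hypothetical snapshot root
TODAY=$(date +%F)
YESTERDAY=$(date -d yesterday +%F)

mkdir -p "$DST/$TODAY"
rsync -a --delete --link-dest="$DST/$YESTERDAY" "$SRC" "$DST/$TODAY/"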

When doing backup restores or a backend fail-over, the eCryptfs keys can be inserted into the kernel keyring and the data mounted on the remote file-system to take over.
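Roughly, the take-over steps look like the sketch below; the paths are hypothetical and the key signature is a placeholder, so treat it as an outline rather than a recipe:

# Insert the eCryptfs passphrase into the kernel keyring on the
# stand-by backend; the tool prints the signature of the added key.
ecryptfs-add-passphrase

# Mount the replicated (still encrypted) lower directory as the
# cleartext mail store, referencing that key signature.
mount -t ecryptfs /srv/mail.crypt /srv/mail \
  -o ecryptfs_sig=0123456789abcdef,ecryptfs_fnek_sig=0123456789abcdef,ecryptfs_cipher=aes,ecryptfs_key_bytes=16

# Confirm the key is loaded in the user keyring.
keyctl list @u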

The system must be secure


Everyone has their IPS and IDS favorites, and implementations. But those, together with firewalls, application firewalls, virtual private networks, access controls, two-factor authentication and file-system encryption... still do not make your private and confidential data safe. E-mail is not confidential, as SMTP is a plain-text protocol; I personally think of it as being in the public domain. The solution to authenticating correspondents and to protecting your data and your company's intellectual property, both in transit and stored on the Mailstore, is PGP/GPG encryption. It is essential.
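To illustrate rather than prescribe: with GnuPG, signing and encrypting a message for a correspondent is a one-liner (the recipient address here is made up):

# Sign with your key, encrypt to the recipient's public key, and
# produce an ASCII-armored file safe to send over plain-text SMTP.
gpg --armor --sign --encrypt --recipient alice@example.com message.txt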

Even then, confidential data and attachments from mailboxes of employees will find their way onto your project management suite, bug tracker, wiki... But that is another topic entirely. Thanks for reading.

27 Aug 2014 8:42pm GMT

22 Aug 2014

Planet Arch Linux

Building from Source

One of the real strengths of Arch is its ability to be customised. Not just in terms of the packages that you choose to install, but how those packages themselves can be patched, altered or otherwise configured to suit your workflow and setup. I have posted previously about, for example, building Vim or hacking PKGBUILDS. What makes all this possible is the wonderful ABS, the Arch Build System.

Essentially a tree of all of the PKGBUILDs (and other necessary files) for the packages in the official repositories, the ABS is the means by which you can easily acquire, compile and install any of the packages on your system:

ABS is made up of a directory tree (the ABS tree) residing under /var/abs. This tree contains many subdirectories, each within a category and each named by their respective package. This tree represents (but does not contain) all official Arch software, retrievable through the SVN system.

Arch Wiki ABS

I have been using ABS since I started running Arch and it has worked well. I wrote a simple script to check for and download updates when required to help simplify the process and have been generally content with that approach. That isn't to say that elements of this process couldn't be improved. One of the small niggles is that the ABS only syncs once a day so there is almost always-for me down here in .nz, anyway-at least a full day's wait between the package hitting the local mirror and the updated ABS version arriving. The other issue is that you download and sync the entire tree…
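For anyone who has not used it, the traditional ABS round trip looks roughly like this (vim is just an example package; the build directory is whatever you prefer):

# Sync the local ABS tree (run as root, or via sudo)
abs

# Copy the build files out of the tree, then build and install
cp -r /var/abs/extra/vim ~/builds/
cd ~/builds/vim
makepkg -si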

That all changed when, at the start of this month, one of the Arch developers, Dave Reisner, opened a thread on the Arch boards announcing asp, the Arch Source Package management tool, a git-based alternative for abs1.

Basically a 200-line bash script, asp is an improvement over abs insofar as you get the updated PKGBUILDs immediately; you can choose between just pulling the necessary source files (as per abs), or checking out the package branch so that you can create your own development branch and, for example, keep your patch set in git as well.

You can elect to locate the local git repository in a directory of your choosing by exporting ASPROOT; there are Tab completion scripts for bash and zsh, and a succinct man page. Overall, for a utility that is only three weeks old, asp is already fulfilling the function of a drop-in replacement: a faster, more flexible tool for building Arch packages from source.
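The two modes mentioned above map onto two sub-commands; a quick sketch from memory, so check asp --help for the details (vim again as the example):

# Keep the local git mirror somewhere other than the default
export ASPROOT=~/builds/asp

# Just grab the current PKGBUILD and friends (the abs-style workflow)
asp update vim
asp export vim

# Or check out the package's git branch and work on top of it
asp checkout vim
cd vim
git checkout -b my-patches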

With thy sharp teeth this knot intrinsicate
Of life at once untie…

Antony and Cleopatra V.ii

Notes

  1. The package, not the entire build system…

Creative Commons image, Red Lego Brick by Brian Dill on Flickr.

22 Aug 2014 9:41pm GMT

21 Aug 2014

Planet Arch Linux

Reorganization of Vim packages

Thomas Dziedzic wrote:

The Vim suite of packages has been reorganized to better provide advanced features in the standard vim package, and to split the CLI and GUI versions; the new packages are:

21 Aug 2014 3:12am GMT

20 Aug 2014

Planet Arch Linux

How I lost my blog content...

...and, luckily, how I restored it!

Let me say this before you start reading: back up your data NOW!!!

Really, do it. I postponed this for so long and, as a result, I had a dramatic weekend.

Last Friday I had the wonderful idea to update my Ghost setup to the newer 0.5. I did this from my summer house via SSH, but the network isn't the culprit here.

You have to know that some months ago, maybe more, I switched from a package installation, through this PKGBUILD, to an installation via npm. So, as soon as I typed npm update, all my node_modules/ghost content was gone. Yep, I must be dumb.

After a few minutes, which helped me better understand the situation, I immediately shut down the BeagleBone Black.

The day after, I went home and installed Arch Linux ARM on a microSD, along with the superb TestDisk, which has had SQLite support for a while now. Cool!

This way I restored the Ghost database, BUT it was corrupted. However, a StackOverflow search pointed me to this command, which dumps the broken database, drops the trailing ROLLBACK that marks where the dump hit the corruption, and appends a COMMIT so the recovered statements can be replayed into a fresh file:

cat <( sqlite3 ghost.db .dump | grep "^ROLLBACK" -v ) <( echo "COMMIT;" ) | sqlite3 ghost-fixed.db  

After that, I was able to open the database and to restore 14 of 40 posts.

My second attempt was to use the Google cache. Using this method I recovered about 10 posts. Nice, I already had more than 50% of the total content! I was feeling optimistic.

The Arch Linux Planet let me recover 3 more posts, which I could have recovered anyway using Bartle Doo; I had never heard of that website before, but thanks to it I recovered some posts by searching for my first and last name.

I was almost there. About 10 posts were still missing, but how to recover them? I didn't remember their titles, and googling without specific keywords didn't help either.

I went back to the broken SQLite database; Vim can open it, so let's look inside for some data. Bingo! The missing post titles were still there!

And then I started googling again, this time for specific titles, which pointed me to websites mirroring my posts' content.
At the end of this step I had 38 of 40 posts!

I can't stop now; it's more than a challenge at this point.

I went back again to the broken database, where the post content is corrupted: there's some text, then symbols, and then more text that makes no sense combined with the first part. This looked like a tedious job. Saturday could end here.

It's Sunday; I'm motivated and I can't lose those 2 posts to my own laziness.
I have the missing post titles and I now remember their content, so I started looking for their phrases in the database and, to my surprise and with a lot of patience, I recovered their content!
This is mainly because Ghost keeps both the Markdown and the HTML text in the database, so the post content is duplicated, which decreases the chance of the same phrase being corrupted in both copies.
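For what it's worth, you don't have to page through the raw file in Vim; something as blunt as the sketch below does the same hunting for readable fragments (the phrase is obviously just an example):

# Dump every printable string in the corrupted database and search
# for a phrase you remember from a missing post.
strings ghost.db | grep -i -n "a phrase you remember"

# Or keep only longer runs of text to cut down the noise.
strings -n 40 ghost.db | less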

Another summer, another Linux survival experience (that I'm pleased to link to!).

20 Aug 2014 6:23pm GMT

12 Aug 2014

Planet Arch Linux

Darktable: a magnificent photo manager and editor

A post about the magnificent darktable photo manager/editor and why I'm abandoning pixie

When I wrote pixie, I was aware of darktable. It looked like a neat application with potential to be pretty much what I was looking for, although it also looked complicated, mainly due to terminology like "darkroom" and "lighttable", which was a bit off-putting to me and made me feel like the application was meant for photo professionals and probably wouldn't work well with the ideals of a techie with some purist views on how to manage files and keep my filesystems clean.

Basically I didn't want to give the application a proper chance, and then rationalized the decision after I made it. I'm sure psychologists have a term for this behavior. I try to be aware of these cases and not fall into the trap, but this time I was very aware of it and still proceeded; I think I had a reasonable excuse, though. I wanted an app that behaves exactly how I like, I wanted to play with AngularJS, and it seemed like a fun learning exercise to implement a full-stack program backed by a Go API server and an AngularJS interface, with some keybind features and vim-like navigation sprinkled on top.

Pixie ended up working, but I got fed up with some AngularJS issues, slow JS performance and a list of to-dos I would need to address before I would consider pixie feature-complete, so only a few days ago did I start giving darktable the chance it had deserved from the beginning.
As it turns out, darktable is actually a fantastic application, and despite some imperfections, the difference is clear enough for me to abandon pixie.


Here's why I like it:

  1. It stays true to my ideals: It doesn't modify your files at all, this is a must for easily synchronizing photo archives with each other and with devices. You can tag, assign metadata, create edits, etc. and re-size on export. It stores metadata in a simple sqlite database, and also in xmp files which it puts along with the original files, but luckily you can easily ignore those while syncing. (I have yet to verify whether you can adjust dates or set GPS info without modifying the actual files, but I had no solution for that either)
  2. Basically, it's just well thought out and works well. The terminology thing is a non-issue: you just have to realize that the lighttable is the set of pictures in your collection you want to work with, and the darkroom is the editor where you edit an image. Everything else is intuitive.
  3. It has decent tag editing features, and a powerful mechanism to build a selection of images using a variety of criteria using exif data, tags, GPS info, labels, etc. You can make duplicates of an image and make different edits, and treat them as images of their own
  4. It has pretty extensive key binding options, and even provides a lua api so you can hook in your own plugins. People are working on a bunch of scripts already.
  5. It's fast: navigating a 33k-file archive, adjusting thumbnail sizes on the fly and iterating quickly all work well.
  6. It has good support for non-destructive editing, with a variety of editing possibilities, as if it were commercial software.
  7. It has complete documentation, a great blog with plenty of tutorial articles, and tutorial videos

I did notice some bugs (including a few crashes), but there are always a few developers and community members active on IRC and the bug tracker, so it's a pretty active project and I'm confident/hopeful my issues will be resolved soon.
I also have a few more ideas for features that would bring it closer to my ideals, but as it stands, darktable is already a great application and I'm happy I can deprecate pixie at this point. I even wrote a script that automatically does all the tag assignments in darktable based on the pixie information in tmsu, to make the transition friction-free.

12 Aug 2014 12:36pm GMT

03 Aug 2014

Planet Arch Linux

The TalkingArch August 2014 iso is out

Announcing the TalkingArch iso for August 2014. This month's snapshot features Linux kernel 3.15.8 and fixes the problem reported last month where the pick-a-card script wasn't working correctly. Get it now from the usual place. Share and enjoy, and of course, keep those torrents seeding :-).

03 Aug 2014 8:08pm GMT

01 Aug 2014

Planet Arch Linux

pass{,word} manager

After posting last week about KeePassC as a password manager, a couple of people immediately commented about a utility billed as "the standard Unix password manager." This is definitely one of the reasons I continue to write up my experiences with free and open source software: as soon as you think that you have learned something, someone will either offer a correction or encourage you to explore something else that is similar, related or interesting for some other tangential reason.

So, I was off down that path… Called simply pass, it is a 600 line bash script that uses GPG encryption and some other standard tools and scripts to organize and manage your password files. I had never heard of it but, based on Cayetano and Bigby's recommendations, I thought it would be worth a look.

One of the reasons that I had not come across it before was that, after using KeePassX for so long, I had assumed that I would need to continue to use that database format; so when I was looking for an alternative, KeePassC was a natural fit (and a fine application). The question of migrating my data hadn't even occurred to me…

It turns out that the migration process to pass is extraordinarily well catered for: there are 10 migration scripts for a range of different formats, including keepassx2pass.py, which takes the exported XML KeePassX database file and creates your pass files, ordered by the schema you had used in that application. You just need to make sure you amend the shebang to python2 before running the script, otherwise it will fail with an unhelpful error message.

After using KeePassX to dump my database, before I could use the script to create my pass directories, I had to export the PASSWORD_STORE_DIR environment variable to place the top level pass directory in an alternate location. This way, instead of initializing a git repository, I could have the store synced by Syncthing. The git idea is a good one, but I'm not particularly interested in version controlling these directories, and I have no intention, encrypted or not, of pushing them to someone else's server.
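Put together, the whole setup amounted to something like this; a sketch with a made-up GPG key ID, sync path and export filename:

# Keep the password store inside the Syncthing-synced directory,
# then initialise it against your GPG key.
export PASSWORD_STORE_DIR=~/sync/password-store
pass init 0xDEADBEEF

# Run the migration script with python2 (side-stepping the shebang
# issue mentioned above) against the XML dump from KeePassX.
python2 keepassx2pass.py keepassx-export.xml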

That constitutes the basic setup. It took a grand total of five minutes. The real strength of pass, however, is in its integration with two other fantastic tools: keychain and dmenu. Together with pass, these constitute a secure, convenient and effortless workflow for managing your passwords. With your GPG key loaded into keychain, you are only prompted for your master passphrase once1 and with Chris Down's excellent passmenu script, you can use dmenu to sort through your password files, Tab complete the one you are looking for and have it copied to your clipboard with a couple of keystrokes.
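The keychain side is a single line in a shell profile; a sketch with a made-up key ID:

# ~/.zprofile (or ~/.profile): start or re-attach gpg-agent via
# keychain so pass only asks for the master passphrase once.
eval $(keychain --eval --quiet --agents gpg 0xDEADBEEF)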

After using Chris' script for a couple of days, I made a few alterations to suit my setup: removed the xdotool stuff (as I don't need it), included dmenu formatting options to match my dwm statusbar and, most significantly, changed the way that the files are printed in dmenu to remove the visual clutter of the parent directories, i.e., print archwiki as opposed to internet/archwiki:

dpass
#!/usr/bin/env bash
# based on: https://github.com/cdown/passmenu

shopt -s nullglob globstar

nb='#121212'
nf='#696969'
sb='#121212'
sf='#914E89'
font="Dejavu Sans Mono:medium:size=7.5"
dmenucmd=( dmenu -i -fn "$font" -nb "$nb" -nf "$nf" -sb "$sb" -sf "$sf" )

prefix=${PASSWORD_STORE_DIR:-~/.password-store}
files=( "$prefix"/**/*.gpg )
files=( "${files[@]#"$prefix"/}" )
files=( "${files[@]%.gpg}" )
fbase=( "${files[@]##*/}" )

word=$(printf '%s\n' "${fbase[@]}" | "${dmenucmd[@]}" "$@")

if [[ -n $word ]]; then
  for match in "${files[@]}"; do
    if [[ $word == ${match#*/} ]]; then
      /usr/bin/pass show -c "$match" 2>/dev/null
    fi
  done
fi

It does introduce some more complexity into the script, but it makes it a lot easier for me to identify the desired password when reading it in dmenu.

Now, when I need to enter a password, I hit my dmenu hotkey, type dpass Enter and the first couple of letters of the desired password filename, then Tab Enter, and the password is loaded and ready to go. There are also completion scripts for the main shells, and even one for fish2 for the iconoclasts…

While I have no complaints at all with KeePassC, I have found this pass setup to be a lot less intrusive to use: it seamlessly integrates with my workflow, and the passwords themselves are much simpler to manage. Short of someone else popping up in the comments with another compelling proposition, I'm content with the way this has worked out. Many thanks to Cayetano Santos and Bigby James for the push.

Notes

  1. There is a very annoying bug open for keychain that means if, as I do, you start keychain from your $HOME/.profile or $ZDOTDIR/.zprofile you will need to enter the same passphrase to unlock a sub-key before you can use pass (the same thing applies to Mutt). This gets really ugly if you attempt to use dmenu before unlocking your key…
  2. Finally, a command line shell for the 90s… Indeed.

Creative Commons image by Intel Free Press on Flickr.

01 Aug 2014 9:28pm GMT

python-zarafa monthly update July

It's been almost a month since my previous post on the changes in python-zarafa. This month we continued adding new features to python-zarafa; the following git command shows the changes made since the last post.

[jelle@P9][~/projects/python-zarafa]% git log --since "JUN 29 2014" --until "AUG 1 2014" --pretty=format:"%h %ar : %s"
20a391b 2 days ago : fix partial rename in class Property
35e5c3e 6 days ago : - address pylint warnings - add Server.remove_user - Server.{get_store, get_user, get_company} now return None instead of throwing an exception - added Folder.state by merging with Server.state - rename mapifolder etc. to mapiobj
842b98d 11 days ago : Update README.md
75f9220 2 weeks ago : properties => prop rename
219351f 3 weeks ago : - Add support for unicode user names - Add support for unicode for Server.create_use - Let Store.folders return nothing if there is no IPM Subtree
746949c 3 weeks ago : - fix some issues with unicode company names - recreate exception for single-tenant setup - improve {Server, Company}.create_user
5820b83 3 weeks ago : add example code for handling tables
816982f 3 weeks ago : Fix shebang
6582594 3 weeks ago : - expose associated - reverse-sort Folder.items() on received date by default - added more folder-level tables
e3d7e4f 4 weeks ago : zarafa-stats.py: zarafa-stats in Python except --top support
0598914 4 weeks ago : Remove property tag since the generator accepts arguments
984d58d 4 weeks ago : - new class Table for MAPI tables - refactor delete - Item.tables() for recipeints and attachments - Rename property\_ and properties to prop and props

In the following sections we walk through the main new features in python-zarafa.

Table support

In python-zarafa we added a Table class which abstracts MAPI tables. It provides a few methods that make it easier to display a MAPI table in various formats, for example CSV.

for item in zarafa.Server().user('user').store.inbox:
        print item.table(PR_MESSAGE_RECIPIENTS, columns=[PR_EMAIL_ADDRESS, PR_ENTRYID]).csv(delimiter=';')
        print item.table(PR_MESSAGE_ATTACHMENTS).text()
        print item
        for table in item.tables():
                print table
                print table.csv()

Address class

The new Address class represents the sender and recipient of a MAPI message.

item = zarafa.Server().user('user').store.inbox.items().next()
print 'from:', item.sender.name, item.sender.email
for r in item.recipients():
                print 'rec:', r.name, r.email

Which prints:

from: john@localhost john@localhost
rec: jaap@localhost jaap@localhost

Associated folder support

An associated folder in MAPI is a "hidden" table of a folder, which is usually used to store configuration messages, for example quota information. In the Zarafa-Inspector this functionality is used to look into these MAPI objects. You can access the associated folder via the associated attribute of a folder.

associated = zarafa.Server().user('user').store.inbox.associated

User creation/removal

The API now also supports the addition and removal of users, which is as simple as the code example below.

server = zarafa.Server()
server.create_user('cowboy', fullname='cowboy bebop')
server.remove_user('cowboy')

Entryid access

It used to be impossible to use a MAPI object's entryid to access items directly: previously we had to loop through the entire inbox to find a particular item. We can now access a MAPI item directly if we know its entryid, as you can see in the example below.

user = zarafa.Server().user('user')
entryid = user.store.inbox.items().next().entryid
print 'entryid ', entryid
# Access via store
print 'store   ', user.store.item(entryid).entryid
# Access via folder
print 'inbox   ', user.store.inbox.item(entryid).entryid

Example output

entryid  00000000C80AB3E59F3E420D984664AF5049F1A401000000050000002496BD8A547C46B881BFAC8E9392019700000000
store    00000000C80AB3E59F3E420D984664AF5049F1A401000000050000002496BD8A547C46B881BFAC8E9392019700000000
inbox    00000000C80AB3E59F3E420D984664AF5049F1A401000000050000002496BD8A547C46B881BFAC8E9392019700000000

Those were all the main new features; there are also numerous other small changes that I didn't discuss.

python-zarafa monthly update July was originally published by Jelle van der Waa at Jelly's Blog on August 01, 2014.

01 Aug 2014 8:00pm GMT

A Month of RTL-SDR

Day 10 campaign report

01 Aug 2014 1:18pm GMT

28 Jul 2014

Planet Arch Linux

xorg-server 1.16 is now available

Laurent Carlier wrote:

The new version comes with the following changes:

28 Jul 2014 9:39pm GMT

27 Jul 2014

Planet Arch Linux

Rtl Power

Making pretty pictures with the CLI-junky's waterfall

27 Jul 2014 8:09am GMT

26 Jul 2014

Planet Arch Linux

Beautiful Go patterns for concurrent access to shared resources and coordinating responses

It's a pretty common thing in backend Go programs to have multiple goroutines concurrently needing to modify a shared resource, and needing a response that tells them whether the operation succeeded and/or other auxiliary information. Something centralized manages the shared state, the changes to it and the responses.



26 Jul 2014 5:22pm GMT

24 Jul 2014

Planet Arch Linux

CLI Password Manager

Managing passwords is a necessary evil. You can choose from a number of different strategies for keeping track of all of your login credentials; from using the same password for every site, which prioritises convenience over sanity (and security), through to creating heinously complex unique passwords for every service and then balancing the relief of knowing your risk of being hacked has been minimised against the very real fear that you will remember any of them for only a short period, if at all, and will shortly be locked out of everything.

Fortunately, this is a solved problem. There are a number of password managers available, both as desktop clients and cloud services. Personally, I find the idea of storing my passwords in the cloud has all the fascination of bungee jumping; it's apparently mostly safe, but that can be cold comfort… The first application that I used, and used happily for quite a long time, was KeePassX.

Around the end of 2012, I started experimenting with KeePassC, a curses-based password manager that is completely compatible with KeePassX and has very little in the way of dependencies. I have been using it solidly on my home and work laptops ever since and, after recently uninstalling Skype on my desktop, have switched over to it completely1. I'm still not entirely clear why I haven't written about it previously.

Written in Python 3, KeePassC is entirely keyboard driven (naturally enough, you can use Vim keybinds) and integrates seamlessly with your browser and clipboard. My experience of the software over the last eighteen-odd months is that it has been incredibly stable and the developer, Karsten-Kai, has been exceptionally responsive and helpful in the forum thread.

Like most good software, there is not a lot to it. You pull up the login page, switch to a terminal and run keepassc, enter your passphrase (I use a Yubikey for this and it works wonderfully), search for your desired entry with / and hit c to copy the password to your clipboard, then switch back to the browser and you are in.

KeePassC also has a set of simple command line options; run keepassc -h to see them. Additionally, you can set up KeePassC as a server, though I haven't experimented with this as I sync my database. The only functionality that the X application offers in addition, as far as I can tell, is auto-filling your username and password fields via a keybind; undoubtedly, this is a very handy feature, but I haven't really missed it at all.

As I said, I store the database in a directory synced between all my machines2 (using Syncthing), so I have access to an up-to-date version of my credentials everywhere. Well, almost everywhere. I don't use the Android client because the mobile web is just such a fundamentally insecure environment; I see that as simply being sensible, rather than any sort of inconvenience.

Notes

  1. Skype and KeePassX were the only two applications I used that required Qt, so once Skype was gone there was no reason to keep KeePassX installed.
  2. And, after a nasty scare very early on with a corrupt database, I back that file up daily.

Creative Commons image on Flickr by xserv.

24 Jul 2014 9:36pm GMT

12 Jul 2014

Planet Arch Linux

MariaDB 10.0 enters [extra]

Bartłomiej Piotrowski wrote:

A new major release of MariaDB will be moved to [extra] soon. The change in versioning scheme has been made to clearly distinguish provided functionality from MySQL 5.6. From now on, it won't be possible to easily move between various MySQL implementations provided in the official repositories.

Due to major changes in MariaDB 10.0, it is recommended (although not necessary) to dump the tables before upgrading and reloading the dump file afterwards. After upgrading to the new version don't forget to restart mysqld.service and run mysql_upgrade to check the databases for possible errors.
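In shell terms, the sequence described above is roughly the following sketch; credentials and the dump filename are placeholders:

# Dump everything before the upgrade (recommended, not required)
mysqldump -u root -p --all-databases > pre-10.0-dump.sql

# Upgrade, restart the service, then let mysql_upgrade check
# and fix the system tables.
pacman -Syu
systemctl restart mysqld.service
mysql_upgrade -u root -p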

Additionally, the TokuDB storage engine has been disabled because of repeated build failures. I'm sorry for any inconvenience caused. TokuDB is available again in MariaDB 10.0.12-2.

For detailed information on changes and upgrade steps, please refer to the MariaDB Knowledge Base and the MySQL Reference Manual. Akonadi users can find a detailed how-to on our forums.

12 Jul 2014 2:38pm GMT

10 Jul 2014

Planet Arch Linux

Install scripts

It is now almost exactly two years since the AIF was put out to pasture. At the time, it caused a degree of consternation, inexplicably leading some to believe that it presaged the demise of, if not Arch itself, then certainly the community around it. And I think it would be fair to say that it was the signal event that launched a number of spin-offs, the first of which, from memory, was ArchBang; soon followed by a number of others that promised "Arch Linux with an easy installation," or something to that effect…

Of course, if you look back at the Installation Guide immediately before the move to the new scripts, for example the version that shipped with the last AIF in October 2011, it is pretty evident that the current approach is a lot simpler. Sure, there is no curses GUI to step you through each part of the install, but the introduction of pacstrap and arch-chroot means that you no longer need those prompts.
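The core of an install today is only a handful of commands; a compressed sketch that assumes the target partitions are already mounted under /mnt:

# Install the base system onto the mounted target, generate an
# fstab, then chroot in and finish configuration by hand.
pacstrap /mnt base
genfstab -U /mnt >> /mnt/etc/fstab
arch-chroot /mnt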

There is also the added advantage that these scripts are useful outside the installation process itself; they can be used for system maintenance and, in the rare event that your recent bout of experimentation at 2am after a few drinks doesn't pan out the way you anticipated, repair.

One of the other responses to the new approach, however, has been the steady proliferation of "helpful" install scripts. These are essentially bash scripts that emulate the behaviour of the AIF and walk people through an automated install of their system. Well, not really their system, more accurately a system. So you run one of these scripts, answer a few prompts and then, when you reboot, you have a brand spanking new Arch Linux install running KDE with the full panoply of software and, in a few special cases, some customized dot files to "enhance" your experience.

From a certain perspective, I can see how these things appeal. "I wonder if I could script an entire install, from the partitioning right through to the desktop environment?" That sounds like a fun project, right? Where it all comes unstuck, unfortunately, is when the corollary thought appears that suggests sharing it with the wider community would be a good idea. It is at this point that a rigorous bout of self-examination about the outcomes you are seeking and your base motivations for this act of selflessness is called for.

Whatever those motivations are, whether driven by altruism or the naked desire for fame and fortune that have-from time to time-alighted on these projects when they appear on /r/archlinux and the adoring throngs bestow their favours in equal measures of upvotes and bitcoin, they are grotesquely misplaced. No good comes from these things, I tell you; none.

Why not? Because, in the guise of being helpful, you are effectively depriving people of the single most important part of using Arch: building it themselves. It's like inviting someone to a restaurant for an amazing haute cuisine meal, sitting them down across the table from you and then having them watch as the staff bring out a mouth-watering array of dishes, each of which you ostentatiously savour before vomiting it all back into their mouth.

Now, I am sure there is a small minority (well, at least from my own sheltered experience I imagine it is small) who would relish this scenario, but for most it would qualify as a pretty disappointing date.

Then, after the technicolour table d'hôte, there is the matter of the clean-up. Recently, we had someone show up on the Arch boards who had "installed Arch" but who did not understand how to edit a text file; they literally had no clue how to open a file like /etc/fstab, make some changes and then save it. This is beyond stupid: it is a drain on the goodwill of the community that has to deal with this ineptitude, it is unfair to put people in a position where they feel they are at the mercy of their technology rather than in control of it, and it does nothing to advance the interests of Arch.

If you want to write something like this to improve your scripting skills, by all means proceed. If you want to contribute to Arch, then pick a project to contribute to, some bugs to squash, wiki edits, whatever; just don't publish another one of these idiotic scripts, because you aren't doing anyone any favours, quite the contrary.

Notes

Flickr Creative Commons image, Measuring spoons by Theen Moy.

10 Jul 2014 9:49pm GMT