31 May 2016

Planet Ubuntu

Zygmunt Krynicki: Bite-size bugs in snapd

Hey

This is just a quick shout-out to anyone out there who is interested in snapd, maybe follows its development, or is just curious. We are getting more and more bite-size bugs that would be perfect for someone to pick up and fix as their first contribution, either to free and open source software in general or to snapd specifically.

There's one such bug I'd like to highlight just now. https://bugs.launchpad.net/snappy/+bug/1587445

The bug is very simple to fix. The snap list command should say something appropriate when there are no snaps installed. This is literally an if statement and a printf call. If anyone wants to try to do that, feel free to ping me on irc (zyga on #snappy on freenode) or comment below on Google+.
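snapd itself is written in Go, but the shape of the change is easy to sketch. Here it is in Python for brevity (the real patch would live in snapd's cmd/snap code, and the message wording below is hypothetical):

# A sketch of the shape of the fix, not snapd's actual code.
def print_snap_list(snaps):
    if not snaps:
        # the "if statement and a printf call" from the bug report
        print("No snaps are installed yet.")
        return
    for snap in snaps:
        print(snap.name, snap.version)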

We have hacking instructions, we have automated tests and we have a friendly development community. I'd love to see you join us.

31 May 2016 1:55pm GMT

Ubuntu Insights: Building a nervous system for OpenStack

Big Software is a new class of software composed of so many moving pieces that humans, by themselves, cannot design, deploy or operate it. OpenStack, Hadoop and container-based architectures are all examples of Big Software. The only way to address the complexity is with automated, AI-powered analytics.

DeepStack UI Demo

Summary

Canonical and Skymind are working together to help System Administrators operate large OpenStack instances. With the growth of cloud computing, the size of data has surpassed humans' ability to cope with it. In particular, overwhelming amounts of data make it difficult to identify patterns; e.g. signals that precede server failure. Using deep learning, Skymind enables OpenStack to discover patterns automatically, predict server failure and take preventative actions.

Canonical story

Canonical, the company behind Ubuntu, was founded in March 2004 and launched its Linux distribution six months later. Shortly thereafter, Amazon created AWS, the first public cloud. Canonical worked to make Ubuntu the easiest option for AWS and later public cloud computing platforms.

In 2010, OpenStack was created as the open-source alternative to the public cloud. Quickly, the complexity of deploying and running OpenStack at cloud scale showed that traditional configuration management, which focuses on instances (i.e. machines, servers) rather than running micro-service architectures, was not the right approach. This was the beginning of what Canonical named the Era of Big Software.

Big Software is a class of software made up of so many moving pieces that humans cannot design, deploy and operate it alone. The term is meant to evoke big data, initially defined as data you cannot store on a single machine. OpenStack, Hadoop and container-based architectures are all Big Software.

The problem with Big Software

Day 1: Deployment

The first challenge of Big Software is to create a service model for successful deployment; that is, to find a way to support immediate and successful installations of that software. Canonical has created several tools to streamline this process by helping map software to available resources.

Day 2: Operations

Big Software is hard to model and deploy, and even harder to operate, which means day 2 operations also need a new approach.

Traditional monitoring and logging tools were designed for operators who only had to oversee data generated by fewer than 100 servers. They would find patterns manually, create SQL queries to catch harmful events, and receive notifications when they needed to act. When NoSQL became available, this improved marginally, since queries would scale.

But that doesn't solve the core problem today. With Big Software, there is more data than any human can cope with, let alone use to find the patterns of behaviour that precede server failure.

AI and the future of Big Software

This is where AI comes in: Deep Learning is the future of those day 2 operations. Neural nets can learn from massive amounts of data to find almost any needle in almost any haystack. Those nets are a tool that vastly extends the power of traditional system admins, in a sense transforming their role.

Initially, neural nets will be a tool to triage logs, surface interesting patterns and predict hardware failure. As humans react to these events and label data (by confirming the AI's predictions), the power to make certain operational decisions will be given to the AI directly: e.g. scale this service in/out, kill this node, move these containers, etc.
Finally, as the AI learns, self-healing data centers will become standard. AI will eventually modify code to improve and remodel the infrastructure as it discovers better models adapted to the resources at hand.

The first generation Deep Learning solution looks like this: HDFS + Mesos + Spark + DL4J + Spark Notebook. It's an enablement model, so that anyone can do Deep Learning. But using Skymind on OpenStack is just the beginning.

Ultimately, Canonical wants every piece of software to be scrutinised and learned from in order to build the best architectures and operating tools.

Learn more

View the original article to learn more about how Canonical and Skymind are working together to solve Deep Learning problems. Alternatively, you can get in touch with our team.

Skymind

Skymind provides scalable deep learning for industry. It is the commercial support arm of the open-source project Deeplearning4j, a versatile deep-learning framework written for the JVM. Skymind's artificial neural nets can run on desktop, mobile, and massively parallel GPUs and CPUs in the cloud to analyze text, images, sound and time series data. A few use cases: facial recognition, image search, theme detection and augmented search in text, speech-to-text and CRM.

About the author

Chris Nicholson

Chris Nicholson is the founder and CEO of Skymind. He spends his days helping enterprises build Deep Learning applications.

31 May 2016 11:00am GMT

Zygmunt Krynicki: snapd 2.0.5 released, new release cadence

There's a new release of snapd arriving in Ubuntu 16.04. As before, our fearless release manager Michael Vogt has crafted the release and made sure it arrives on your machines in a timely fashion.

You can see the changelog below, annotated with links to fixed bugs. I would like to highlight just one bug, which improves the experience of snaps under Unity 7.

New snapd releases are now planned to happen every week. You can expect a steady stream of fresh snappy goodness in both snapd and in the store. With this in mind we also plan to change the version scheme. Currently, as you can see below, we use 2.0.x for each micro-release. This scheme will quickly become meaningless, so we will likely switch to date-based release names instead. Expect to see a 2016W22 (or 2016W23) release next time around.
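For illustration, a name of that form can be derived from the ISO week number; a minimal sketch in Python (not necessarily what the release tooling will actually do):

import datetime

# ISO year and week number, e.g. "2016W22" for 31 May 2016
year, week, _ = datetime.date(2016, 5, 31).isocalendar()
print("%dW%02d" % (year, week))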

On the development front, many interesting changes are in the pipeline. While not part of snapd 2.0.5, they should be released in the next few weeks at most. You can expect applications to gain the ability to play sound and music using the new pulseaudio interface. This ability, along with bug fixes to opengl support, should unlock the ability to deliver many popular games as snaps. Game on!

There's also ongoing work to allow sharing data between the classic Ubuntu system and snaps. One of our first goals is to allow sharing fonts. This will improve the user experience of snaps that want to take advantage of custom, locally installed fonts. It should also allow us to package fonts as snaps in the near future. The underlying technology is very generic and I'm sure we'll find many interesting things to share this way.

As always, you can reach out to us on IRC (#snappy) and on the mailing list (snapcraft@lists.ubuntu.com, see this post for details). If you have any questions we will be happy to answer them.

See you next week!

snapd (2.0.5) xenial; urgency=medium

  * New upstream release: LP: #1583085
    - interfaces: add dbusmenu, freedesktop and kde notifications to
      unity7 (LP: #1573188)
    - daemon: make localSnapInfo return SnapState
    - cmd: make snap list with no snaps not special
    - debian: workaround for XDG_DATA_DIRS issues
    - cmd,po: fix conflicts, apply review from #1154
    - snap,store: load and store the private flag sent by the store in
      SideInfo
    - interfaces/apparmor/template.go: adjust /dev/shm to be more usable
    - store: use purchase decorator in Snap and FindSnaps
    - interfaces: first version of the networkmanager interface
    - snap, snappy: implement the new (minmimal) kernel spec
    - cmd/snap, debian: move manpage generation to depend on an environ
      key; also, fix completion

 -- Michael Vogt  Thu, 19 May 2016 15:29:16 +0200

snapd (2.0.4) xenial; urgency=medium

  * New upstream release:
    - interfaces: cleanup explicit denies
    - integration-tests: remove the ancient integration daemon tests
    - integration-tests: add network-bind interface test
    - integration-tests: add actual checks for undoing install
    - integration-tests: add store login test
    - snap: add certain implicit slots only on classic
    - integration-tests: add coverage flags to snapd.service ExecStart
      setting when building from branch
    - integration-tests: remove the tests for features removed in 16.04.
    - daemon, overlord/snapstate: "(de)activate" is no longer a thing
    - docs: update meta.md and security.md for current snappy
    - debian: always start snapd
    - integration-tests: add test for undoing failed install
    - overlord: handle ensureNext being in the past
    - overlord/snapstate,overlord/snapstate/backend,snappy: start
      backend porting LinkSnap and UnlinkSnap
    - debian/tests: add reboot capability to autopkgtest and execute
      snapPersistsSuite
    - daemon,snappy,progress: drop license agreement broken logic
    - daemon,client,cmd/snap: nice access denied message
      (LP: #1574829)
    - daemon: add user parameter to all commands
    - snap, store: rework purchase methods into decorators
    - many: simplify release package and add OnClassic
    - interfaces: miscellaneous policy updates
    - snappy,wrappers: move desktop files handling to wrappers
    - snappy: remove some obviously dead code
    - interfaces/builtin: quote apparmor label
    - many: remove the gadget yaml support from snappy
    - snappy,systemd,wrappers: move service units generation to wrappers
    - store: add method to determine if a snap must be bought
    - store: add methods to read purchases from the store
    - wrappers,snappy: move binary wrapper generation to new package
      wrappers
    - snap: add `snap help` command
    - integration-tests: remove framework-test data and avoid using
      config-snap for now
    - builtin/unity7.go: allow using gmenu. Closes: LP:#1576287
    - add integration test to verify fix for LP:#1571721

 -- Michael Vogt  Fri, 13 May 2016 17:19:37 -0700

31 May 2016 7:56am GMT

The Fridge: Ubuntu Weekly Newsletter Issue 467

Welcome to the Ubuntu Weekly Newsletter. This is issue #467 for the week May 23 - 29, 2016, and the full version is available here.

In this issue we cover:

The issue of The Ubuntu Weekly Newsletter is brought to you by:

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution ShareAlike 3.0 License.

31 May 2016 2:51am GMT

Paul Tagliamonte: Iron Blogger DC

Back in 2014, Mako ran a Boston Iron Blogger chapter, where you had to blog once a week, or you owed $5 into the pot. A while later, I ran it (along with Molly and Johns), and things were great.

When I moved to DC, I had already talked with Tom Lee and Eric Mill about running a DC Iron Blogger chapter, but it hasn't happened in the year and a half I've been in DC.

This week, I make good on that, with a fantastic group set up at dc.iron-blogger.com; with more to come (I'm sure!).

Looking forward to many parties and thought-provoking blog posts in my future. I'm also quite pleased I'll be resuming my blogging. Hi again, planet Debian!

31 May 2016 1:37am GMT

30 May 2016

Planet Ubuntu

Sebastian Kügler: Multiscreen in Plasma 5.7 and beyond

Here's a quick status update about where we currently stand with respect to multiscreen support in Plasma Desktop.

While multiscreen support in Plasma works nicely for many people, for some of our users it doesn't. There are problems with restoring previously set up configurations, and around the primary display mechanism. We're really unhappy about that, and we're working on fixing it for all of our users. These kinds of bugs are the stuff nightmares are made of, so there isn't a single silver bullet that fixes all of it right away. Multiscreen support requires many different components to play in tune with each other, and they're usually divided into separate processes communicating with each other via different channels. There's X11 involved, XCB, Qt, libkscreen and of course the Plasma shell. I can count at least three different protocols in this game, Wayland being a fourth (but likely not used at the same time as X11). There's quite some complexity involved, yet the individual components are actually doing their jobs quite well and have their specific purposes. Let me give an overview.

Multiscreen components

Plasma Shell renders the desktop, places panels, etc. When a new screen is connected, it checks whether it has an existing configuration (wallpaper, widgets, panels etc.) and extends the desktop. Plasma shell gets its information from QScreen now (more on that later on!).

KWin is the compositor and window manager. KWin/X11 interacts with X11 and is responsible for window management, movement, etc. Under Wayland, it will also take over the graphical and display server work that X11 currently does, though mostly through Wayland and *GL APIs.

KScreen kded is a little daemon (actually a plugin) that keeps track of connected monitors and applies existing configs when they change.

KScreen is a module in systemsettings that allows you to set up the display hardware: positioning, resolution, etc.

Libkscreen is the library that backs the KScreen configuration. It offers an API abstraction over XRandR and Wayland. libkscreen sits pretty much at the heart of proper multiscreen support, both for manual configuration and for loading the stored configuration.

Primary Desktop

The primary display mechanism is a bit of API (rooted in X11) to mark a display as primary. This is used to place the Panel in Plasma, and for example to show the login manager window on the correct monitor.

Libkscreen and Qt's native QScreen are two different mechanisms that reflect screen information. QScreen is mainly used for querying info (and is of course used throughout QtGui to place windows, get information about resolution and DPI, etc.). Libkscreen has all this information as well, but also some more, such as write support. Libkscreen's backends get this information directly from Xorg, not going through Qt's QScreen API. For plasmashell, we ended up needing both, since it was not possible to find the primary display using Qt's API. This caused quite some problems: X11 is async by nature, so essentially we ended up with "unfixable" race conditions, also in plasmashell. These are likely the root cause of the bug you're seeing here.

That API was added in Qt 5.6 (among a few other fixes) by Aleix Pol, one of the devs on our screen management team. We have removed libkscreen from plasmashell today and replaced it with "pure QScreen" code, since all the API we need for plasmashell is now available in the Qt version we depend on.
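For illustration, here is roughly what that looks like from the QScreen side, sketched with PyQt5 (an assumption for the example; plasmashell itself is C++), requiring Qt 5.6 or newer for the primaryScreenChanged signal:

import sys
from PyQt5.QtGui import QGuiApplication

app = QGuiApplication(sys.argv)

# Query the primary display directly from Qt, no libkscreen needed
primary = app.primaryScreen()
print("primary:", primary.name(), primary.geometry())

# React when the primary display changes (signal added in Qt 5.6)
# (event loop omitted for brevity)
app.primaryScreenChanged.connect(
    lambda screen: print("primary is now", screen.name()))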

These changes should fix much of the panel placement grief that bug 356225 causes. Now that it's merged, it needs some good testing. We'd therefore like as many people as possible, especially those reporting problems with multiscreen, to test against the latest Plasma git master (or the upcoming Plasma 5.7 beta, which is slated for release on June 16th).

Remember the config

Another rough area under observation right now is remembering and picking the right configuration from a previous setup, for example when you return to your docking station with another display connected. Bug 358011 is an example of that. Here, we get "spurious" events about hardware changes from X11, and I'm unsure where they come from. The problem is not easy to reproduce; it only happens for certain setups. This bug was likely introduced with the move to Qt 5 and Frameworks; it's a regression compared to Plasma 4.
I've re-reviewed the existing code, added more autotests and made the code more robust in some places that seemed relevant, but I can't say that we've found a sure solution to these problems. The code is now also better instrumented for debugging the areas at play here. Now we need some more testing of the upcoming beta. This is certainly not unfixable, but it needs feedback from testing so we can apply further fixes if needed.

Code quality musings

From a software engineering point of view, we're facing some annoying problems. It took us a long time to get upstream QScreen code robust and featureful enough to draw the lines between the components involved, especially QScreen and libkscreen, more clearly, which certainly helps to reduce hard-to-debug race conditions involving hardware events. The fact that it's almost impossible to properly unit test large parts of the stack (X11 and hardware events are especially difficult in that regard) means that it's hard to control the quality. On the other hand, we're lacking testers, especially those who face said problems and are able to test the latest versions of Qt and Plasma.
QA processes are something we've spent some serious work on. On the one hand, our review processes for new code and changes to current code are a lot stricter, so we catch more problems and potential side effects before code gets merged. For new code, especially the Wayland support, our QA story also looks a lot better: we're aiming for near-100% autotest coverage, and in many cases the autotests are a lot more demanding than the real-world use cases. Still, it's a lot of new code that needs some real-world exposure, which we hope to get more of when users test Plasma 5.7 using Wayland.

30 May 2016 10:56pm GMT

Costales: Ubucon Paris 16.04. Day 2

The last day of the Ubucon Paris for the xenial release!


Podcast in 3... 2... 1...

This time, we did an international podcast: Rudy, Quesh, Didier, Gonzalo and I from the Ubuntu Party, and Marius, Ilonka, Alfred and even Simon from their homes.
We spoke about the Ubucons, the tablet and other news. What a great experience!



That took a few hours, until lunch time. Then I ate with Gonzalo, Rudy, Winael, Didier and Yoboy.

After lunch, I attended Gemma's talk.

Gemma's talk


Nicolas and I were drawing people into the event from the hall of the building, and it worked really well.

Nicolas did such great work!!


The Ubucon event was closed by Rudy, explaining things about convergence, the Ubucon, etc.

Last conference

We finished in a restaurant, with not as many people as yesterday, but enough :) Dinner and a few drinks together.

Cheers!

What an exciting, great event. Ubuntu Paris does such great work and this team is incredible.
Congrats!!

Convergence


Lovely Mozilla!

Hall


Quesh


wow!


Indeed!


Future :)



Until the next one!


30 May 2016 9:54pm GMT

Ubuntu App Developer Blog: Can I haz MainView in a Window?

When using Unity8 these days, connecting a Bluetooth mouse to a device enables windowed mode. Another option is to connect an external monitor via HDMI and, most recently on some devices, wireless displays. This raises a few questions on the API side of things.

Apps are currently advised to use a MainView as the root item, which can have a width and a height used as the window dimensions in a windowed environment - on phones and tablets all apps are full screen by default. As soon as users can freely resize the window, some apps may not look great anymore - QtQuick.Window solves this by providing minimumWidth/minimumHeight and maximumWidth/maximumHeight properties. Another question is what title is used for the window - as soon as there is more than one Page that's no longer obvious, and it's actually somewhat redundant.

So what can we do now?

There are two ways to sort this that we'll be discussing here. One way is to in fact go ahead and use MainView, which is just an Item, and put it inside a Window. That's perfectly fine to do, and it's a good stop-gap for any apps affected now. To the user the outcome is almost the same, except the title and sizing can be customized behind the scenes.

import QtQuick 2.4
import QtQuick.Window 2.2
import Ubuntu.Components 1.3

Window {
    title: "Hello World"
    minimumWidth: units.gu(30)
    minimumHeight: units.gu(50)
    maximumWidth: units.gu(90)
    maximumHeight: units.gu(120)

    MainView {
        applicationName: "Hello World"
    }
}

From here on after things work exactly the same way they did before. And this is something that will continue to work in the future.

A challenger appears

That said, there's another way under discussion. What if there was a new MainWindow component that could replace MainView and provide the missing features out of the box? Code would be simpler. Is it worth it, though, just to save some lines of code, you might wonder? Yes, actually. It is worth it when performance enters the picture.

As it is now, MainView does many different things. It displays a header for starters - that is, if you're not using AdaptivePageLayout to implement convergence. It also has the automaticOrientation API, something the shell does a much better job of these days. And it handles actions, which are, like the header, part of each Page now. It's still doing a good job at things we need, like setting up folders for confinement (config, cache, localization) and making space for the OSK (in the form of anchorToKeyboard). So in short, there are several internals to reconsider if we had a chance to replace it.

Even more drastic would be the impact of implementing properties in MainWindow that right now are context properties. "units" and "theme" are very useful in so many ways, and at the same time, by design, super slow because of how QML processes context properties. A new toplevel component in C++ could provide regular object properties without that overhead, potentially speeding up every single use of those properties throughout the application, as well as the components using them behind the scenes.

Let's be realistic, however: these are ideas that need discussion, API design and planning. None of this is going to be available tomorrow or next week. So by all means engage in the discussions - maybe there are more use cases to consider, or other approaches. It's the one component virtually every app uses, so we'd better do a good job coming up with a worthy successor.

30 May 2016 1:06pm GMT

Kubuntu: Plasma 5.6.4 available in 16.04 Backports

The Kubuntu Team announces the availability of Plasma 5.6.4 on Kubuntu 16.04, through our Backports PPA.

Plasma 5.6.4 Announcement:
https://www.kde.org/announcements/plasma-5.6.4.php

How to get the update (on the command line):
1. sudo apt-add-repository ppa:kubuntu-ppa/backports
2. sudo apt update
3. sudo apt full-upgrade -y

Plasma 5.6.4 Screenshot - 16.04

Here is a great video demoing some of the new features in this release:
https://www.youtube.com/watch?v=v0TzoXhAbxg

30 May 2016 12:15pm GMT

Forums Council: New Ubuntu Member via forums contributions

The Forum Council is proud to announce a new Ubuntu membership obtained through forum contributions.

Please welcome our newest Member, Mark Phelps. You can see Mark's application thread here.

Mark has been a long-time contributor and has always shown sustained and helpful contributions to the forums.

If you have been a contributor to the forums and wish to apply for Ubuntu Membership, all you have to do is put together wiki and Launchpad pages, sign the Ubuntu Code of Conduct, and follow the process outlined in the Ubuntu Membership via Forums contributions wiki page.


30 May 2016 10:24am GMT

29 May 2016

Planet Ubuntu

Svetlana Belkin: What Programs Do I Use: Mudlet

Like many people, I have different hobbies, and also like many, I play computer/video games. But not to the extreme, as some do. I do have Steam and my username is senseopennes if you want to add me. I have played many graphical games but I tend to get bored of them fast. I mean it. I think the longest I ever stuck with a game was one year, off and on, and that was an MMORPG (RO or AO, I think).

The only game that I've played and am still playing is a text-based multi-user dungeon (MUD) called Armageddon MUD. It's a 20-plus-year-old game where roleplay is enforced, meaning that while you have coded actions, you also need to roleplay them out. In short, it's collaborative storytelling. I think I have played it for 7 years, off and on, and my longest-lived character (it's a perma-death game) lived for close to two real-life years before I had to store them. One day, I will write a post dedicated to Armageddon MUD, something I said I was going to do ages ago…

Anyhow, the MUD doesn't have a client of its own that I can play on, so I use a client called Mudlet:

Screenshot from 2016-05-29 10-44-13

Mudlet's main screen, showing Armageddon MUD's main screen.

I used three other clients before Mudlet: two for Windows back in 2008 - 2009 (MUSH and something else) and one on Ubuntu, which was KClient. KClient stopped working with Armageddon MUD after the staff of the MUD moved the server to the cloud. I looked into what other players of the MUD were using and found that Mudlet was the most popular. Like most free and open source programs, Mudlet is very customizable, but I just use it out of the box. I have no triggers or keystrokes set up; I don't need them. I type out everything. Someday I might work on customizing it.

It's a great program out of the box: you can have multiple profiles and games running at the same time, and they can all be saved. One thing I really like about Mudlet is its built-in notepad for notes. I use it a lot to keep track of things.

I plan to write about MyPaint next week. See you then!

29 May 2016 6:16pm GMT

Lubuntu Blog: Top Menu for Lubuntu

Thanks to the blog WebUpd8, there's a new "trick" to add an app menu to the LXDE panel, just like the Unity interface has. Check out this nice tutorial on our Tips'n'Tricks page.

29 May 2016 12:39pm GMT

Costales: Ubucon Paris 16.04. Day 1

And the first day of the Ubucon Paris!

When I arrived there were already a lot of people in all areas.

Install Party area



I attended Quesh's talk, an introduction to the community.

Quesh's talk



After that, Didier told us about Snappy packages. They look great.

Didier's talk



Then I went to eat and saw Nicolas there. Nicolas is such a great guy. I spent a few hours talking with him.

Nicolas and me



And then I spoke a bit about uNav's first anniversary :) And there was a big, big, big surprise from the Ubuntu Party members :)) They came with uNav and Ubuntu presents and sang happy birthday :') celebrating 10 years of Ubucon Paris and 1 year of uNav :)) (You guys are the best!)

:')))



And after that, it was dinner time. So many members in the same restaurant.

Dinner


Presents from Ubuntu Paris



This was a great first day. Tomorrow will be the last day of the Ubucon Paris.

29 May 2016 11:11am GMT

28 May 2016

Planet Ubuntu

James Hunt: Procenv 0.46 - now with more platform goodness


I have just released procenv version 0.46. Although this is a very minor release for the existing platforms (essentially 1 bug fix), this release now introduces support for a new platform...

Darwin

Yup - OS X now joins the ranks of supported platforms.

Although adding support for Darwin was made significantly easier as a result of the recent internal restructure of the procenv code, it did present a challenge: I don't own any Apple hardware. I could have borrowed a Macbook, but instead I decided to see this as a challenge: could I add support for a platform I couldn't even run?

Well, you've just read the answer, but how did I do this?

Stage 1: Docker


Whilst surfing around I came across this interesting docker image:


It provides a Darwin toolchain that I could run under Linux. It didn't take very long to follow my own instructions on porting procenv to a new platform. But although I ended up with a binary, I couldn't actually run it, partly because Darwin uses a different binary file format to Linux: rather than ELF, it uses the Mach-O format.



Stage 2: Travis

The final piece of the puzzle for me was solved by Travis. I'd read the very good documentation on their site, but had initially assumed that you could only build Objective-C based projects on OSX with Travis. But a quick test proved my assumption to be incorrect: it didn't take much more than adding "osx" to the os list and "clang" to the compiler list in procenv's .travis.yml to have procenv building and running (it runs itself as part of its build) on OSX under Travis!

Essentially, the following YAML snippet from procenv's .travis.yml did most of the work:

language: c
compiler:
  - gcc
  - clang
os:
  - linux
  - osx



All that remained was to add the build-time dependencies to the same file with this additional snippet:

before_install:
  - if [[ "$TRAVIS_OS_NAME" == "osx" ]]; then brew update; fi
  - if [[ "$TRAVIS_OS_NAME" == "osx" ]]; then brew install expat check perl; fi


(Note that it seems Travis is rather picky about before_install - all code must be on a single line, hence the rather awkward-to-read "if; then ....; fi" tests).


Summary


Although I've never personally run procenv under OSX, I have a good degree of confidence that it does actually work.

That said, it would be useful if someone could independently verify this claim on a real system! Feel free to raise bugs, send code (or even Apple hardware :-) my way!



28 May 2016 7:44pm GMT

Michael Lustfield: Long Term Secure Backups

Not that long ago, I managed to delete all of my physical HV hosts, my backup server, all external backups, and a bit more. The first question most people would ask is probably how that's even possible. That may become a post by itself; it probably won't, though. What really matters is: how can I keep this from ever happening again?

I sat down for some time to come up with some requirements, some ideas, and eventually rolled out a backup solution that I feel confident with.

Requirements

To build this backup solution, I first needed to define a set of requirements.

  • No server can see backups from other servers
  • The backup server can not access other servers
  • The backup server must create versioned backups (historical archives)
  • No server can access its own historical archive
  • All archives must be uploaded to an off-site location
  • All off-site backups must enforce data retention
  • The backup server must be unable to delete backups from an off-site location
  • All off-site backups must be retained for a minimum of three months
  • The backup server must keep two years worth of historical archives
  • The entire solution must be fully automated
  • Low budget
  • Can't impact quality of service

Some of these may sound like common sense, but most backup tools, including the big-dollar options, don't meet all of them. In some (way too many) cases, the backup server is given root (or administrator) access on most systems.

The Stack

Deciding how this stack should be constructed was definitely the most time-consuming part of this project. I'm going to attempt to lay out what I built in the order the data flows. Wish me luck!

Server to Backup Server

The obvious choice is SSH. It's standard, reasonably secure, and very easy.

When people do backups with SSH, the typical decision is to have the backup server initiate and control backups, which almost always means the backup server can log into other servers. That makes your backup server a substantially higher-value target for an attacker. Yes, it's horrible if any system gets compromised, but having each server push its own backups minimizes the impact and aids in recovery.

Scheduling

Every server has a backup script that runs on a pseudo-random schedule. Because the node name will always be the same, and a checksum of it produces the same value every time, I was able to use the node name to build the backup schedule.

This boils down to what is essentially:

snap:
  cron.present:
    - identifier: snap
    - name: /usr/local/sbin/snap
    - hour: 2,10,18
    - minute: {{ pillar['backup_minute'] }}

The 'backup_minute' is created with ext_pillar. Building the entire ext_pillar is left as a task for the reader; what matters is:

import zlib
return zlib.crc32(grains['hostname']) % 60

You may notice that using 60 doubles the chance of a backup running at the top of the hour. You can feel free to choose 59, but I like nice round numbers that are easy to identify.
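For completeness, one possible shape for that ext_pillar module, as a sketch (not the author's actual code; it assumes the minion id matches the hostname):

import zlib

def ext_pillar(minion_id, pillar, *args, **kwargs):
    # crc32 is stable across runs, so each host always gets the same,
    # roughly uniformly distributed, minute of the hour.
    return {'backup_minute': zlib.crc32(minion_id.encode('utf-8')) % 60}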

SSH Keys

I mentioned that I wanted something 100% automated. I'm a huge fan of Salt and use it in my home environment, so Salt was the only choice for the automation.

A feature of Salt is the Salt Mine. The mine is a way for minions (every server) to report bits of data back to the salt master that can be shared with other systems. I utilized this feature to share root's SSH public key, and I also used Salt to generate that key if it didn't already exist.

Here's a mini-snippet for clarification:

root_sshkeygen:
  cmd.run:
    - name: 'ssh-keygen -f /root/.ssh/id_rsa -t rsa -N ""'
    - unless: 'test -f /root/.ssh/id_rsa.pub'

/etc/salt/minion.d/mine.conf:
  file.managed:
    - contents: |
        mine_functions:
          ssh.user_keys:
            user: root
            prvfile: False
            pubfile: /root/.ssh/id_rsa.pub

Overall, this is pretty simple, but amazingly effective.

User Accounts

At this point, all of the servers are ready to back up their data. They just aren't able to yet because the backup server is sitting there empty with no user accounts.

This part is surprisingly easy as well. I simply use Salt to create a separate jailed home directory for every server in the environment. The salt master already has the public SSH keys for every server, in addition to each server's hostname.

To keep things simple, this example does not include jails.

{% for server, keys in salt['mine.get']('*', 'ssh.user_keys').items() %}
{{ server }}:
  user.present:
    - name: {{ server }}
    - createhome: True
  ssh_auth.present:
    - user: {{ server }}
    - names: [ {{ keys['root']['id_rsa.pub'] }} ]

# Ensures the user directory is never readable by others
/home/{{ server }}:
  file.directory:
    - user: {{ server }}
    - group: {{ server }}
    - mode: '0700'
    - require:
      - user: {{ server }}
{% endfor %}

This will get user accounts created on the backup server, add the SSH public key to the user's authorized keys, and force the user's home directory to mode 700, which prevents other users/groups from accessing the data.

Backup Archives

Now that data is getting from all servers to the backup server, it's time to start keeping more than a single copy of the data. The best tool I could find for this job was rsnapshot. I simply point rsnapshot at /home (or /srv/jails) and keep the data stored where the existing servers can't access it. This means no compromised server can destroy any previous backups.
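For reference, the rsnapshot side of this only takes a few lines of rsnapshot.conf; a sketch with assumed paths and retention counts (note that rsnapshot requires tabs, not spaces, between fields):

# Assumed locations and retention; 24 monthlies covers the
# two-years-of-archives requirement above
snapshot_root	/srv/rsnapshot/
retain	daily	7
retain	weekly	4
retain	monthly	24
backup	/home/	servers/

Each retain line is driven by a matching cron entry (rsnapshot daily, rsnapshot weekly, and so on).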

I broke some of my own rules and have rsnapshot also backing up my pfSense device as well as my Cisco switch configurations. I'll get a better solution in place for those, but that is its own project.

Ice Ice Baby

At this point, we have a rather complete backup option that meets nearly everything I care about. So far, we're at $0.00 to build this solution. However, off-site backups haven't been included.

Do you want to trust your buddy and arrange to share backups with each other? Hopefully the obvious answer to everyone is an emphatic NO.

The only two reasonable options I found were AWS Glacier and Google Nearline. Because we're talking about data that you should never need to actually access, the two options are very comparable. Google Nearline advertises fastest time to first byte; however, the more you pull down, the slower your retrieval rate is. AWS Glacier advertises cheapest storage, but the faster you want your data, the more you get to pay.

The important thing to remember is that you're dealing with an off-site backup. You are "putting it on ice." If nothing ever breaks, the only time you will ever access this data is to verify your backup process.

I wrote a relatively simple script that runs from cron (twice a month) and does the following (a sketch follows the list):

  • Creates a squashfs image of the entire rsnapshot archive
  • Encrypts the squashfs image with a public GPG key
  • Uploads the encrypted image
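A minimal sketch of such a script in Python (the paths, key id, and upload target are assumptions, not the author's actual setup):

#!/usr/bin/env python
import subprocess
import time

archive = "/srv/rsnapshot"  # rsnapshot snapshot_root (assumed)
image = "/tmp/offsite-%s.squashfs" % time.strftime("%Y%m%d")

# 1. Create a squashfs image of the entire rsnapshot archive
subprocess.check_call(["mksquashfs", archive, image, "-noappend"])

# 2. Encrypt the image to the dedicated public key; writes image + ".gpg"
#    (the recipient id is a placeholder)
subprocess.check_call(["gpg", "--encrypt", "--trust-model", "always",
                       "--recipient", "backup@example.com", image])

# 3. Upload the encrypted image off-site (Nearline via gsutil shown here;
#    a Glacier upload would be analogous)
subprocess.check_call(["gsutil", "cp", image + ".gpg",
                       "gs://example-offsite-backups/"])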

I created a GPG key pair for this single process, encrypted the private key with my personal key, moved multiple copies (including paper) to various locations, and removed the private key from the server.

Wrapping Up

There are a lot of backup options that exist. I have concerns about nearly every option that exists, including most commercial/enterprise offerings. To have a backup solution that I considered reasonably secure, I had to spend a lot of time thinking through the process and researching many different tools.

I very much hope that what I put here will prove useful to other people trying to address similar concerns. As always, I'm more than eager to answer questions.

28 May 2016 5:00am GMT

27 May 2016

Planet Ubuntu

Aurélien Gâteau: Mass edit your tasks with t_medit

If you are a Yokadi user, or if you have used other todo list systems, you might have encountered the situation where you want to quickly add a set of tasks to a project. Using Yokadi you would repeatedly write t_add <project> <task title>. History and auto-completion on command and project names make entering tasks faster, but it is still slower than the good old TODO file where you just write down one task per line.

t_medit is a command to get the best of both worlds. It takes the name of a project as an argument and starts the default editor with a text file containing a line for each task of the project.

Suppose you have a "birthday" project like this:

yokadi> t_list birthday
                             birthday
ID|Title               |U  |S|Age     |Due date
-----------------------------------------------------------------
1 |Buy food (grocery)  |0  |N|2m      |
2 |Buy drinks (grocery)|0  |N|2m      |
3 |Invite Bob (phone)  |0  |N|2m      |
4 |Invite Wendy (phone)|0  |N|2m      |
5 |Bake a yummy cake   |0  |N|2m      |
6 |Decorate living-room|0  |N|2m      |

Running t_medit birthday will start your editor with this content:

1 N @grocery Buy food
2 N @grocery Buy drinks
3 N @phone Invite Bob
4 N @phone Invite Wendy
5 N Bake a yummy cake
6 N Decorate living-room

By editing this file you can do a lot of things:

• Change a task's status by editing the letter after the task id (N for new, S for started, D for done)
• Edit task titles and keywords
• Remove a task by deleting its line
• Add a new task by inserting a line starting with -

Let's say you modify the text like this:

2 N @grocery Buy drinks
1 N @grocery Buy food
3 D @phone Invite Bob
4 N @phone Invite Wendy & David
- @phone Invite Charly
5 N Bake a yummy cake
- S Decorate table
- Decorate walls

Then Yokadi will:

• Mark task 3 as done
• Change the title of task 4 to "Invite Wendy & David"
• Create a new task "Invite Charly" with the @phone keyword
• Remove task 6 ("Decorate living-room")
• Create new tasks "Decorate table" (marked as started) and "Decorate walls"

You can even quickly create a project: for example, if you want to plan your holidays you can type t_medit holidays. This creates the "holidays" project and opens an empty editor. Just type new tasks, one per line, prefixed with -. When you save and quit, Yokadi creates the tasks you entered.

One last bonus: if you use Vim, Yokadi ships with a syntax highlight file for t_medit:

t_medit syntax highlight

This should be in the upcoming 1.1.0 version, which I plan to release soon. If you want to play with it earlier, you can grab the code from the git repository. Hope you like it!

27 May 2016 11:02pm GMT