18 Aug 2017

feedLXer Linux News

How to Manage CentOS 7 Server with Webmin

In this tutorial, we will install Webmin on CentOS 7.2 and learn to administer the Apache web server, firewalld, and Webmin's own configuration. Webmin is a free and open source web-based system administration tool for Unix-like systems. It provides a rich and powerful web interface for administering the server along with popular applications such as Apache, BIND, and the Squid proxy.

18 Aug 2017 7:33pm GMT

Listen To Your Favorite Radio Station With A Single Command on Linux

Internet radio is a great way to listen to different radio stations from across your country or the world in real time. Unlike listening to your own music collection, it gives you the opportunity to discover new artists and genres that you might not have explored otherwise. Many Internet radio stations are browser-based, meaning they're easily accessible regardless of your operating system, but having yet another browser window open isn't all that convenient, and it eats into RAM. Plus, you're on Linux, why not have an awesome command line hack to tune into your favorite Internet radio station in seconds?
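For instance, a tiny shell helper along these lines does the trick (the station names and stream URLs below are placeholders, not real streams; mpv and mplayer are common choices of command line player):

```shell
# radio: play a named Internet radio station with mpv or mplayer.
# Substitute the .m3u/.pls URL your favorite station actually publishes;
# the example.com URLs here are illustrative only.
radio() {
    case "$1" in
        jazz) url="http://example.com/streams/jazz.m3u" ;;
        news) url="http://example.com/streams/news.m3u" ;;
        *)    echo "radio: unknown station '$1'" >&2; return 1 ;;
    esac
    # Use whichever player is installed.
    player=$(command -v mpv || command -v mplayer) || {
        echo "radio: install mpv or mplayer first" >&2; return 1
    }
    "$player" "$url"
}
```

Drop the function into your ~/.bashrc and `radio jazz` tunes in with one command.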

18 Aug 2017 6:01pm GMT

feedLinuxtoday.com

Listen To Your Favorite Radio Station With A Single Command on Linux

Internet radio is a great way to listen to different radio stations from across your country or the world in real time.

18 Aug 2017 5:00pm GMT

feedLXer Linux News

Don't hate COBOL until you've tried it

COBOL is the Rodney Dangerfield of programming languages: it doesn't get any respect. It is routinely denigrated for its verbosity and dismissed as archaic. Yet COBOL is far from a dead language. It processes an estimated 85% of all business transactions, and 5 billion lines of new COBOL code are written every year.

18 Aug 2017 4:30pm GMT

feedLinuxtoday.com

How to Boot into Single User Mode in CentOS/RHEL 7

Tecmint: In this tutorial, we will describe how to boot into single user mode on CentOS 7.
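In outline, the standard CentOS/RHEL 7 procedure (per Red Hat's documentation; verify against the full tutorial before relying on it) looks roughly like this:

```shell
# At the GRUB2 menu, press 'e' on the default entry, then append one of the
# following to the line beginning with "linux16" (or "linux"):
#
#   systemd.unit=rescue.target   # single-user mode (root password required)
#   rd.break                     # emergency shell before the root fs mounts
#
# Press Ctrl-x to boot. With rd.break, the real root is under /sysroot,
# so from the emergency shell:
mount -o remount,rw /sysroot
chroot /sysroot
```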

18 Aug 2017 4:00pm GMT

feedLXer Linux News

Krita 3.2.0 Supports Smart Patching Elements In Paintings And 7 New Brushes Presets

Krita is free and open source drawing and image editing software for creating high-quality, professional paintings. Krita 3.2.0 has been released, bringing some interesting features and bug fixes. Check out the key features of Krita 3.2.0.

18 Aug 2017 2:58pm GMT

feedLinuxtoday.com

5 open source alternatives to Slack for team chat

When it comes to chat, there are plenty of open source options.

18 Aug 2017 2:00pm GMT

feedLXer Linux News

How to recover from a git mistake

Today my colleague almost lost everything he had done over four days of work. Because of an incorrect git command, he dropped the changes he'd saved in the stash. After this sad episode, we looked for a way to recover his work... and we did it! First, a warning: when you are implementing a big feature, split it into small pieces and commit regularly. It's not a good idea to work for a long time without committing your changes.
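The recovery trick the author hints at can be sketched with plain git: a dropped stash is just an unreachable commit, and `git fsck` can still dig it up. A minimal, self-contained demonstration (repo and file names invented for the sketch; in a real repository you would start at the fsck step):

```shell
# Demo: recover a dropped stash via git fsck, in a throwaway repository.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name Demo
echo base > work.txt
git add work.txt
git commit -qm base

echo "four days of work" > work.txt
git stash -q        # the changes now live only in the stash
git stash drop -q   # oops: the stash entry is gone

# A dropped stash is an unreachable commit with two (or more) parents;
# filter fsck's unreachable commits down to stash-like ones.
lost=$(git fsck --unreachable 2>/dev/null |
    awk '$2 == "commit" {print $3}' |
    while read -r sha; do
        [ "$(git rev-list --parents -n 1 "$sha" | wc -w)" -ge 3 ] && echo "$sha"
    done | head -n 1)

git stash apply "$lost"   # the "lost" work is back in the working tree
grep "four days of work" work.txt
```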

18 Aug 2017 1:44pm GMT

feedLinuxtoday.com

Another Behind-the-Scenes Niche Where Open Source is Winning

Do you spend a lot of time thinking about Bluetooth Low Energy (BLE) beacons? Unless you run a retail store, probably not.

18 Aug 2017 1:00pm GMT

feedLXer Linux News

Oracle Wants to Open Up Java EE

Java Enterprise Edition could be leaving the tight control of Oracle and moving to an Open Source Foundation (maybe).

18 Aug 2017 12:30pm GMT

An Early Look at Ubuntu Dock for GNOME Shell in Ubuntu 17.10 (Artful Aardvark)

Ubuntu 17.10, the next major release of the widely-used Ubuntu Linux OS, will be transitioning to the GNOME Shell user interface by default instead of the Unity desktop environment that was used until now.

18 Aug 2017 11:15am GMT

feedLinuxtoday.com

Oracle Wants to Open Up Java EE

EnterpriseAppsToday: Java Enterprise Edition could be leaving the tight control of Oracle and moving to an Open Source Foundation (maybe)

18 Aug 2017 11:00am GMT

feedLXer Linux News

Skilled bad actors use new pulse wave DDoS attacks to hit multiple targets

In a new report, Incapsula warns about a new type of ferocious DDoS attack that uses "pulse waves" to hit multiple targets. Pulse wave DDoS is a new attack tactic designed by skilled bad actors "to double the botnet's output and exploit soft spots in 'appliance first cloud second' hybrid mitigation solutions."

18 Aug 2017 10:01am GMT

CoreOS extends Kubernetes to Microsoft Azure

CoreOS's Kubernetes distro, Tectonic 1.7, delivers on hybrid cloud by extending container DevOps capabilities across open-source and Azure clouds and data centers.

18 Aug 2017 8:47am GMT

Docker Can Now Containerize Legacy Apps Running on Mainframes

The new release comes on the heels of a report last week from Bloomberg that the container company has been raising money, which will result in $75 million being added to its coffers by the end of the month, bringing with it a new valuation of $1.3 billion - up $300 million from its previous valuation.

18 Aug 2017 7:32am GMT

Under $15 open spec SBC offers Allwinner H5, GbE, and WiFi

The Orange Pi Zero Plus is a quad-core Cortex-A53 version of the Zero that advances to Gigabit Ethernet. You also get WiFi, USB host and OTG, and Linux/Android images. Shenzhen Xunlong is clearly trying to mess with our minds.

18 Aug 2017 6:18am GMT

feedLinuxtoday.com

Ethereum Blockchain Powers Vault One Password Service

eSecurityPlanet: Ethereum isn't just for cryptocurrency anymore as a new startup uses the underlying open-source blockchain to help improve password security.

18 Aug 2017 6:00am GMT

feedLXer Linux News

Manipulate IPv6 Addresses with ipv6calc

Last week, you may recall, we looked at calculating network addresses with ipcalc. Now, dear friends, it is my pleasure to introduce you to ipv6calc, the excellent IPv6 address manipulator and query tool by Dr. Peter Bieringer. ipv6calc is a little thing; on Ubuntu /usr/bin/ipv6calc is about 2MB, yet it packs in a ton of functionality.
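A few typical invocations, assuming the ipv6calc package is installed (flag names as given in its man page; the script skips gracefully if the tool is absent):

```shell
# Skip gracefully on systems without ipv6calc installed.
command -v ipv6calc >/dev/null 2>&1 || { echo "ipv6calc not installed"; exit 0; }

# Compress a fully expanded IPv6 address to its short form.
ipv6calc --in ipv6addr --out ipv6addr --printcompressed \
    2001:0db8:0000:0000:0000:0000:0000:0001

# Expand a compressed address back out.
ipv6calc --in ipv6addr --out ipv6addr --printuncompressed 2001:db8::1

# Query information about an address (type, scope, registry, and so on).
ipv6calc --showinfo 2001:db8::1
```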

18 Aug 2017 5:04am GMT

How to Install Sensu Monitoring on Ubuntu 16.04

Sensu is a free and open source tool for composing the monitoring system you need. In this tutorial, we will go through step by step installation of Redis, RabbitMQ and Sensu on Ubuntu 16.04.

18 Aug 2017 3:49am GMT

Raspbian Linux OS for Raspberry Pi Is Now Based on Debian GNU/Linux 9 "Stretch"

As of Wednesday, August 16, 2017, the Raspberry Pi Foundation has released new installation images of its Debian-based Raspbian Linux operating system rebased on Debian GNU/Linux 9 "Stretch" series.

18 Aug 2017 2:35am GMT

feedLinuxtoday.com

Linux-based postmarketOS project aims to give smartphones a 10-year lifecycle

Liliputing: A new effort aims to develop a Linux-based alternative to Android with the goal of providing up to 10 years of support for old smartphones.

18 Aug 2017 2:00am GMT

feedLXer Linux News

11 Open Source Tools for Writers

Here are 11 open source tools for all your writing needs.

18 Aug 2017 1:21am GMT

Another Behind-the-Scenes Niche Where Open Source is Winning

Do you spend a lot of time thinking about Bluetooth Low Energy (BLE) beacons? Unless you run a retail store, probably not. But if you do run a store (or stores) along with an e-commerce operation, BLE is a hot new thing you are either using already or thinking about using before long.

18 Aug 2017 12:06am GMT

17 Aug 2017

feedLinuxtoday.com

Review: System76’s Galago Pro solves “just works” Linux’s Goldilocks problem

ars technica: We also get a look at System76's in-house OS (with in-house gear on the horizon).

17 Aug 2017 10:00pm GMT

RancherOS: A tiny Linux for Docker lovers

InfoWorld: Rancher Labs takes the container paradigm to its final destination with a completely Dockerized operating system

17 Aug 2017 9:00pm GMT

feedKernel Planet

Linux Plumbers Conference: Tracing/BPF Microconference Accepted into the Linux Plumbers Conference

Following on from the successful Tracing Microconference last year, we're pleased to announce there will be a follow on at Plumbers in Los Angeles this year.

The agenda for this year will not focus only on tracing but will also include several topics around eBPF. Now that eBPF interacts with tracing, there is still a lot of work to accomplish, such as building an infrastructure around the current tools to compile and utilize eBPF within the tracing framework. Topics outside of eBPF will include enhancing uprobes and tracing virtualized and layered environments. Of particular interest are new techniques to improve kernel-to-user-space tracing integration. This includes usage of uftrace and better symbol resolution of user space addresses from within the kernel. Additionally, there will be a discussion of the challenges of real-world use cases faced by non-kernel engineers.

For more details on this, please see this microconference's wiki page.

We hope to see you there!

17 Aug 2017 4:51pm GMT

16 Aug 2017

feedKernel Planet

Linux Plumbers Conference: Trusted Platform Module Microconference Accepted into the Linux Plumbers Conference

Following on from the TPM Microconference last year, we're pleased to announce there will be a follow on at Plumbers in Los Angeles this year.

The agenda for this year will focus on a renewed attempt to unify the 2.0 TSS; cryptosystem integration to make TPMs just work for the average user; the current state of measured boot and where we're going; using TXT with TPM in Linux and using TPM from containers.

For more details on this, please see this microconference's wiki page

We hope to see you there!

16 Aug 2017 12:01am GMT

14 Aug 2017

feedKernel Planet

Dave Airlie (blogspot): radv on SI and CIK GPU - update

I recently acquired an r7 360 (BONAIRE) and spent some time getting radv stable and passing the same set of conformance tests that VI and Polaris pass.

The main missing piece was 10-bit integer format clamping to work around a bug in the SI/CIK fragment shader output hardware, where it truncates instead of clamping. The other missing piece was code for handling f16->f32 conversions according to the vulkan spec, which I'd previously fixed for VI.

I also looked at a trace from amdgpu-pro and noticed it was using a ds_swizzle for the derivative calculations which avoided accessing LDS memory. I wrote support to use this path for radv/radeonsi since LLVM supported the intrinsic for a while now.

With these fixed CIK is pretty much in the same place as VI/Polaris.

I then plugged in my SI (Tahiti) and got lots of GPU hangs and crashes. I fixed a number of SI-specific bugs (tiling and MSAA handling, stencil tiling). However, even with those fixed I was getting random hangs, and a bunch of people on a bugzilla had noticed the same thing. I eventually discovered that adding a shader pipeline and cache flush at the end of every command buffer fixed it (this took a few days to narrow down exactly). We aren't 100% sure why this is required on SI only; it may be a kernel bug or a command processor bug, but it does mean radv on SI can now run games without hanging.

There are still a few CTS tests outstanding on SI only, and I'll probably get to them eventually, however I also got an RX Vega and once I get a newer BIOS for it from AMD I shall be spending some time fixing the radv support for it.

14 Aug 2017 3:16am GMT

10 Aug 2017

feedKernel Planet

Linux Plumbers Conference: Scheduler Workloads Microconference Accepted into the Linux Plumbers Conference

New to Linux Plumbers Conference this year, the Scheduler Workloads Microconference will focus on understanding various workloads and their impact on the Linux Kernel Scheduler. The objective is to initiate a cross organizational and architectural discussion involving currently available (or in development) benchmarks and their effectiveness in evaluating the scheduler for these workloads.

The agenda for this year will focus on sharing current workload and benchmark tools and traces and how these can be used to improve the various Linux subsystems, including power management and real time. Given that benchmarking the Linux scheduler is a controversial topic and often depends on proprietary tools, we'll also discuss how to develop fully open source tools and benchmarks for this.

For more details on this, please see this microconference's wiki page.

We hope to see you there!

10 Aug 2017 7:10pm GMT

08 Aug 2017

feedKernel Planet

Daniel Vetter: Why Github can't host the Linux Kernel Community

A while back at the awesome maintainerati I chatted with a few great fellow maintainers about how to scale really big open source projects, and how github forces projects into a certain way of scaling. The linux kernel has an entirely different model, which maintainers hosting their projects on github don't understand, and I think it's worth explaining why and how it works, and how it's different.

Another motivation to finally get around to typing this all up is the HN discussion on my "Maintainers Don't Scale" talk, where the top comment boils down to "… why don't these dinosaurs use modern dev tooling?". A few top kernel maintainers vigorously defend mailing lists and patch submissions over something like github pull requests, but at least some folks from the graphics subsystem would love more modern tooling which would be much easier to script. The problem is that github doesn't support the way the linux kernel scales out to a huge number of contributors, and therefore we can't simply move, not even just a few subsystems. And this isn't about just hosting the git data, that part obviously works, but how pull requests, issues and forks work on github.

Scaling, the Github Way

Git is awesome, because everyone can fork and create branches and hack on the code very easily. And eventually you have something good, and you create a pull request for the main repo and get it reviewed, tested and merged. And github is awesome, because it figured out a UI that makes this complex stuff all nice&easy to discover and learn about, and so makes it a lot simpler for new folks to contribute to a project.

But eventually a project becomes a massive success, and no amount of tagging, labelling, sorting, bot-herding and automating will be able to keep on top of all the pull requests and issues in a repository, and it's time to split things up into more manageable pieces again. More importantly, at a certain size and age of a project, different parts need different rules and processes: the shiny new experimental library has different stability and CI criteria than the main code, and maybe you have some dumpster pile of deprecated plugins that aren't supported any more but that you can't yet delete. You need to split up your humongous project into sub-projects, each with their own flavour of process and merge criteria and their own repo with their own pull request and issue tracking. Generally it takes a few tens to a few hundreds of full-time contributors until the pain is big enough that such a huge reorganization is necessary.

Almost all projects hosted on github do this by splitting up their monorepo source tree into lots of different projects, each with its distinct set of functionality. Usually that results in a bunch of things that are considered the core, plus piles of plugins and libraries and extensions. All tied together with some kind of plugin or package manager, which in some cases directly fetches stuff from github repos.

Since almost every big project works like this, I don't think it's necessary to dwell on the benefits. But I'd like to highlight some of the issues this is causing:

Interlude: Why Pull Requests Exist

The linux kernel is one of the few projects I'm aware of which isn't split up like this. Before we look at how that works - the kernel is a huge project and simply can't be run without some sub-project structure - I think it's interesting to look at why git does pull requests: On github pull request is the one true way for contributors to get their changes merged. But in the kernel changes are submitted as patches sent to mailing lists, even long after git has been widely adopted.

But the very first version of git supported pull requests. The audience of these first, rather rough, releases was kernel maintainers, git was written to solve Linus Torvalds' maintainer problems. Clearly it was needed and useful, but not to handle changes from individual contributors: Even today, and much more back then, pull requests are used to forward the changes of an entire subsystem, or synchronize code refactoring or similar cross-cutting change across different sub-projects. As an example, the 4.12 network pull request from Dave S. Miller, committed by Linus: It contains 2k+ commits from 600 contributors and a bunch of merges for pull requests from subordinate maintainers. But almost all the patches themselves are committed by maintainers after picking up the patches from mailing lists, not by the authors themselves. This kernel process peculiarity that authors generally don't commit into shared repositories is also why git tracks the committer and author separately.
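The author/committer split is easy to see with stock git; a throwaway-repo sketch (names are placeholders):

```shell
# Demo: git records author and committer separately. A maintainer applying
# a patch (git am, cherry-pick, etc.) becomes the committer while the
# original author is preserved. Simulated here with environment overrides.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.name "Maintainer"
git config user.email maintainer@example.com

echo fix > fix.txt
git add fix.txt
# The patch author is recorded via GIT_AUTHOR_*; the committer comes from
# the local config, i.e. the maintainer applying the patch.
GIT_AUTHOR_NAME="Contributor" GIT_AUTHOR_EMAIL=contributor@example.com \
    git commit -qm "fix: something"

git log -1 --format='author: %an, committer: %cn'
# prints: author: Contributor, committer: Maintainer
```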

Github's innovation and improvement was then to use pull requests for everything, down to individual contributions. But that wasn't what they were originally created for.

Scaling, the Linux Kernel Way

At first glance the kernel looks like a monorepo, with everything smashed into one place in Linus' main repo. But that's very far from it:

At first this just looks like a complicated way to fill everyone's disk space with lots of stuff they don't care about, but there's a pile of compounding minor benefits that add up:

In short, I think this is a strictly more powerful model, since you can always fall back to doing things exactly like you would with multiple disjoint repositories. Heck, there are even kernel drivers which live in their own repository, disjoint from the main kernel tree, like the proprietary Nvidia driver. Granted, it's just a bit of source-code glue around a blob, but since it can't contain anything from the kernel for legal reasons, it is the perfect example.

This looks like a monorepo horror show!

Yes and no.

At first glance the linux kernel looks like a monorepo because it contains everything. And lots of people learned that monorepos are really painful, because past a certain size they just stop scaling.

But looking closer, it's very, very far away from a single git repository. Just looking at the upstream subsystem and driver repositories gives you a few hundred. If you look at the entire ecosystem, including hardware vendors, distributions, other linux-based OS and individual products, you easily have a few thousand major repositories, and many, many more in total. Not counting any git repo that's just for private use by individual contributors.

The crucial distinction is that linux has one single file hierarchy as the shared namespace across everything, but lots and lots of different repos for all the different pieces and concerns. It's a monotree with multiple repositories, not a monorepo.

Examples, please!

Before I go into explaining why github cannot currently support this workflow, at least if you want to retain the benefits of the github UI and integration, we need some examples of how this works in practice. The short summary is that it's all done with git pull requests between maintainers.

The simple case is percolating changes up the maintainer hierarchy, until it eventually lands in a tree somewhere that is shipped. This is easy, because the pull request only ever goes from one repository to the next, and so could be done already using the current github UI.

Much more fun are cross-subsystem changes, because then the pull request flow stops being an acyclic graph and morphs into a mesh. The first step is to get the changes reviewed and tested by all the involved subsystems and their maintainers. In the github flow this would be a pull request submitted to multiple repositories simultaneously, with the one single discussion stream shared among them all. Since this is the kernel, this step is done through patch submission with a pile of different mailing lists and maintainers as recipients.

The way it's reviewed is usually not the way it's merged, instead one of the subsystems is selected as the leading one and takes the pull requests, as long as all other maintainers agree to that merge path. Usually it's the subsystem most affected by a set of changes, but sometimes also the one that already has some other work in-flight which conflicts with the pull request. Sometimes also an entirely new repository and maintainer crew is created, this often happens for functionality which spans the entire tree and isn't neatly contained to a few files and directories in one place. A recent example is the DMA mapping tree, which tries to consolidate work that thus far has been spread across drivers, platform maintainers and architecture support groups.

But sometimes there's multiple subsystems which would both conflict with a set of changes, and which would all need to resolve some non-trivial merge conflict. In that case the patches aren't just directly applied (a rebasing pull request on github), but instead the pull request with just the necessary patches, based on a commit common to all subsystems, is merged into all subsystem trees. The common baseline is important to avoid polluting a subsystem tree with unrelated changes. Since the pull is for a specific topic only, these branches are commonly called topic branches.

One example I was involved with added code for audio-over-HDMI support, which spanned both the graphics and sound driver subsystems. The same commits from the same pull request were merged into both the Intel graphics driver and the sound subsystem.
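Mechanically, such a topic branch is plain git: base it on a commit common to all the subsystem trees, then merge the same commits into each. A toy sketch (branch names invented; the two branches stand in for separate subsystem repositories):

```shell
# Demo: one topic branch, based on a common commit, merged into two
# "subsystem" branches so both end up containing identical commits.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.name Demo
git config user.email demo@example.com
echo base > core.txt
git add core.txt
git commit -qm "common base"

git branch gfx            # stand-ins for two subsystem trees
git branch sound

git checkout -qb topic/audio-over-hdmi   # topic branch off the common base
echo hdmi > hdmi.txt
git add hdmi.txt
git commit -qm "add audio-over-HDMI glue"
topic=$(git rev-parse HEAD)

git checkout -q gfx
git merge -q --no-edit topic/audio-over-hdmi
git checkout -q sound
git merge -q --no-edit topic/audio-over-hdmi

# Both subsystem branches now contain the very same commit object.
git merge-base --is-ancestor "$topic" gfx   && echo "topic is in gfx"
git merge-base --is-ancestor "$topic" sound && echo "topic is in sound"
```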

An entirely different example showing that this isn't insane: the only other relevant general-purpose, large-scale OS project in the world also decided to have a monotree, with a commit flow modelled similarly to what's going on in linux. I'm talking about the folks with such a huge tree that they had to write an entire new GVFS virtual filesystem provider to support it …

Dear Github

Unfortunately github doesn't support this workflow, at least not natively in the github UI. It can of course be done with just plain git tooling, but then you're back to patches on mailing lists and pull requests over email, applied manually. In my opinion that's the single reason why the kernel community cannot benefit from moving to github. There's also the minor issue of a few top maintainers being extremely outspoken against github in general, but that's not really a technical issue. And it's not just the linux kernel; it's all huge projects on github in general which struggle with scaling, because github doesn't really give them the option to scale to multiple repositories while sticking with a monotree.

In short, I have one simple feature request to github:

Please support pull requests and issue tracking spanning different repos of a monotree.

Simple idea, huge implications.

Repositories and Organizations

First, it needs to be possible to have multiple forks of the same repo in one organization. Just look at git.kernel.org: most of these repositories are not personal. And even if you might have different organizations for e.g. different subsystems, requiring an organization for each repo is a silly amount of overkill and just makes access and user management unnecessarily painful. In graphics, for example, we'd have one repo each for the userspace test suite, the shared userspace library, and a common set of tools and scripts used by maintainers and developers, which would work in github. But then we'd have the overall subsystem repo, plus a repository for core subsystem work and additional repositories for each of the big drivers. Those would all be forks, which github doesn't do. And each of these repos has a bunch of branches, at least one for feature work and another one for bugfixes for the current release cycle.

Combining all branches into one repository wouldn't do, since the point of splitting repos is that pull requests and issues are separated, too.

Related, it needs to be possible to establish the fork relationship after the fact. For new projects that have always been on github this isn't a big deal. But linux will be able to move at most a subsystem at a time, and there are already tons of linux repositories on github which aren't proper github forks of one another.

Pull Requests

Pull requests need to be attached to multiple repos at the same time, while keeping one unified discussion stream. You can already reassign a pull request to a different branch of a repo, but not to multiple repositories at the same time. Reassigning pull requests is really important, since new contributors will just create pull requests against what they think is the main repo. Bots can then shuffle those around to all the repos listed in e.g. a MAINTAINERS file for a given set of files and changes a pull request contains. When I chatted with githubbers I originally suggested they implement this directly. But I think as long as it's all scriptable, that's better left to individual projects, since there's no real standard.

There's a pretty funky UI challenge here, since the patch list might be different depending upon the branch the pull request is against. But that's not always a user error; one repo might simply have merged a few patches already.

Also, the pull request status needs to be different for each repo. One maintainer might close it without merging, since they agreed that the other subsystem will pull it in, while the other maintainer will merge and close the pull. Another tree might even close the pull request as invalid, since it doesn't apply to that older version or vendor fork. Even more fun, a pull request might get merged multiple times, in each subsystem with a different merge commit.

Issues

Like pull requests, issues can be relevant for multiple repos, and might need to be moved around. An example would be a bug that's first reported against a distribution's kernel repository. After triage it's clear it's a driver bug still present in the latest development branch and hence also relevant for that repo, plus the main upstream branch and maybe a few more.

Status should again be separate, since a push to one repo doesn't make the bugfix instantly available in all of them. It might even need additional work to get backported to older kernels or distributions, and some might decide that's not worth it and close it as WONTFIX, even though it's marked as successfully resolved in the relevant subsystem repository.

Summary: Monotree, not Monorepo

The Linux Kernel is not going to move to github. But moving the Linux way of scaling, with a monotree but multiple repos, to github as a concept will be really beneficial for all the huge projects already there: it'll give them a new, and in my opinion more powerful, way to handle their unique challenges.

08 Aug 2017 12:00am GMT

07 Aug 2017

feedKernel Planet

Paul E. Mc Kenney: Book review: "Antifragile: Things That Gain From Disorder"

This is the fourth and final book in Nassim Taleb's Incerto series, which makes a case for antifragility as a key component of design, taking the art of design one step beyond robustness. An antifragile system is one where variation, chaos, stress, and errors improve the results. For example, within limits, stressing muscles and bones makes them stronger. In contrast, stressing a device made of (say) aluminum will eventually cause it to fail. Taleb gives a lengthy list of examples in Table 1 starting on page 23, some of which seem more plausible than others. One implausible entry, for example, lists rule-based systems as fragile, principles-based systems as robust, and virtue-based systems as antifragile. Although I can imagine a viewpoint where this makes sense, any expectation that a significantly large swath of present-day society will agree on a set of principles (never mind virtues!) seems insanely optimistic. The table nevertheless provides much good food for thought.

Taleb states that he has constructed antifragile financial strategies using insurance to control downside risks. But he also states on page 6 "Thou shalt not have antifragility at the expense of the fragility of others." Perhaps Taleb figures that few will shed tears for any difficulties that insurance companies might get into, perhaps he is taking out policies that are too small to have material effect on the insurance company in question, or perhaps his policies are counter to the insurance company's main business, so that payouts to Taleb are anticorrelated with payouts to the company's other customers. One presumes that he has thought this through carefully, because a bankrupt insurance company might not be all that effective at controlling his downside risks.

Appendix I beginning on page 435 gives a graphical summary of the book's main messages. Figure 28 on page 441 is good grist for the mills of those who would like humanity to become an intergalactic species: After all, confining the human race seems likely to limit its upside. (One counterargument would posit that a finite object might have unbounded value, but such counterarguments typically rely on there being a very large number of human beings interested in that finite object, which some would consider to counter this counterargument.)

The right-hand portion of Figure 30 on page 442 illustrates what the author calls local antifragility and global fragility. To see this, imagine that the x-axis represents variation from nominal conditions, and the y-axis represents payoff, with large positive payoffs being highly desired. The right-hand portion shows something not unrelated to the function x^2-x^4, which gives higher payoffs as you move in either direction from x=0, peaking when x reaches one divided by the square root of two (either positive or negative), dropping back to zero when x reaches +1 or -1, and dropping like a rock as one ventures further away from x=0. The author states that this local antifragility and global fragility is the most dangerous of all, but given that he repeatedly stresses that antifragile systems are antifragile only up to a point, this dangerous situation would seem to be the common case. Those of us who believe that life is inherently dangerous should have no problem with this apparent contradiction.

But what does all of this have to do with parallel programming???

Well, how about "Is RCU antifragile?"

One case for RCU antifragility is the batching optimizations that allow many (as in thousands) concurrent requests to share the same grace-period computation. Therefore, the heavier the update-side load on RCU, the more efficiently RCU operates.

However, load is but one of many aspects of RCU's environment that might be varied. For an extreme example, RCU is exceedingly fragile with respect to small perturbations of the program counter, as Peter Sewell so ably demonstrated, by running emacs, no less. RCU is also fragile with respect to timekeeping anomalies, for example, it can emit false-positive RCU CPU stall warnings if different CPUs have tens-of-seconds disagreements as to the current time. However, the aforementioned bones and muscles are similarly fragile with respect to any number of chemical substances (AKA "poisons"), to say nothing of well-known natural phenomena such as lightning bolts and landslides.

Even when excluding hardware misbehavior such as auto-perturbing program counters and unsynchronized clocks, RCU would still be subject to software aging, and RCU has in fact required multiple interventions from its developers and maintainer in order to keep up with changing hardware, workloads, and usage. One could therefore argue that RCU is fragile with respect to perturbations of time, although the combination of RCU and its developers, reviewers, and maintainer seems to have kept up reasonably well thus far.

On the other hand, perhaps it is unrealistic to evaluate the antifragility of software without including black-hat hackers. Achieving antifragility in that sort of environment is still very much a grand challenge problem, but a challenge that must be faced. Oh, you think RCU is too low-level for this sort of attack? There was a time when I thought so. And then came rowhammer.

So please be careful, and, where possible, antifragile! It is after all a real world out there!!!

07 Aug 2017 4:36am GMT

03 Aug 2017

feedKernel Planet

Linux Plumbers Conference: Book Your Hotel for Plumbers by 18 August

As a reminder, we have a block of rooms at the JW Marriott LA Live available to attendees at the discounted conference rate of $259/night (plus applicable taxes). High speed internet is included in the room rate.

Our discounted room rate expires at 5:00 pm PST on August 18. We encourage you to book today!

Visit our Attend page for additional details.

03 Aug 2017 10:15pm GMT

29 Jul 2017

feedKernel Planet

Linux Plumbers Conference: Late Registration Begins Soon

Late registration for the Linux Plumbers Conference begins on 31 July. If you want to take advantage of the standard registration fee, register now via this link.

Standard registration is $550; late registration will be $650.

29 Jul 2017 7:21pm GMT

Linux Plumbers Conference: Checkpoint-Restart Microconference Accepted into the Linux Plumbers Conference

Following on from the successful Checkpoint-Restart Microconference last year, we're pleased to announce that there will be another at Plumbers in Los Angeles this year.

The agenda this year will focus on specific use cases of Checkpoint-Restart, such as High Performance Computing, and state-saving uses such as job scheduling and hot standby. In addition, we'll be looking at enhancements such as performance, using userfaultfd for dirty-memory tracking in iterative migration, and what it would take to have unprivileged checkpoint-restart. Finally, we'll have discussions on checkpoint-restart-aware applications and what sort of testing needs to be applied to the upstream kernel to prevent any checkpoint-restart API breakage as it evolves.

For more details on this, please see this microconference's wiki page.

We hope to see you there!

29 Jul 2017 4:42pm GMT

21 Jul 2017

feedKernel Planet

Michael Kerrisk (manpages): man-pages-4.12 is released

I've released man-pages-4.12. The release tarball is available on kernel.org. The browsable online pages can be found on man7.org. The Git repository for man-pages is available on kernel.org.

This release resulted from patches, bug reports, reviews, and comments from around 30 contributors. It includes just under 200 commits changing around 90 pages. This is a relatively small release, with one new manual page, ioctl_getfsmap(2). The most significant change in the release consists of a number of additions and improvements in the ld.so(8) page.

21 Jul 2017 6:53pm GMT

20 Jul 2017

feedKernel Planet

Paul E. Mc Kenney: Parallel Programming: Getting the English text out of the way

We have been making good progress on the next release of Is Parallel Programming Hard, And, If So, What Can You Do About It?, and hope to have a new release out soonish.

In the meantime, for those of you for whom the English text in this book has simply gotten in the way, there is now an alternative:

[Image: perfbook_cn_cover]

On the off-chance that any of you are seriously interested, this is available from Amazon China, JD.com, Taobao.com, and Dangdang.com. For the rest of you, you have at least seen the picture. ;-)

20 Jul 2017 2:37am GMT

18 Jul 2017

feedKernel Planet

Matthew Garrett: Avoiding TPM PCR fragility using Secure Boot

In measured boot, each component of the boot process is "measured" (ie, hashed and that hash recorded) in a register in the Trusted Platform Module (TPM) built into the system. The TPM has several different registers (Platform Configuration Registers, or PCRs) which are typically used for different purposes - for instance, PCR0 contains measurements of various system firmware components, PCR2 contains any option ROMs, PCR4 contains information about the partition table and the bootloader. The allocation of these is defined by the PC Client working group of the Trusted Computing Group. However, once the boot loader takes over, we're outside the spec[1].

One important thing to note here is that the TPM doesn't actually have any ability to directly interfere with the boot process. If you try to boot modified code on a system, the TPM will contain different measurements but boot will still succeed. What the TPM can do is refuse to hand over secrets unless the measurements are correct. This allows for configurations where your disk encryption key can be stored in the TPM and then handed over automatically if the measurements are unaltered. If anybody interferes with your boot process then the measurements will be different, the TPM will refuse to hand over the key, your disk will remain encrypted and whoever's trying to compromise your machine will be sad.
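The measurement chain can be sketched as follows. This is a simplification (real TPMs record events in a log and hash them per the PC Client spec), but the core extend operation really is a hash of the old PCR value concatenated with the new measurement:

```python
import hashlib

def pcr_extend(pcr, component):
    """Simplified TPM PCR extend: new = H(old || H(component))."""
    measurement = hashlib.sha256(component).digest()
    return hashlib.sha256(pcr + measurement).digest()

# PCRs start out zeroed at power-on; each boot stage extends them in turn.
pcr = bytes(32)
for component in [b"firmware", b"bootloader", b"kernel"]:
    pcr = pcr_extend(pcr, component)
good = pcr

# Change any component anywhere in the chain and the final value differs,
# so the TPM can refuse to unseal secrets against the tampered PCR.
pcr = bytes(32)
for component in [b"firmware", b"evil-bootloader", b"kernel"]:
    pcr = pcr_extend(pcr, component)
assert pcr != good
```

Because each extend folds the previous value into the next hash, there is no way to "rewind" a PCR to a desired value without rebooting through the genuine components.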

The problem here is that a lot of things can affect the measurements. Upgrading your bootloader or kernel will do so. At that point, if you reboot, your disk fails to unlock and you become unhappy. To get around this your update system needs to notice that a new component is about to be installed, generate the new expected hashes and re-seal the secret to the TPM using the new hashes. If there are several different points in the update where this can happen, this can quite easily go wrong. And if it goes wrong, you're back to being unhappy.

Is there a way to improve this? Surprisingly, the answer is "yes" and the people to thank are Microsoft. Appendix A of a basically entirely unrelated spec defines a mechanism for storing the UEFI Secure Boot policy and used keys in PCR 7 of the TPM. The idea here is that you trust your OS vendor (since otherwise they could just backdoor your system anyway), so anything signed by your OS vendor is acceptable. If someone tries to boot something signed by a different vendor then PCR 7 will be different. If someone disables secure boot, PCR 7 will be different. If you upgrade your bootloader or kernel, PCR 7 will be the same. This simplifies things significantly.

I've put together a (not well-tested) patchset for Shim that adds support for including Shim's measurements in PCR 7. In conjunction with appropriate firmware, it should then be straightforward to seal secrets to PCR 7 and not worry about things breaking over system updates. This makes tying things like disk encryption keys to the TPM much more reasonable.

However, there's still one pretty major problem, which is that the initramfs (ie, the component responsible for setting up the disk encryption in the first place) isn't signed and isn't included in PCR 7[2]. An attacker can simply modify it to stash any TPM-backed secrets or mount the encrypted filesystem and then drop to a root prompt. This, uh, reduces the utility of the entire exercise.

The simplest solution to this that I've come up with depends on how Linux implements initramfs files. In its simplest form, an initramfs is just a cpio archive. In its slightly more complicated form, it's a compressed cpio archive. And in its peak form of evolution, it's a series of compressed cpio archives concatenated together. As the kernel reads each one in turn, it extracts it over the previous ones. That means that any files in the final archive will overwrite files of the same name in previous archives.
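The overwrite semantics can be modelled with a simple simulation (plain Python dicts stand in for cpio archives here; the file names and contents are hypothetical):

```python
# Each initramfs segment is modelled as a dict of {path: contents}.
user_initramfs = {"/init": "distro init", "/etc/crypttab": "user config"}
signed_stub    = {"/init": "trusted secret-fetching init"}

# The kernel extracts the archives in order; files in later archives
# overwrite files of the same name from earlier ones.
rootfs = {}
for archive in [user_initramfs, signed_stub]:
    rootfs.update(archive)

print(rootfs["/init"])  # the init from the appended, signed archive wins
```

This is why appending the small signed initramfs after the user-provided one works: its /init replaces whatever the unsigned archive shipped, while the rest of the user-provided files survive.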

My proposal is to generate a small initramfs whose sole job is to get secrets from the TPM and stash them in the kernel keyring, and then measure an additional value into PCR 7 in order to ensure that the secrets can't be obtained again. Later disk encryption setup will then be able to set up dm-crypt using the secret already stored within the kernel. This small initramfs will be built into the signed kernel image, and the bootloader will be responsible for appending it to the end of any user-provided initramfs. This means that the TPM will only grant access to the secrets while trustworthy code is running - once the secret is in the kernel it will only be available for in-kernel use, and once PCR 7 has been modified the TPM won't give it to anyone else. A similar approach for some kernel command-line arguments (the kernel, module-init-tools and systemd all interpret the kernel command line left-to-right, with later arguments overriding earlier ones) would make it possible to ensure that certain kernel configuration options (such as the iommu) weren't overridable by an attacker.

There's obviously a few things that have to be done here (standardise how to embed such an initramfs in the kernel image, ensure that luks knows how to use the kernel keyring, teach all relevant bootloaders how to handle these images), but overall this should make it practical to use PCR 7 as a mechanism for supporting TPM-backed disk encryption secrets on Linux without introducing a huge support burden in the process.

[1] The patchset I've posted to add measured boot support to Grub uses PCRs 8 and 9 to measure various components during the boot process, but other bootloaders may have different policies.

[2] This is because most Linux systems generate the initramfs locally rather than shipping it pre-built. It may also get rebuilt on various userspace updates, even if the kernel hasn't changed. Including it in PCR 7 would entirely break the fragility guarantees and defeat the point of all of this.


18 Jul 2017 6:48am GMT

13 Jul 2017

feedKernel Planet

Linux Plumbers Conference: VFIO/IOMMU/PCI Microconference Accepted into Linux Plumbers Conference

Following on from the successful PCI Microconference at Plumbers last year we're pleased to announce a follow on this year with an expanded scope.

The agenda this year will focus on overlap and common development between VFIO/IOMMU/PCI subsystems, and in particular how consolidation of the shared virtual memory(SVM) API can drive an even tighter coupling between them.

This year we will also focus on user visible aspects such as using SVM to share page tables with devices and reporting I/O page faults to userspace in addition to discussing PCI and IOMMU interfaces and potential improvements.

For more details on this, please see this microconference's wiki page.

We hope to see you there!

13 Jul 2017 5:20pm GMT

11 Jul 2017

feedKernel Planet

Linux Plumbers Conference: Power Management and Energy-awareness Microconference Accepted into Linux Plumbers Conference

Following on from the successful Power Management and Energy-awareness at Plumbers last year we're pleased to announce a follow on this year.

The agenda this year will focus on a range of topics including CPUfreq core improvements and schedutil governor extensions, how to best use scheduler signals to balance energy consumption and performance and user space interfaces to control capacity and utilization estimates. We'll also discuss selective throttling in thermally constrained systems, runtime PM for ACPI, CPU cluster idling and the possibility to implement resume from hibernation in a bootloader.

For more details on this, please see this microconference's wiki page.

We hope to see you there!

11 Jul 2017 4:15pm GMT

James Morris: Linux Security Summit 2017 Schedule Published

The schedule for the 2017 Linux Security Summit (LSS) is now published.

LSS will be held on September 14th and 15th in Los Angeles, CA, co-located with the new Open Source Summit (which includes LinuxCon, ContainerCon, and CloudCon).

The cost of LSS for attendees is $100 USD. Register here.

Highlights from the schedule include the following refereed presentations:

There will also be the usual Linux kernel security subsystem updates, and BoF sessions (with LSM namespacing and LSM stacking sessions already planned).

See the schedule for full details of the program, and follow the twitter feed for the event.

This year, we'll also be co-located with the Linux Plumbers Conference, which will include a containers microconference with several security development topics, and likely also a TPMs microconference.

A good critical mass of Linux security folk should be present across all of these events!

Thanks to the LSS program committee for carefully reviewing all of the submissions, and to the event staff at Linux Foundation for expertly planning the logistics of the event.

See you in Los Angeles!

11 Jul 2017 11:30am GMT

10 Jul 2017

feedKernel Planet

Kees Cook: security things in Linux v4.12

Previously: v4.11.

Here's a quick summary of some of the interesting security things in last week's v4.12 release of the Linux kernel:

x86 read-only and fixed-location GDT
With kernel memory base randomization, it was still possible to figure out the per-cpu base address via the "sgdt" instruction, since it would reveal the per-cpu GDT location. To solve this, Thomas Garnier moved the GDT to a fixed location. And to solve the risk of an attacker targeting the GDT directly with a kernel bug, he also made it read-only.

usercopy consolidation
After hardened usercopy landed, Al Viro decided to take a closer look at all the usercopy routines and then consolidated the per-architecture uaccess code into a single implementation. The per-architecture code was functionally very similar to each other, so it made sense to remove the redundancy. In the process, he uncovered a number of unhandled corner cases in various architectures (that got fixed by the consolidation), and made hardened usercopy available on all remaining architectures.

ASLR entropy sysctl on PowerPC
Continuing to expand architecture support for the ASLR entropy sysctl, Michael Ellerman implemented the calculations needed for PowerPC. This lets userspace choose to crank up the entropy used for memory layouts.

LSM structures read-only
James Morris used __ro_after_init to make the LSM structures read-only after boot. This removes them as a desirable target for attackers. Since the hooks are called from all kinds of places in the kernel, this was a favorite method for attackers to use to hijack execution of the kernel. (A similar target used to be the system call table, but that has long since been made read-only.) Be wary that CONFIG_SECURITY_SELINUX_DISABLE removes this protection, so make sure that config stays disabled.

KASLR enabled by default on x86
With many distros already enabling KASLR on x86 with CONFIG_RANDOMIZE_BASE and CONFIG_RANDOMIZE_MEMORY, Ingo Molnar felt the feature was mature enough to be enabled by default.

Expand stack canary to 64 bits on 64-bit systems
The stack canary value used by CONFIG_CC_STACKPROTECTOR is most powerful on x86, since it is different per task. (Other architectures run with a single canary for all tasks.) While the first canary chosen on x86 (and other architectures) was a full unsigned long, the subsequent canaries chosen per-task for x86 were being truncated to 32 bits. Daniel Micay fixed this so now x86 (and future architectures that gain per-task canary support) have significantly increased entropy for stack-protector.
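The entropy loss from truncation is easy to quantify (a sketch of the arithmetic, not kernel code):

```python
import secrets

canary64 = secrets.randbits(64)        # a full 64-bit per-task canary
truncated = canary64 & 0xFFFFFFFF      # the pre-fix behaviour: low 32 bits only

# Truncation discards half the bits, shrinking the brute-force search
# space from 2**64 down to 2**32, a factor of over four billion.
print(truncated < 2**32)   # always True
print(2**64 // 2**32)      # 4294967296
```

An attacker who can make repeated guesses (e.g. against a forking server that keeps the same canary) benefits enormously from that smaller space, which is why restoring the full width matters.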

Expanded stack/heap gap
Hugh Dickins, with input from many other folks, improved the kernel's mitigation against having the stack and heap crash into each other. This is a stop-gap measure to help defend against the Stack Clash attacks. Additional hardening needs to come from the compiler to produce "stack probes" when doing large stack expansions. Any variable-length arrays on the stack or alloca() usage need to have machine code generated to touch each page of memory within those areas to let the kernel know that the stack is expanding, but with single-page granularity.

That's it for now; please let me know if I missed anything. The v4.13 merge window is open!

Edit: Brad Spengler pointed out that I failed to mention the CONFIG_SECURITY_SELINUX_DISABLE issue with read-only LSM structures. This has been added now.

© 2017, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

10 Jul 2017 8:24am GMT