17 Sep 2019

LXer Linux News

Amid Epstein Controversy, Richard Stallman is Forced to Resign as FSF President

Richard Stallman, founder and president of the Free Software Foundation, has resigned as its president and from its board of directors. The announcement came after a relentless campaign by a few activists and media people to remove Stallman over his remarks about the Epstein victims. Read more to get the details.

17 Sep 2019 10:25am GMT

How to Install Anaconda Python Distribution on Debian 10

Anaconda is an open-source distribution of the Python and R programming languages that can be used to simplify package management and deployment. In this tutorial, we will learn how to install Anaconda Python Distribution on Debian 10.

17 Sep 2019 10:17am GMT

I got 99 problems but a switch() ain't one: Java SE 13 lands with various tweaks as per Oracle's less-is-more strategy

All part of Big Red's regular small-ish release plan, as opposed to large infrequent updates. At its Code One conference, Oracle on Monday announced the release of Java SE 13 (JDK 13), saying it shows the tech titan's continued commitment to making innovation happen faster by sticking to a predictable six-month release cycle.…

17 Sep 2019 9:02am GMT

Chmod Command in Linux

In Linux, access to the files is managed through the file permissions, attributes, and ownership. This ensures that only authorized users and processes can access files and directories. This tutorial covers how to use the `chmod` command to change the access permissions of files and directories.
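A few illustrative invocations of both forms the tutorial covers, symbolic and octal modes (a sketch only; the scratch directory and filenames here are hypothetical, not taken from the tutorial):

```shell
# Hypothetical scratch directory and filenames, for illustration only.
mkdir -p /tmp/chmod-demo && cd /tmp/chmod-demo
touch script.sh notes.txt
mkdir -p public

chmod u+x script.sh      # symbolic mode: add execute permission for the owner
chmod 644 notes.txt      # octal mode: rw for owner, read-only for group/others
chmod -R o-rwx public    # recursive: strip all access from "others"
ls -l
```

Symbolic mode adjusts permissions relative to what a file already has, while octal mode sets them absolutely; both are covered by the tutorial.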

17 Sep 2019 7:48am GMT

Compact Kaby Lake embedded PC supports Linux

Axiomtek's fanless, rugged "eBOX100-51R-FL" embedded PC runs Linux or Win 10 on a 7th Gen U-series CPU and offers a pair each of GbE, USB 3.0, USB 2.0, and serial ports plus a DP++ port and M.2 slots for WiFi and SATA. Axiomtek announced a compact (142 x 87 x 58mm) embedded computer equipped with […]

17 Sep 2019 6:34am GMT

cmus – free terminal-based audio player

When it comes to console-based music software, I really admire musikcube, a wonderful audio engine, library, player and server written in C++. This review looks at an alternative to musikcube. It's called cmus.

17 Sep 2019 5:19am GMT

Delete Files That Have Not Been Accessed For A Given Time On Linux

This brief guide explains how to delete files that have not been accessed for a given time using Tmpwatch or Tmpreaper on Linux.
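Neither tool ships on every distro, so here is a sketch using GNU find as a portable equivalent, with the Tmpwatch/Tmpreaper invocations shown as hedged comments (the cache directory and filenames are hypothetical, and the tool flags are from their man pages, not from this guide):

```shell
# Hypothetical cache directory; simulate one stale file and one fresh one.
mkdir -p /tmp/demo-cache
touch -a -d "40 days ago" /tmp/demo-cache/stale.log   # set atime 40 days back
touch /tmp/demo-cache/fresh.log

# Portable equivalent with GNU find: delete files not accessed in 30+ days.
find /tmp/demo-cache -type f -atime +30 -print -delete

# With tmpwatch (time in hours) or tmpreaper installed, roughly:
#   tmpwatch --atime 720 /tmp/demo-cache
#   tmpreaper 30d /tmp/demo-cache
```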

17 Sep 2019 4:05am GMT

Why Debian Is the Gold Standard of Upstream Desktop Linux

The last decade or so has seen Debian largely repositioned as an upstream distribution, making it an essential component of the desktop Linux infrastructure. Does your distro have "Debian Inside"?

17 Sep 2019 2:51am GMT

Wi-Fi 6 Launches Officially for the Next Generation of Wi-Fi

The Wi-Fi Alliance announced today the availability of the Wi-Fi CERTIFIED 6 certification program for vendors to provide customers with the latest and greatest Wi-Fi experience.

17 Sep 2019 1:36am GMT

How to Install Ranger Terminal File Manager on Linux

Ranger is a lightweight and powerful file manager that works in a terminal window. It comes with Vi key bindings. It offers a smooth way to move through directories, view files and their contents, or open an editor to make changes to files.

17 Sep 2019 12:22am GMT

16 Sep 2019

LXer Linux News

Copying large files with Rsync, and some misconceptions

There is a notion that a lot of people working in the IT industry often copy & paste from internet howtos. We all do it, and the copy & paste itself is not a problem. The problem is when we run things without understanding them. Some years ago, a friend who used to work on […]

16 Sep 2019 11:08pm GMT

SAR – Linux System Performance Monitoring Tool (Check Linux System Activity Report Using the SAR Command)

The sar command allows you to check various historical performance data of the Linux system.
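For instance (assuming the sysstat package, which provides sar, is installed; the sketch is guarded so it degrades cleanly, and archive paths like /var/log/sa/saDD vary by distro):

```shell
# A few common sar invocations, guarded in case sysstat is absent.
if command -v sar >/dev/null 2>&1; then
    sar -u 1 2                      # CPU utilization: 2 samples, 1 second apart
    sar -r 1 1                      # a memory usage snapshot
    # sar -q -f /var/log/sa/sa15    # historical load averages for the 15th
else
    echo "sysstat not installed"
fi
```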

16 Sep 2019 9:53pm GMT

Constraint programming by example

There are many different ways to solve problems in computing. You might "brute force" your way to a solution by calculating as many possibilities as you can, or you might take a procedural approach and carefully establish the known factors that influence the correct answer. In constraint programming, a problem is viewed as a series of limitations on what could possibly be a valid solution.

16 Sep 2019 8:39pm GMT

How to Convert Video Formats on Linux

There are a few ways to convert video files on Linux. If you're a fan of command line tools, check out our FFMPEG video conversion guide. This guide is going to focus on HandBrake, a powerful graphical video conversion tool to convert video from and to many formats such as MP4, AVI, WebM and many more.
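For the command-line route the article mentions, a minimal FFmpeg sketch (guarded in case ffmpeg is not installed; it synthesizes a one-second test clip with the lavfi testsrc source rather than assuming a real input file, and pins the mpeg4 encoder since codec availability varies by build):

```shell
if command -v ffmpeg >/dev/null 2>&1; then
    # Synthesize a one-second test clip, then convert it to another container.
    ffmpeg -y -loglevel error -f lavfi -i testsrc=duration=1:size=128x72:rate=25 \
           -c:v mpeg4 /tmp/demo.avi
    ffmpeg -y -loglevel error -i /tmp/demo.avi -c:v mpeg4 /tmp/demo.mp4   # AVI -> MP4
    ls -l /tmp/demo.avi /tmp/demo.mp4
else
    echo "ffmpeg not installed"
fi
```

With a real source file you would simply replace the synthesized clip with `-i input.avi` and let the output extension pick the container.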

16 Sep 2019 7:25pm GMT

Linux Kernel 5.3 Officially Released, Here's What's New

Linus Torvalds announced today the release of the Linux 5.3 kernel series, a major release that brings several new features, dozens of improvements, and updated drivers.

16 Sep 2019 6:10pm GMT

Linux commands to display your hardware information

There are many reasons you might need to find out details about your computer hardware. For example, if you need help fixing something and post a plea in an online forum, people will immediately ask you for specifics about your computer. Or, if you want to upgrade your computer, you'll need to know what you have and what you can have. You need to interrogate your computer to discover its specifications. The easiest way to do that is with one of the standard Linux GUI programs.
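On the command line, a few standard utilities cover most of this (a sketch; availability varies slightly by distro, so lspci is guarded since the pciutils package is not always installed, and lsblk may show nothing inside a container):

```shell
lscpu | head -n 5                # CPU architecture, model, core counts
free -h                          # memory, in human-readable units
lsblk || true                    # block devices and partitions (may be empty in a container)
{ command -v lspci >/dev/null 2>&1 && lspci | head -n 5; } || true   # PCI devices, if pciutils is present
```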

16 Sep 2019 4:56pm GMT

14 Sep 2019

Kernel Planet

Matthew Garrett: It's time to talk about post-RMS Free Software

Richard Stallman has once again managed to demonstrate incredible insensitivity[1]. There's an argument that in a pure technical universe this is irrelevant and we should instead only consider what he does in free software[2], but free software isn't a purely technical topic - the GNU Manifesto is nakedly political, and while free software may result in better technical outcomes it is fundamentally focused on individual freedom and will compromise on technical excellence if otherwise the result would be any compromise on those freedoms. And in a political movement, there is no way that we can ignore the behaviour and beliefs of that movement's leader. Stallman is driving away our natural allies. It's inappropriate for him to continue as the figurehead for free software.

But I'm not calling for Stallman to be replaced. If the history of social movements has taught us anything, it's that tying a movement to a single individual is a recipe for disaster. The FSF needs a president, but there's no need for that person to be a leader - instead, we need to foster an environment where any member of the community can feel empowered to speak up about the importance of free software. A decentralised movement about returning freedoms to individuals can't also be about elevating a single individual to near-magical status. Heroes will always end up letting us down. We fix that by removing the need for heroes in the first place, not attempting to find increasingly perfect heroes.

Stallman was never going to save us. We need to take responsibility for saving ourselves. Let's talk about how we do that.

[1] There will doubtless be people who will leap to his defense with the assertion that he's neurodivergent and all of these cases are consequences of that.

(A) I am unaware of a formal diagnosis of that, and I am unqualified to make one myself. I suspect that basically everyone making that argument is similarly unqualified.
(B) I've spent a lot of time working with him to help him understand why various positions he holds are harmful. I've reached the conclusion that it's not that he's unable to understand, he's just unwilling to change his mind.

[2] This argument is, obviously, bullshit


14 Sep 2019 11:57am GMT

10 Sep 2019

Kernel Planet

Davidlohr Bueso: Linux v5.2: Performance Goodies

locking/rwsem: optimize trylocking for the uncontended case

This applies the idea that in most cases a rwsem will be uncontended (single threaded). For example, experimentation showed that page fault paths really expect this. The change itself essentially stops the code from re-reading a cacheline in a tight loop over and over. Note however that this can be a double edged sword, as microbenchmarks have shown performance deterioration with high task counts, albeit mainly in pathological workloads.
[Commit ddb20d1d3aed a338ecb07a33]

lib/lockref: limit number of cmpxchg loop retries

Unbounded loops are rather frowned upon, especially ones doing CAS operations. As such, Linus suggested adding an arbitrary upper bound to the loop to force the slowpath (spinlock fallback), which was seen to improve performance in an ad-hoc testcase on hardware that incurs the loop retry game.
[Commit 893a7d32e8e0]

rcu: avoid unnecessary softirqs when system is idle

On an idle system with no pending callbacks, rcu softirqs to process callbacks were being triggered repeatedly. Specifically, the mismatch between cpu_no_qs and core_need_rq was addressed.
[Commit 671a63517cf9]

rcu: fix potential cond_resched() slowdowns

When using the jiffies_till_sched_qs kernel boot parameter, a bug left jiffies_to_sched_qs uninitialized as zero, which negatively impacts cond_resched().
[Commit 6973032a602e]

mm: improve vmap allocation

Doing a vmalloc can be quite slow at times, and since it is done with preemption disabled, it can affect workloads that are sensitive to this. The problem lies in the fact that finding a new VA area requires iterating the busy list until a suitable hole is found between two busy areas. The changes propose the always reliable red-black tree to keep blocks sorted by their offsets, along with a list keeping the free space in order of increasing addresses.
[Commit 68ad4a330433 68571be99f32]


mm/gup: safe usage of get_user_pages_fast() with DAX

Users of get_user_pages_fast() get potential performance benefits over its non-fast cousin by avoiding mmap_sem. However, drivers such as rdma can pin these pages for a significant amount of time, which causes a number of issues for the filesystem, as long-term referenced pages block a number of critical operations and are known to mess up DAX. A new FOLL_LONGTERM flag is added and checked accordingly; this also means that other users such as xdp can now be converted to gup_fast.
[Commit 932f4a630a69 b798bec4741b 73b0140bf0fe 7af75561e171 9fdf4aa15673 664b21e717cf f3b4fdb18cb5 ]

lib/sort: faster and smaller

Because CONFIG_RETPOLINE has made indirect calls much more expensive, these changes reduce the number of indirect calls made by the library sort functions, lib/sort and lib/list_sort. A number of optimizations and clever tricks are used, such as a more efficient bottom-up heapsort and playing nicer with store buffers.
[Commit 37d0ec34d111 22a241ccb2c1 8fb583c4258d 043b3f7b6388 b5c56e0cdd62]

ipc/mqueue: make msg priorities truly O(1)

By keeping the pointer to the tree's rightmost node, the process of consuming a message can be done in constant time, instead of logarithmic.
[Commit a5091fda4e3c]

x86/fpu: load FPU registers on return to userland

This is a large, 27-patch cleanup and optimization that loads fpu registers only on return to userspace, instead of upon every context switch. This means that tasks that remain in kernel space do not load the registers at all. Accessing the fpu registers in the kernel now requires disabling preemption and bottom halves, for the scheduler and softirqs respectively.

x86/hyper-v: implement EOI optimization

Avoid a vmexit on EOI. This was seen to slightly improve IOPS when testing nvme disks with raid and ext4.
[Commit ba696429d290]

btrfs: improve performance on fsync of files with multiple hardlinks

A fix for a performance regression seen in pgbench, where fsync could be turned into a full transaction commit in order to avoid losing hard links and new ancestors of the fsynced inode.
[Commit b8aa330d2acb]

fsnotify: fix unlink performance regression

This restores an unlink performance optimization that avoids take_dentry_name_snapshot().
[Commit 4d8e7055a405]

block/bfq: do not merge queues on flash storage with queuing

Disable queue merging on non-rotational devices with internal queueing, thus boosting throughput on interleaved IO.
[Commit 8cacc5ab3eac]

10 Sep 2019 7:26pm GMT

08 Sep 2019

Kernel Planet

James Bottomley: The Mythical Economic Model of Open Source

It has become fashionable today to study open source through the lens of economic benefits to developers and sometimes draw rather alarming conclusions. It has also become fashionable to assume a business model tie and then berate the open source community, or their licences, for lack of leadership when the business model fails. The purpose of this article is to explain, in the first part, the fallacy of assuming any economic tie in open source at all and, in the second part, go on to explain how economics in open source is situational and give an overview of some of the more successful models.

Open Source is a Creative Intellectual Endeavour

All the creative endeavours of humanity, like art, science or even writing code, are often viewed as activities that produce societal benefit. Logically, therefore, the people who engage in them are seen as benefactors of society, but assuming people engage in these endeavours purely to benefit society is mostly wrong. People engage in creative endeavours because it satisfies some deep need within themselves to exercise creativity and solve problems often with little regard to the societal benefit. The other problem is that the more directed and regimented a creative endeavour is, the less productive its output becomes. Essentially to be truly creative, the individual has to be free to pursue their own ideas. The conundrum for society therefore is how do you harness this creativity for societal good if you can't direct it without stifling the very creativity you want to harness? Obviously society has evolved many models that answer this (universities, benefactors, art incubation programmes, museums, galleries and the like) with particular inducements like funding, collaboration, infrastructure and so on.

Why Open Source development is better than Proprietary

Simply put, the Open Source model, with its huge freedom for developers to decide direction and great opportunities for collaboration, stimulates the intellectual creativity of those developers to a far greater extent than a regimented project plan with a specific task within it. The most creatively deadening job for any engineer is to find themselves strictly bound within the confines of a project plan for everything. This, by the way, is why simply allowing a percentage of paid time for participating in Open Source seems to enhance input to proprietary projects: the liberated creativity has a knock-on effect even in regimented development. However, the goal for any corporation dependent on code development should obviously be to go beyond the knock-on effect and actually employ open source methodologies everywhere high creativity is needed.

What is Open Source?

Open Source has its origin in code sharing models: permissive from BSD and reciprocal from GNU. However, one of its great values is that the reasons people do open source today aren't the same reasons the framework was created in the first place. Today Open Source is a framework which stimulates creativity among developers and helps them create communities, provides economic benefits to corporations (provided they understand how to harness them) and produces a great societal good in general in terms of published reusable code.

Economics and Open Source

As I said earlier, the framework of Open Source has no tie to economics, in the same way things like artistic endeavour don't. It is possible for a great artist to make money (as Picasso did), but it's equally possible for a great artist to live their whole life in penury (as van Gogh did). The demonstration of the analogy is that trying to measure the greatness of the art by the income of the artist is completely wrong and shortsighted. Developing the ability to exploit your art for commercial gain is an additional skill an artist can develop (or not, as they choose); it's also an ability they could fail at, and in all cases it bears no relation to the societal good their art produces. In precisely the same way, finding an economic model that allows you to exploit open source (either individually or commercially) is firstly a matter of choice (if you have other reasons for doing Open Source, there's no need to bother) and secondly not a guarantee of success, because not all models succeed. Perhaps the easiest way to appreciate this is through the lens of personal history.

Why I got into Open Source

As a physics PhD student, I'd always been interested in how operating systems functioned, but thanks to the BSD lawsuit and being in the UK I had no access to the actual source code. When Linux came along as a distribution in 1992, it was a revelation: not only could I read the source code but I could have a fully functional UNIX like system at home instead of having to queue for time to write up my thesis in TeX on the limited number of department terminals.

After completing my PhD I was offered a job looking after computer systems in the department, and my first success was shaving a factor of ten off the computing budget by buying cheap Pentium systems running Linux instead of proprietary UNIX workstations. This success was nearly derailed by an NFS bug in Linux, but finding and fixing the bug (and getting it upstream into the 1.0.2 kernel) cemented the budget savings and proved to the department that we could handle this new technology for a fraction of the cost of the old. It also confirmed my desire to poke around in the operating system, which I continued to do even as I moved to America to work on proprietary software.

In 2000 I got my first Open Source break when the product I'd been working on got sold to a Silicon Valley startup, SteelEye, whose business plan was to bring High Availability to Linux. As the only person on the team with an Open Source track record, I became first the Architect and later CTO of the company, with my first job being to make the somewhat eccentric Linux SCSI subsystem work for the shared SCSI clusters LifeKeeper then used. Getting SCSI working led to fun interactions with the Linux community, an invitation to present on fixing SCSI at the 2002 Kernel Summit, and the maintainership of SCSI in 2003. From that point, working on upstream open source became a fixture of my job requirements, and as I progressed through Novell, Parallels and now IBM it also became a quality sought by employers.

I have definitely made some money consulting on Open Source, but it's been dwarfed by my salary which does get a boost from my being an Open Source developer with an external track record.

The Primary Contributor Economic Models

Looking at the active contributors to Open Source, the primary models are these: either your job description includes working on designated open source projects, so you're paid to contribute as your day job; or you were hired because of what you've already done in open source, and contributing more is a tolerated use of your employer's time. A third, and by far smaller, group is people who work full-time on Open Source but fund themselves either by shared contributions like Patreon or Tidelift or by actively consulting on their projects. However, these models cover existing contributors, and they're not really a route to becoming a contributor: employers like certainty, so they're unlikely to hire someone with no track record to work on open source, and are probably not going to tolerate use of their time for developing random open source projects. This means that the route to becoming a contributor, like the route to becoming an artist, is to begin in your own time.

Users versus Developers

Open Source, by its nature, is built by developers for developers. This means that although the primary consumers of open source are end users, they get pretty much no say in how the project evolves. This lack of user involvement has been lamented over the years, especially in projects like the Linux Desktop, but no real community solution has ever been found. The bottom line is that users often don't know what they want and even if they do they can't put it in technical terms, meaning that all user driven product development involves extensive and expensive product research which is far beyond any open source project. However, this type of product research is well within the ability of most corporations, who can also afford to hire developers to provide input and influence into Open Source projects.

Business Model One: Reflecting the Needs of Users

In many ways, this has become the primary business model of open source. The theory is simple: develop a traditional customer focussed business strategy and execute it by connecting the gathered opinions of customers to the open source project, in exchange for revenue from subscriptions, support or even early shipped product. The business value to the end user is simple: it's the business value of the product tuned to their needs, given that they wouldn't be prepared to develop the skills to interact with the open source developer community themselves. This business model starts to break down if the end users acquire developer sophistication, as happens with Red Hat and Enterprise users. However, this can still be combated by making sure it's economically unfeasible for a single end user to match the breadth of the offering (the entire distribution). In this case, the ability of the end user to become involved in individual open source projects which matter to them is actually a better and cheaper way of doing product research and feeds back into the synergy of this business model.

This business model entirely breaks down when, as in the case of the cloud service provider, the end user becomes big enough and technically sophisticated enough to run their own distributions and sees doing this as a necessary adjunct to their service business. This means that you can no longer escape the technical sophistication of the end user by pursuing a breadth-of-offerings strategy.

Business Model Two: Drive Innovation and Standardization

Although venture capitalists (VCs) pay lip service to the idea of constant innovation, this isn't actually what they do as a business model: they tend to take an innovation and then monetize it. The problem is this model doesn't work for open source: retaining control of an open source project requires a constant stream of innovation within the source tree itself. Single innovations get attention, but unless they're followed up with another innovation, they tend to give the impression your source tree is stagnating, encouraging forks. However, the most useful property of open source is that by sharing a project and encouraging contributions, you can obtain a constant stream of innovation from a well managed community. Once you have a constant stream of innovation to show, forking the project becomes much harder, even for a cloud service provider with hundreds of developers, because they must show they can match the innovation stream in the public tree. Add to that standardization, which in open source simply means getting your project adopted for use by multiple consumers (say two different clouds, or a range of industry). Further, if the project is largely run by a single entity and properly managed, seeing the incoming innovations allows you to recruit the best innovators, thus giving you direct ownership of most of the innovation stream. In the early days, you make money simply by offering user connection services as in Business Model One, but the ultimate goal is likely acquisition for the talent possessed, which is a standard VC exit strategy.

All of this points to the hypothesis that the current VC model is wrong. Instead of investing in people with the ideas, you should be investing in people who can attract and lead others with ideas.

Other Business Models

Although the models listed above have proven successful over time, they're by no means the only possible ones. As the space of potential business models gets explored, it could turn out they're not even the best ones, meaning the potential innovation a savvy business executive might bring to open source is newer and better business models.

Conclusions

Business models are optional extras with open source, and just because you have a successful open source project does not mean you'll have an equally successful business model unless you put sufficient thought into constructing and maintaining it. Thus a successful open source startup requires three elements: a sound business model (or someone who can evolve one), a solid community leader and manager, and someone with technical ability in the problem space.

If you like working in Open Source as a contributor, you don't necessarily have to have a business model at all and you can often simply rely on recognition leading to opportunities that provide sufficient remuneration.

Although there are several well known business models for exploiting open source, there's no reason you can't create your own different one but remember: a successful open source project in no way guarantees a successful business model.

08 Sep 2019 9:35am GMT

04 Sep 2019

Kernel Planet

Linux Plumbers Conference: LPC waiting list closed; just a few days until the conference

The waiting list for this year's Linux Plumbers Conference is now closed. All of the spots available have been allocated, so anyone who is not registered at this point will have to wait for next year. There will be no on-site registration. We regret that we could not accommodate everyone. The good news is that all of the microconferences, refereed talks, Kernel summit track, and Networking track will be recorded on video and made available as soon as possible after the conference. Anyone who could not make it to Lisbon this year will at least be able to catch up with what went on. Hopefully those who wanted to come will make it to a future LPC.

For those who are attending, we are just a few days away; you should have received an email with more details. Beyond that, the detailed schedule is available. There are also some tips on using the metro to get to the venue. As always, please send any questions or comments to "contact@linuxplumbersconf.org".

04 Sep 2019 9:31pm GMT

30 Aug 2019

Kernel Planet

Pete Zaitcev: Docker Block Storage... say what again?

Found an odd job posting at the website of Rancher:

What you will be doing

  • Design and implement a block storage solution for Docker containers
  • Working on development of various aspects of the storage stack: consistency, reliability, replication and performance
  • Using Go for product development

Okay. Since they talk about consistency and replication together, this thing probably provides an actual service, in addition to the necessary orchestration. Kind of like the ill-fated Sheepdog. They may underestimate the amount of work necessary, sure. Look no further than Ceph RBD. Remember how much work it took for a genius like Sage? But a certain arrogance is essential in a start-up, and Rancher only employs 150 people.

Also, nobody is dumb enough to write orchestration in Go, right? So this probably is not just a layer on top of Ceph or whatever.

Well, it's still possible that it's merely an in-house equivalent of OpenStack Cinder, and they want it in Go because they are a Go house and if you have a hammer everything looks like a nail.

Either way, here's the main question: what does block storage have to do with Docker?

Docker, as far as I know, is a container runtime. And containers do not consume block storage. They plug into a Linux kernel that presents POSIX to them, only namespaced. Granted, certain applications consume block storage through Linux, that is why we have O_DIRECT. But to roll out a whole service like this just for those rare applications... I don't think so.

Why would anyone want block storage for (Docker) containers? Isn't it absurd? What am I missing and what is Rancher up to?

UPDATE:

The key thing to remember here is that while running containers aren't using block storage, Docker containers are distributed as disk images, and they get their own root filesystem by default. Therefore, any time anyone adds a Docker container, they have to allocate a block device and dump the application image into it. So, yes, it is some kind of Docker Cinder they are trying to build.

See Red Hat docs about managing Docker block storage in Atomic Host (h/t penguin42).

30 Aug 2019 5:52pm GMT

15 Aug 2019

Kernel Planet

Pete Zaitcev: POST, PUT, and CRUD

Anyone who ever worked with object storage knows that PUT creates, GET reads, POST updates, and DELETE deletes. Naturally, right? POST is such a strange verb with oddball encodings that it's perfect to update, while GET and PUT are matching twins like read(2) and write(2). Imagine my surprise, then, when I found that the official definition of RESTful makes POST create objects and PUT update them. There is even a FAQ, which uses sophistry and appeals to the authority of RFCs in order to justify this.

So, in the world of RESTful solipsism, you would upload an object foo into a bucket buk by issuing "POST /buk?obj=foo" [1], while "PUT /buk/foo" applies to pre-existing resources. Although they had to admit that RFC 2616 assumes that PUT creates.

All this goes to show, too much dogma is not good for you.

[1] It's worse, actually. They want you to do "POST /buk", and receive a resource ID, generated by the server, and use that ID to refer to the resource.
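To make the contrast concrete, here are the two conventions side by side as raw request lines ("buk" and "foo" are the post's hypothetical names; nothing here talks to a real server, the block just prints a summary):

```shell
# Write the comparison to a scratch file, then display it.
cat > /tmp/rest-conventions.txt <<'EOF'
Object-storage convention (the one storage folks expect):
  PUT  /buk/foo    create: upload object foo into bucket buk
  POST /buk/foo    update: modify foo (e.g. its metadata)

Canonical "RESTful" convention (the one the FAQ defends):
  POST /buk        create: the server mints an ID and returns it
  PUT  /buk/foo    update foo (though RFC 2616 assumes PUT may create)
EOF
cat /tmp/rest-conventions.txt
```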

15 Aug 2019 7:10pm GMT

14 Aug 2019

Kernel Planet

Greg Kroah-Hartman: Patch workflow with mutt - 2019

Given that the main development workflow for most kernel maintainers is with email, I spend a lot of time in my email client. For the past few decades I have used (mutt), but every once in a while I look around to see if there is anything else out there that might work better.

One project that looks promising is (aerc), which was started by (Drew DeVault). It is a terminal-based email client written in Go, and relies on a number of other Go libraries to handle the "grungy" work of dealing with IMAP, email parsing, and the other fun free-flow text parsing that emails require.

aerc isn't in a usable state for me just yet, but Drew asked if I could document exactly how I use an email client for my day-to-day workflow to see what needs to be done to aerc to have me consider switching.

Note, this isn't a criticism of mutt at all. I love the tool, and spend more time using that userspace program than any other. But as anyone who knows email clients will tell you, they all suck; it's just that mutt sucks less than everything else (that's literally their motto).

I did a (basic overview of how I apply patches to the stable kernel trees quite a few years ago) but my workflow has evolved over time, so instead of just writing a private email to Drew, I figured it was time to post something showing others just how the sausage really is made.

Anyway, my email workflow can be divided up into three different primary things that I do.

Given that all stable kernel patches need to already be in Linus's kernel tree first, the workflow of the how to work with the stable tree is much different from the new patch workflow.

Basic email reading

All of my email ends up in one of two "inboxes" on my local machine. One is for everything that is sent directly to me (either with To: or Cc:), as well as a number of mailing lists where I make sure to read every message because I am a maintainer of those subsystems (like (USB), or (stable)). The second inbox consists of other mailing lists that I do not read all messages of, but review as needed, and can be referenced when I need to look something up. Those mailing lists are the "big" linux-kernel mailing list, to ensure I have a local copy to search from when I am offline (due to traveling), as well as other "minor" development mailing lists that I like to keep a copy of locally, like linux-pci, linux-fsdevel, and a few other smaller vger lists.

I keep these maildir folders synced with the mail server using mbsync, which works really well and is much faster than offlineimap, which I used for many, many years but which ends up being really slow when you do not live on the same continent as the mail server. Luis's recent post about switching to mbsync finally pushed me to take the time to configure it all properly, and I am glad that I did.
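For anyone curious, a minimal mbsync configuration for this kind of setup looks roughly like the following; the host name, user, and paths here are made-up placeholders, not my actual settings:

```
# Hypothetical ~/.mbsyncrc sketch; adjust account details to taste
IMAPAccount work
Host imap.example.com
User greg
PassCmd "pass show mail/work"
SSLType IMAPS

IMAPStore work-remote
Account work

MaildirStore work-local
Path ~/mail/
Inbox ~/mail/INBOX
SubFolders Verbatim

Channel inbox
Master :work-remote:
Slave :work-local:
Patterns INBOX
SyncState *
```

Running mbsync inbox then pulls everything down into the local maildir.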

Let's ignore my "lists" inbox, as that should be able to be read by any email client by just pointing it at it. I do this with a simple alias:

alias muttl='mutt -f ~/mail_linux/'

which allows me to type muttl at any command line to instantly bring it up:

What I spend most of the time in is my "main" mailbox, and that is in a local maildir that gets synced when needed in ~/mail/INBOX/. A simple mutt on the command line brings this up:

Yes, everything just ends up in one place. In handling my mail, I prune relentlessly. Everything ends up in one of 3 states for what I need to do next:

  • respond to it
  • read it, then delete or archive it
  • act on it later (review a patch, apply a patch, and so on)

Everything that does not require a response, or I've already responded to it, gets deleted from the main INBOX at that point in time, or saved into an archive in case I need to refer back to it again (like mailing list messages).

That last state makes me save the message into one of two local maildirs, todo and stable. Everything in todo is a new patch that I need to review, comment on, or apply to a development tree. Everything in stable is something that has to do with patches that need to get applied to the stable kernel tree.

Side note, I have scripts that run frequently that email me any patches that need to be applied to the stable kernel trees, when they hit Linus's tree. That way I can just live in my email client and have everything that needs to be applied to a stable release in one place.

I sweep my main INBOX every few hours, and sort things out, either quickly responding, deleting, archiving, or saving into the todo or stable directory. I don't achieve a constant "inbox zero", but if I only have 40 or so emails in there, I am doing well.

So, for this main workflow, I need an easy way to read, respond to, delete, archive, and save messages into other mailboxes.

These are all tasks that I bet almost everyone needs to do all the time, so a tool like aerc should be able to do that easily.

A note about filtering. As everything comes into one inbox, it is easier to filter that mbox by sender, mailing list, or subject so I can process related messages all at once.

As an example, I want to read all of the messages sent to the linux-usb mailing list right now, and not see anything else. To do that, in mutt, I press l (limit) which brings up a prompt for a filter to apply to the mbox. This ability to limit messages to one type of thing is really powerful and I use it in many different ways within mutt.
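The limit prompt accepts mutt's full pattern language, which is where much of the power comes from. A few example patterns (the exact addresses here are just illustrative):

```
~C linux-usb          # To: or Cc: contains "linux-usb"
~s PATCH              # Subject contains "PATCH"
~N ~C linux-api       # new (unread) messages sent to linux-api
~f gregkh             # From contains "gregkh"
```

Patterns can be combined and negated, so very specific slices of a huge mailbox are a few keystrokes away.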

Here's an example of me just viewing all of the messages that are sent to the linux-usb mailing list, and saving them off after I have read them:

This isn't that complex, but it has to work quickly and well on mailboxes that are really really big. As an example, here's me opening my "all lists" mbox and filtering on the linux-api mailing list messages that I have not read yet. It's really fast as mutt caches lots of information about the mailbox and does not require reading all of the messages each time it starts up to generate its internal structures.

All messages that I want to save to the todo directory I can save with a two-keystroke sequence, .t, which copies the message there automatically.

Again, that's a binding I set up years ago: , jumps to the specific mbox, and . copies the message to that location.
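Those bindings boil down to a couple of lines of muttrc; something along these lines (the folder names here are illustrative, use whatever paths you keep locally):

```
# jump to the todo mailbox with ,t
macro index ,t "<change-folder>=todo<enter>"
# copy the current message to the todo mailbox with .t
macro index .t "<copy-message>=todo<enter>"
```

The <change-folder> and <copy-message> functions are standard mutt; the comma and dot prefixes are just my own convention.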

Now you see why using mutt is not exactly obvious, those bindings are not part of the default configuration and everyone ends up creating their own custom key bindings for whatever they want to do. It takes a good amount of time to figure this out and set things up how you want, but once you are over that learning curve, you can do very complex things easily. Much like an editor (emacs, vim), you can configure them to do complex things easily, but getting to that level can take a lot of time and knowledge. It's a tool, and if you are going to rely on it, you should spend the time to learn how to use your tools really well.

Hopefully aerc can get to this level of functionality soon. Odds are everyone else does something much like this, as my use-case is not unusual.

Now let's get to the unusual use cases, the fun things:

Development patch review and apply

When I decide it's time to review and apply patches, I do so by subsystem (as I maintain a number of different ones). As all pending patches are in one big maildir, I filter the messages by the subsystem I care about at the moment, and save all of the messages out to a local mbox file that I call s (hey, naming is hard, it gets worse, just wait…)

So, in my linux/work/ local directory, I keep the development trees for different subsystems like usb, char-misc, driver-core, tty, and staging.

Let's look at how I handle some staging patches.

First, I go into my ~/linux/work/staging/ directory, which I will stay in while doing all of this work. I open the todo mbox with a quick ,t pressed within mutt (a macro I picked up somewhere long ago; I don't remember where…), then filter all staging messages and save them to a local mbox with the following keystrokes:

mutt
,t
l staging
T
s ../s

Yes, I could skip the l staging step, and just do T staging instead of T, but it's nice to see what I'm going to save off first before doing so:

Now all of those messages are in a local mbox file that I can open with a single keystroke, 's' on the command line. That is an alias:

alias s='mutt -f ../s'

I then dig around in that mbox, sort the patches by driver type to see everything for one driver at once by filtering on the name, and then save those messages to another mbox called 's1' (see, I told you the names got worse).

s
l erofs
T
s ../s1

I have lots of local mbox files all "intuitively" named 's1', 's2', and 's3'. Of course I have aliases to open those files quickly:

alias s1='mutt -f ../s1'
alias s2='mutt -f ../s2'
alias s3='mutt -f ../s3'

I have a number of these mbox files as sometimes I need to filter even further by patch set, or other things, and saving them all to different mboxes makes things go faster.

So, all the erofs patches are in one mbox, let's open it up and review them, and save the patches that look good enough to apply to another mbox:

Turns out that not all patches need to be dealt with right now (moving erofs out of the staging directory requires other people to review it), so I just save those messages back to the todo mbox:

Now I have a single patch that I want to apply, but I need to add some acks that the maintainers of erofs provided. I do this by editing the "raw" message directly from within mutt. I open the individual messages from the maintainers, cut their Reviewed-by lines, and then edit the original patch and add those lines to it:

Some kernel maintainers right now are screaming something like "Automate this!", "Patchwork does this for you!", "Are you crazy?" Yeah, this is one place that I need to work on, but the time involved to do this is not that much and it's not common that others actually review patches for subsystems I maintain, unfortunately.

The ability to edit a single message directly within my email client is essential. I end up having to fix up changelog text, edit the subject line to be correct, fix the mail headers to not do foolish things with text formats, and in some cases, edit the patch itself for when it is corrupted or needs to be fixed (I want a LinkedIn skill badge for "can edit diff files by hand and have them still work").

So one hard requirement I have is "editing a raw message from within the email client." If an email client can not do this, it's a non-starter for me, sorry.

So we now have a single patch that needs to be applied to the tree. I am already in the ~/linux/work/staging/ directory, and on the correct git branch for where this patch needs to go (how I handle branches and how patches move between them deserve a totally different blog post…)

I can apply this patch in one of two different ways, using git am -s ../s1 on the command line, piping the whole mbox into git and applying the patches directly, or I can apply them within mutt individually by using a macro.

When I have a lot of patches to apply, I just pipe the mbox file to git am -s as I'm comfortable with that, and it goes quick for multiple patches. It also works well as I have lots of different terminal windows open in the same directory when doing this and I can quickly toggle between them.

But we are talking about email clients at the moment, so here's me applying a single patch to the local git tree:

All it took was hitting the L key. That key is set up as a macro in my mutt configuration file with a single line:

macro index L '| git am -s'\n

This macro pipes the output of the current message to git am -s.

The ability of mutt to pipe the current message (or messages) to external scripts is essential for my workflow in a number of different places. Not having to leave the email client but being able to run something else with that message, is a very powerful functionality, and again, a hard requirement for me.

So that's it for applying development patches. It's a bunch of the same tasks over and over:

  • filter the pending patches by subsystem and save them to a local mbox
  • review them, editing the raw messages where needed
  • apply the good ones to the proper git tree

Doing that all within the email program and being able to quickly get in, and out of the program, as well as do work directly from the email program, is key.

Of course I do a "test build and sometimes test boot and then push git trees and notify author that the patch is applied" set of steps when applying patches too, but those are outside of my email client workflow and happen in a separate terminal window.

Stable patch review and apply

The process of reviewing patches for the stable tree is much like the development patch process, but it differs in that I never use 'git am' for applying anything.

The stable kernel tree, while under development, is kept as a series of patches that need to be applied on top of the previous release. This series of patches is maintained by using a tool called quilt. Quilt is very powerful and easily handles sets of patches that need to be applied on top of a moving base. The tool is based on a crazy set of shell scripts written by Andrew Morton a long time ago; it is currently maintained by Jean Delvare and has been rewritten in Perl to make it more maintainable. It handles thousands of patches easily and quickly and is used by many developers to handle kernel patches for distributions as well as other projects.

I highly recommend it as it allows you to reorder, drop, add in the middle of the series, and manipulate patches in all sorts of ways, as well as create new patches directly. I do this for the stable tree as lots of times we end up dropping patches from the middle of the series when reviewers say they should not be applied, adding new patches where needed as prerequisites of existing patches, and other changes that with git, would require lots of rebasing.
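The central data structure quilt manages is nothing more than a plain-text series file, one patch filename per line, applied top to bottom, which is exactly why dropping or reordering a patch in the middle is so cheap. A toy illustration of that idea (this is just shell, not quilt itself):

```shell
# quilt keeps the patch stack in patches/series, one filename per line.
# Dropping a patch from the middle of the queue is just removing its line:
mkdir -p patches
printf '%s\n' fix-a.patch fix-b.patch fix-c.patch > patches/series
grep -v '^fix-b\.patch$' patches/series > patches/series.new
mv patches/series.new patches/series
cat patches/series
```

Reordering is the same trick: move lines around in the file, and quilt applies them in the new order.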

Rebasing a git tree does not work when you have developers working "down" from your tree. We usually have the rule in kernel development that a public tree never gets rebased, as otherwise no one can use it for development.

Anyway, the stable patches are kept as a quilt series in a repository that is kept under version control in git (complex, yeah, sorry). That queue can always be found here.

I do create a linux-stable-rc git tree that is constantly rebased from the stable queue for those who run test systems that can not handle quilt patches. That tree is found here and should never be used by anyone for anything other than automated testing. See this email for a bit more explanation of how these git trees should, and should not, be used.

With all that background information behind us, let's look at how I take patches that are in Linus's tree, and apply them to the current stable kernel queues:

First I open the stable mbox. Then I filter by everything that has upstream in the subject line. Then I filter again by alsa to only look at the alsa patches. I look at the individual patches, looking at the patch to verify that it really is something that should be applied to the stable tree and determine what order to apply the patches in based on the date of the original commit.

I then hit F to pipe the message to a script that looks up the Fixes: tag in the message to determine what stable tree, if any, the commit that this fix was contained in.
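The heart of that script is nothing fancy; a sketch of the lookup (the function name and the tree path are illustrative, not the real script):

```shell
# Sketch of the Fixes: lookup (illustrative, not my actual script).
# Reads an email on stdin and pulls the sha out of the first "Fixes:" tag.
extract_fixes_sha() {
    grep -i '^Fixes:' | head -n 1 | awk '{print $2}'
}

# Given that sha, Linus's tree can then be asked which release first
# contained the commit being fixed; everything from there forward needs it:
#   git -C ~/linux/linus describe --contains --match 'v*' "$sha"
```

The git describe output (something like v4.19-rc1~52) is what tells me how far back to go.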

In this example, the patch only should go back to the 4.19 kernel tree, so when I apply it, I know to stop at that place and not go further.

To apply the patch, I hit A which is another macro that I define in my mutt configuration

macro index A |'~/linux/stable/apply_it_from_email'\n
macro pager A |'~/linux/stable/apply_it_from_email'\n

It is defined "twice" because you can have different key bindings for when you are looking at a mailbox's index of all messages versus when you are looking at the contents of a single message.

In both cases, I pipe the whole email message to my apply_it_from_email script.

That script digs through the message, finds the git commit id of the patch in Linus's tree, and then runs a different script that takes the commit id, exports the patch associated with that id, and edits the message to add my signed-off-by to the patch, as well as dropping me into my editor to make any needed tweaks (sometimes files get renamed, so I have to fix that by hand). It also gives me one final chance to review the patch in my editor, which is usually easier than in the email client directly, as I have better syntax highlighting and can search and review the text better.

If all goes well, I save the file, and the script continues on and applies the patch to a bunch of stable kernel trees, one after another, adding the patch to the quilt series for each specific kernel version. To do all of this, I had to spawn a separate terminal window, as mutt does fun things to standard input/output when piping messages to a script, and I never could figure out how to do all of this without the extra spawned process.
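The first step of that script, finding the commit id, relies on the stable-submission convention that the forwarded patch carries a "commit <sha> upstream." line in its body. A sketch of that part (again illustrative, not the script verbatim):

```shell
# Stable candidate emails carry a line like:
#   commit 0123456789abcdef0123456789abcdef01234567 upstream.
# Pull that sha out of a message read on stdin:
upstream_commit() {
    sed -n 's/^commit \([0-9a-f]\{12,40\}\) upstream\.$/\1/p' | head -n 1
}

# The real script then exports the patch and queues it up, roughly:
#   git format-patch -1 --stdout "$sha" > p.patch   # then hand-edit
#   quilt import p.patch                            # in each stable queue
```

The export-and-import step repeats once per stable queue that still needs the fix.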

Here it is in action, as a video, because asciinema can't show multiple windows at the same time.

Once I have applied the patch, I save it away as I might need to refer to it again, and I move on to the next one.

This sounds like a lot of different steps, but I can process a lot of these relatively quickly. The patch review step is the slowest one here, as that of course can not be automated.

I later take those new patches that have been applied and run kernel build tests and other things before sending out emails saying they have been applied to the tree. But like with development patches, that happens outside of my email client workflow.

Bonus, sending email from the command line

In writing this up, I remembered that I do have some scripts that use mutt to send email out. I don't normally use mutt for this for patch reviews, as I use other scripts for that (ones that eventually got turned into git send-email), so it's not a hard requirement, but it is nice to be able to do a simple:

mutt -s "${subject}" "${address}" < "${msg}" >> error.log 2>&1

from within a script when needed.

Thunderbird also can do this, I have used:

thunderbird --compose "to='${address}',subject='${subject}',message=${msg}"

at times in the past when dealing with email servers that mutt can not connect to easily (i.e. gmail when using oauth tokens).

Summary of what I need from an email client

So, to summarize it all for Drew, here's my list of requirements for me to be able to use an email client for kernel maintainership roles:

  • Handle multiple local maildirs and mbox files, and move between them quickly
  • Filter/limit messages quickly, even in really big mailboxes
  • Custom key bindings and macros for saving and copying messages
  • Edit a raw message directly from within the client
  • Pipe a message (or messages) to external scripts

That's what I use for kernel development.

Oh, I forgot:

Bonus things that I have grown to rely on when using mutt are:

If you have made it this far and you aren't writing an email client, that's amazing; it must be a slow news day, which is a good thing. I hope this writeup helps show others how mutt can be used to handle developer workflows easily, and what is required of a good email client in order to be able to do all of this.

Hopefully other email clients can get to a state where they too can do all of this. Competition is good, and maybe aerc can get there someday.

14 Aug 2019 12:37pm GMT

10 Aug 2019

feedKernel Planet

Pete Zaitcev: Comment to 'О цифровой экономике и глобальных проблемах человечества' by omega_hyperon

I hear all of this every single day. These people take the clearly established trend of slowing scientific and technological progress and say: we know better what humanity needs. Take the money away from the undeserving and give it to people as smart as me, and progress will get going again. And we'll protect nature while we're at it! And capitalism is always to blame.

That civilization is treading water is not in dispute. But here are a couple of things this jerk keeps quiet about.

First, if you take the money away from Disney and give it to a research institute, there will be no money. He speaks Russian, and yet he failed to draw even that simple lesson from the collapse of the USSR. American science routed Soviet science in the era of the scientific-technological revolution above all because the capitalist economy provided the economic base for that science, while the socialist economy was a failure.

In general, if you compare the budgets of Apple and Disney with the budget of Housing and Urban Development and similar agencies, the difference is two orders of magnitude. If someone wants to give science more money, the answer is not to rob Disney but to stop handing out free housing to freeloaders. The slowdown of science and of capitalism go hand in hand, and both are caused by government policy, not by some Bitcoin.

Second, who even believes these charlatans anymore? They lectured us about the children in Africa for decades, yet over the course of the terrible famine in Ethiopia its population grew from 38 million to 75 million. The same thing happened with the polar bears. The forested area of the planet is growing. Suppose some forests in Brazil really were cut down for pasture... But who will believe even that?

This crisis of expertise is no joke. The fear of vaccines was created not by capitalism and Bitcoin, but by the rot and decay of the system of scientific research as a whole. He did not name the institute whose budget he compared with Disney's, but it would be interesting to know how many idlers are on its staff.

The collapse of science shows not only in how the public has lost its faith in scientists. The objective indicators have sagged too. The reproducibility of published results is very poor, and getting worse. Is Bitcoin to blame for that too?

On the whole, most of this whining strikes me as extremely harmful. If he cannot diagnose the causes of the crisis, the solutions he proposes will give us nothing, and biodiversity will not gain a thing.


10 Aug 2019 12:46pm GMT

02 Aug 2019

feedKernel Planet

Michael Kerrisk (manpages): man-pages-5.02 is released

I've released man-pages-5.02. The release tarball is available on kernel.org. The browsable online pages can be found on man7.org. The Git repository for man-pages is available on kernel.org.

This release resulted from patches, bug reports, reviews, and comments from 28 contributors. The release includes around 120 commits that change more than 50 pages.

The most notable of the changes in man-pages-5.02 is the following:


02 Aug 2019 9:04am GMT

25 Jul 2019

feedKernel Planet

Pete Zaitcev: Swift is 2 to 4 times faster than any competitor

Or so they say, at least for certain workloads.

In January of 2015 I led a project to evaluate and select a next-generation storage platform that would serve as the central storage (sometimes referred to as an active archive or tier 2) for all workflows. We identified the following features as being key to the success of the platform:

  • Utilization of erasure coding for data/failure protection (no RAID!)
  • Open hardware and the ability to mix and match hardware (a.k.a. support heterogeneous environments)
  • Open source core (preferred, but not required)
  • Self-healing in response to failures (no manual processes required, like replacing a drive)
  • Expandable online to exabyte-scale (no downtime for expansions or upgrades)
  • High availability / fault tolerance (no single point of failure)
  • Enterprise-grade support (24/7/365)
  • Visibility (dashboards to depict load, errors, etc.)
  • RESTful API access (S3/Swift)
  • SMB/NFS access to the same data (preferred, but not required)

In hindsight, I wish we had included two additional requirements:

  • Transparently tier and migrate data to and from public cloud storage
  • Span multiple geographic regions while maintaining a single global namespace

We spent the next ~1.5 years evaluating the following systems:

  • SwiftStack
  • Ceph (InkTank/RedHat/Canonical)
  • Scality
  • Cloudian
  • Caringo
  • Dell/EMC ECS
  • Cleversafe / IBM COS
  • HGST/WD ActiveScale
  • NetApp StorageGRID
  • Nexenta
  • Qumulo
  • Quantum Lattus
  • Quobyte
  • Hedvig
  • QFS (Quantcast File System)
  • AWS S3
  • Sohonet FileStore

SwiftStack was the only solution that literally checked every box on our list of desired features, but that's not the only reason we selected it over the competition.

The top three reasons behind our selection of SwiftStack were as follows:

  • Speed - SwiftStack was, by far, the highest-performing object storage platform: capable of line speed and 2-4x faster than competitors. The ability to move assets between our "tier 1 NAS" and "tier 2 object" with extremely high throughput was paramount to the success of the architecture.
  • [...]

Note: While SwiftStack 1space was not a part of the SwiftStack platform at the time of our evaluation and purchase, it would have been an additional deciding factor in favor of SwiftStack if it had been.

Interesting. It should be noted that the performance of Swift is a great match for some workloads, but not for others. In particular, Swift is weak on small-file workloads, such as Gnocchi, which writes a ton of 16-byte objects again and again. The overhead is a killer there, and not just on the wire: Swift has to update its accounting databases each and every time a write is done, so that "swift stat" shows things like quotas. Swift is also not particularly good at HPC-style workloads, which benefit from high bisection bandwidth, because we transfer all user data through so-called "proxy" servers. Unlike e.g. Ceph, Swift keeps the cluster topology hidden from the client, while a Ceph client actually tracks the ring changes, placement groups and their leaders, etc. But as we can see, once the object sizes start climbing and the number of clients increases, Swift rapidly approaches the wire speed.

I cannot help noticing that the architecture in question has a front-facing cache pool (tier 1), which is what the ultimate clients see instead of Swift. Most of the time, Swift is selected for its ability to serve tens of thousands of clients simultaneously, but not in this case. Apparently, the end-user invented ProxyFS independently.

There's no mention of Red Hat selling Swift in the post. Either it was not part of the evaluation at all, or the author forgot about it with the passage of time. He did list a bunch of rather weird and obscure storage solutions, though.

25 Jul 2019 2:38am GMT

18 Jul 2019

feedKernel Planet

Kees Cook: security things in Linux v5.2

Previously: v5.1.

Linux kernel v5.2 was released last week! Here are some security-related things I found interesting:

page allocator freelist randomization
While the SLUB and SLAB allocator freelists have been randomized for a while now, the overarching page allocator itself wasn't. This meant that anything doing allocation outside of the kmem_cache/kmalloc() would have deterministic placement in memory. This is bad both for security and for some cache management cases. Dan Williams implemented this randomization under CONFIG_SHUFFLE_PAGE_ALLOCATOR now, which provides additional uncertainty to memory layouts, though at a rather low granularity of 4MB (see SHUFFLE_ORDER). Also note that this feature needs to be enabled at boot time with page_alloc.shuffle=1 unless you have direct-mapped memory-side-cache (you can check the state at /sys/module/page_alloc/parameters/shuffle).

stack variable initialization with Clang
Alexander Potapenko added support via CONFIG_INIT_STACK_ALL for Clang's -ftrivial-auto-var-init=pattern option that enables automatic initialization of stack variables. This provides even greater coverage than the prior GCC plugin for stack variable initialization, as Clang's implementation also covers variables not passed by reference. (In theory, the kernel build should still warn about these instances, but even if they exist, Clang will initialize them.) Another notable difference between the GCC plugins and Clang's implementation is that Clang initializes with a repeating 0xAA byte pattern, rather than zero. (Though this changes under certain situations, like for 32-bit pointers which are initialized with 0x000000AA.) As with the GCC plugin, the benefit is that the entire class of uninitialized stack variable flaws goes away.

Kernel Userspace Access Prevention on powerpc
Like SMAP on x86 and PAN on ARM, Michael Ellerman and Russell Currey have landed support for disallowing access to userspace without explicit markings in the kernel (KUAP) on Power9 and later PPC CPUs under CONFIG_PPC_RADIX_MMU=y (which is the default). This is the continuation of the execute protection (KUEP) in v4.10. Now if an attacker tries to trick the kernel into any kind of unexpected access from userspace (not just executing code), the kernel will fault.

Microarchitectural Data Sampling mitigations on x86
Another set of cache memory side-channel attacks came to light, and were consolidated together under the name Microarchitectural Data Sampling (MDS). MDS is weaker than other cache side-channels (less control over target address), but memory contents can still be exposed. Much like L1TF, when one's threat model includes untrusted code running under Symmetric Multi Threading (SMT: more logical cores than physical cores), the only full mitigation is to disable hyperthreading (boot with "nosmt"). For all the other variations of the MDS family, Andi Kleen (and others) implemented various flushing mechanisms to avoid cache leakage.

unprivileged userfaultfd sysctl knob
Both FUSE and userfaultfd provide attackers with a way to stall a kernel thread in the middle of memory accesses from userspace by initiating an access on an unmapped page. While FUSE is usually behind some kind of access controls, userfaultfd hadn't been. To avoid various heap grooming and heap spraying techniques for exploiting Use-after-Free flaws, Peter Xu added the new "vm.unprivileged_userfaultfd" sysctl knob to disallow unprivileged access to the userfaultfd syscall.

temporary mm for text poking on x86
The kernel regularly performs self-modification with things like text_poke() (during stuff like alternatives, ftrace, etc). Before, this was done with fixed mappings ("fixmap") where a specific fixed address at the high end of memory was used to map physical pages as needed. However, this resulted in some temporal risks: other CPUs could write to the fixmap, or there might be stale TLB entries on removal that other CPUs might still be able to write through to change the target contents. Instead, Nadav Amit has created a separate memory map for kernel text writes, as if the kernel is trying to make writes to userspace. This mapping ends up staying local to the current CPU, and the poking address is randomized, unlike the old fixmap.

ongoing: implicit fall-through removal
Gustavo A. R. Silva is nearly done with marking (and fixing) all the implicit fall-through cases in the kernel. Based on the pull request from Gustavo, it looks very much like v5.3 will see -Wimplicit-fallthrough added to the global build flags and then this class of bug should stay extinct in the kernel.

CLONE_PIDFD added
Christian Brauner added the new CLONE_PIDFD flag to the clone() system call, which complements the pidfd work in v5.1 so that programs can now gain a handle for a process ID right at fork() (actually clone()) time, instead of needing to get the handle from /proc after process creation. With signals and forking now enabled, the next major piece (already in linux-next) will be adding P_PIDFD to the waitid() system call, and common process management can be done entirely with pidfd.

Edit: added CLONE_PIDFD notes, as reminded by Christian Brauner. :)

That's it for now; let me know if you think I should add anything here. We're almost to -rc1 for v5.3!

© 2019, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

18 Jul 2019 12:07am GMT

17 Jul 2019

feedKernel Planet

Linux Plumbers Conference: System Boot and Security Microconference Accepted into 2019 Linux Plumbers Conference

We are pleased to announce that the System Boot and Security Microconference has been accepted into the 2019 Linux Plumbers Conference! Computer-system security is a topic that has gotten a lot of serious attention over the years, but nowhere near as much attention has been paid to the system firmware, which is also a target for those looking to wreak havoc on our systems. Firmware is now being developed with security in mind, but so far provides only incomplete solutions. This microconference will focus on the security of the system, especially from the time the system is powered on.

Expected topics for this year include:

Come and join us in the discussion of keeping your system secure even at boot up.

We hope to see you there!

17 Jul 2019 4:57pm GMT

10 Jul 2019

feedKernel Planet

Linux Plumbers Conference: Power Management and Thermal Control Microconference Accepted into 2019 Linux Plumbers Conference

We are pleased to announce that the Power Management and Thermal Control Microconference has been accepted into the 2019 Linux Plumbers Conference! Power management and thermal control are important areas in the Linux ecosystem that help protect the environment of the planet. In recent years, computer systems have been becoming more and more complex, and more thermally challenged, while energy-efficiency expectations for them have been growing. This trend is likely to continue in the foreseeable future despite the progress made in the power-management and thermal-control problem space since last year's Linux Plumbers Conference. That progress includes, but is not limited to, the merging of the energy-aware scheduling patch series and CPU idle-time management improvements; there will be more work to do in those areas. This gathering will focus on continuing to have Linux meet the power-management and thermal-control challenge.

Topics for this year include:

Come and join us in the discussion of how to extend the battery life of your laptop while keeping it cool.

We hope to see you there!

10 Jul 2019 10:29pm GMT

09 Jul 2019

feedKernel Planet

Linux Plumbers Conference: Android Microconference Accepted into 2019 Linux Plumbers Conference

We are pleased to announce that the Android Microconference has been accepted into the 2019 Linux Plumbers Conference! Android has a long history at Linux Plumbers and has continually made progress as a direct result of these meetings. This year's focus will be a fairly ambitious goal to create a Generic Kernel Image (GKI) (or one kernel to rule them all!). Having a GKI will allow silicon vendors to be independent of the Linux kernel running on the device. As such, kernels could be easily upgraded without requiring any rework of the initial hardware porting efforts. This microconference will also address areas that have been discussed in the past.

The proposed topics include:

Come and join us in the discussion of improving what is arguably the most popular operating system in the world!

We hope to see you there!

09 Jul 2019 11:29pm GMT

Matthew Garrett: Bug bounties and NDAs are an option, not the standard

Zoom had a vulnerability that allowed users on MacOS to be connected to a video conference with their webcam active simply by visiting an appropriately crafted page. Zoom's response has largely been to argue that:

a) There's a setting you can toggle to disable the webcam being on by default, so this isn't a big deal,
b) When Safari added a security feature requiring that users explicitly agree to launch Zoom, this created a poor user experience and so they were justified in working around this (and so introducing the vulnerability), and,
c) The submitter asked whether Zoom would pay them for disclosing the bug, and when Zoom said they'd only do so if the submitter signed an NDA, they declined.

(a) and (b) are clearly ludicrous arguments, but (c) is the interesting one. Zoom go on to mention that they disagreed with the severity of the issue, and in the end decided not to change how their software worked. If the submitter had agreed to the terms of the NDA, then Zoom's decision that this was a low severity issue would have led to them being given a small amount of money and never being allowed to talk about the vulnerability. Since Zoom apparently have no intention of fixing it, we'd presumably never have heard about it. Users would have been less informed, and the world would have been a less secure place.

The point of bug bounties is to provide people with an additional incentive to disclose security issues to companies. But what incentive are they offering? Well, that depends on who you are. For many people, the amount of money offered by bug bounty programs is meaningful, and agreeing to sign an NDA is worth it. For others, the ability to publicly talk about the issue is worth more than whatever the bounty may award - being able to give a presentation on the vulnerability at a high profile conference may be enough to get you a significantly better paying job. Others may be unwilling to sign an NDA on principle, refusing to trust that the company will ever disclose the issue or fix the vulnerability. And finally there are people who can't sign such an NDA - they may have discovered the issue on work time, and employer policies may prohibit them doing so.

Zoom are correct that it's not unusual for bug bounty programs to require NDAs. But when they talk about this being an industry standard, they come awfully close to suggesting that the submitter did something unusual or unreasonable in rejecting their bounty terms. When someone lets you know about a vulnerability, they're giving you an opportunity to have the issue fixed before the public knows about it. They've done something they didn't need to do - they could have just publicly disclosed it immediately, causing significant damage to your reputation and potentially putting your customers at risk. They could potentially have sold the information to a third party. But they didn't - they came to you first. If you want to offer them money in order to encourage them (and others) to do the same in future, then that's great. If you want to tie strings to that money, that's a choice you can make - but there's no reason for them to agree to those strings, and if they choose not to then you don't get to complain about that afterwards. And if they make it clear at the time of submission that they intend to publicly disclose the issue after 90 days, then they're acting in accordance with widely accepted norms. If you're not able to fix an issue within 90 days, that's very much your problem.

If your bug bounty requires people sign an NDA, you should think about why. If it's so you can control disclosure and delay things beyond 90 days (and potentially never disclose at all), look at whether the amount of money you're offering for that is anywhere near commensurate with the value the submitter could otherwise gain from the information and compare that to the reputational damage you'll take from people deciding that it's not worth it and just disclosing unilaterally. And, seriously, never ask for an NDA before you're committing to a specific $ amount - it's never reasonable to ask that someone sign away their rights without knowing exactly what they're getting in return.

tl;dr - a bug bounty should only be one component of your vulnerability reporting process. You need to be prepared for people to decline any restrictions you wish to place on them, and you need to be prepared for them to disclose on the date they initially proposed. If they give you 90 days, that's entirely within industry norms. Remember that a bargain is being struck here - you offering money isn't being generous, it's you attempting to provide an incentive for people to help you improve your security. If you're asking people to give up more than you're offering in return, don't be surprised if they say no.

09 Jul 2019 9:15pm GMT

Linux Plumbers Conference: Update on LPC 2019 registration waiting list

Here is an update regarding the registration situation for LPC2019.

The considerable interest for participation this year meant that the conference sold out earlier than ever before.

Instead of a small release of late-registration spots, the LPC planning committee has decided to run a waiting list, which will be used as the exclusive method for additional registrations. The planning committee will reach out to individuals on the waiting list and invite them to register at the regular rate of $550 as spots become available.

With the majority of the Call for Proposals (CfP) still open, it is not yet possible to release passes. The planning committee and microconference leads are working together to allocate the passes earmarked for microconferences, and the Networking Summit and Kernel Summit speakers have yet to be confirmed as well.

The planning committee understands that many of those who added themselves to the waiting list wish to find out soon whether they will be issued a pass. We expect the first passes to be released on July 22nd at the earliest.

Please follow us on social media or here on this blog for further updates.

09 Jul 2019 1:36am GMT

11 Nov 2011

feedLinux Today

Tech Comics: "How to Live with Non-Geeks"

Datamation: Geeks must realize that non-geeks simply don't understand some very basic things.

11 Nov 2011 11:00pm GMT

How To Activate Screen Saver In Ubuntu 11.10

AddictiveTip: Ubuntu 11.10 does not come with a default screen saver, and even Gnome 3 provides nothing but a black screen when your system is idle.

11 Nov 2011 10:00pm GMT

XFCE: Your Lightweight, Speedy, Fully-Fledged Linux Desktop

MakeUseOf: As far as Linux goes, customization is king

11 Nov 2011 9:00pm GMT

Fedora Scholarship Recognizes Students for Their Contributions to Open Source Software

Red Hat: The Fedora Scholarship is awarded to one student each year to assist with the recipient's college or university education.

11 Nov 2011 8:00pm GMT

Digital Divide Persists Even as Broadband Adoption Grows

Datamation: A new report from the Dept. of Commerce shows that the 'have nots' continue to have not when it comes to the Internet.

11 Nov 2011 7:00pm GMT

Why GNOME refugees love Xfce

The Register: Thunar rather than later...

11 Nov 2011 6:00pm GMT

Everything should be open source, says WordPress founder

Between the Lines: "It's a bold statement, but it's the ethos that Mullenweg admirably stuck to, pointing out that sites like Wikipedia replaced Encyclopedia Britannica, and how far Android has gone for mobile."

11 Nov 2011 5:02pm GMT

The Computer I Need

LXer: "Before I had a cell phone I did not realize that I needed one. As of one week ago, I did not realize that I needed a tablet either but I can sense that it might be a similar experience."

11 Nov 2011 4:01pm GMT

GPL violations in Android: Same arguments, different day

IT World: "IP attorney Edward J. Naughton is repeating his arguments that Google's use of Linux kernel header files within Android may be in violation of the GNU General Public License (GPLv2), and tries to discredit Linus Torvalds' thoughts on the matter along the way."

11 Nov 2011 3:04pm GMT

No uTorrent for Linux by Year's End

Softpedia: "When asked why there's no uTorrent client version for Linux users out, BitTorrent Inc. said that the company has other priorities at the moment."

11 Nov 2011 2:01pm GMT

Keep an Eye on Your Server with phpSysInfo

Linux Magazine: "There are quite a few server monitoring solutions out there, but most of them are overkill for keeping an eye on a single personal server."

11 Nov 2011 1:03pm GMT

At long last, Mozilla Releases Lightning 1.0 Calendar

InternetNews: From the 'Date and Time' files:

11 Nov 2011 12:00pm GMT

Richard Stallman's Personal Ad

Editors' Note: You can't make this stuff up...

11 Nov 2011 10:00am GMT

Linux Top 5: Fedora 16 Aims for the Cloud

LinuxPlanet: There are many things to explore on the Linux Planet. This week, a new Fedora release provides plenty of items to examine. The new Fedora release isn't the only new open source release this week, as the Linux Planet welcomes new KDE and Firefox releases as well.

11 Nov 2011 9:00am GMT

Orion Editor Ships in Firefox 8

Planet Orion: Firefox 8 now includes the Orion code editor in its scratchpad feature.

11 Nov 2011 6:00am GMT