19 Sep 2018

LXer Linux News

How to Create Python Virtual Environments on Ubuntu 18.04

In this tutorial, we'll provide step-by-step instructions on how to create Python virtual environments on Ubuntu 18.04.
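
The excerpt above doesn't include the commands themselves. As a rough illustration only (assuming the Python 3 that ships with Ubuntu 18.04), the standard library's venv module can create an environment programmatically, equivalent to running "python3 -m venv myenv" from a shell:

    import venv

    # Create a virtual environment in ./myenv and install pip into it.
    # Afterwards, activate it from a shell with: source myenv/bin/activate
    venv.create("myenv", with_pip=True)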

19 Sep 2018 5:55am GMT

Raspberry Pi I/O add-on targets aquaponics and hydroponics

Upsilon is Kickstartering a "BioControle" I/O add-on board for the RPi 3 designed for aquaponics and hydroponics. The $89, open-spec add-on offers power-protected 12-bit ADC and DAC, 4x relays, servo outputs, and sensor and logical I/O. We knew it was only a matter of time before we covered a board from Luxembourg, and that time […]

19 Sep 2018 4:41am GMT

Explore the immersive web with Firefox Reality. Now available for Viveport, Oculus, and Daydream

Earlier this year, we shared that we are building a completely new browser called Firefox Reality. The mixed reality team at Mozilla set out to build a web browser that has been designed from the ground up to work on stand-alone virtual and augmented reality (or mixed reality) headsets. Today, we are pleased to announce that the first release of Firefox Reality is available in the Viveport, Oculus, and Daydream app stores.

19 Sep 2018 3:26am GMT

Linux Community to Adopt New Code of Conduct, Firefox Reality Browser Now Available, Lamplight City Game Released, openSUSE Summit Nashville Announced and It's Now Easier to Run Ubuntu VMs on Windows

News briefs for September 18, 2018.

19 Sep 2018 2:12am GMT

Linuxtoday.com

Ubuntu 18.10 Cosmic Cuttlefish New Features

Ubuntu 18.10, codenamed Cosmic Cuttlefish, is just around the corner and is planned for release next month, on 18 October 2018.

19 Sep 2018 2:00am GMT

LXer Linux News

Linux and Open Source FAQs: Common Myths and Misconceptions Addressed

LinuxSecurity.com: LinuxSecurity debunks some common myths and misconceptions regarding open source and Linux by answering a few Linux-related frequently asked questions.

19 Sep 2018 12:58am GMT

18 Sep 2018

LXer Linux News

Variety Wallpaper Changer And Downloader 0.7.0 Ported To Python 3, Adds Support For Setting GDM Background

A new major version of Variety Wallpaper Changer is out. With the latest 0.7.0 release, Variety was ported to Python 3, while also receiving some improvements like support for setting the Gnome Screensaver / GDM background to match the desktop wallpaper.

18 Sep 2018 11:43pm GMT

How to Change Cursor Size on Ubuntu Desktop

This article shows you how to change the cursor size on the Ubuntu desktop, both from the GUI and from the command line.

18 Sep 2018 10:29pm GMT

Linuxtoday.com

Kong 1.0 Debuts Providing Open Source API Platform for Cloud Native Applications

EnterpriseAppsToday: APIs are the cornerstone of modern applications.

18 Sep 2018 10:00pm GMT

LXer Linux News

Chrome OS 69 Finally Brings Linux Apps to Some Chromebooks, Night Light Feature

Google released today the Chrome OS 69 operating system for Chromebook devices, a major release that follows in the footsteps of the Chrome 69 release, which arrived earlier this week with a new look on all supported platforms and major feature updates.

18 Sep 2018 9:15pm GMT

Linuxtoday.com

The IT Security Mistakes that Led to the Equifax Breach

eSecurityPlanet: Patching failures alone didn't lead to the massive data breach at Equifax. Here are a half-dozen other mistakes that Equifax made that IT security teams should learn from.

18 Sep 2018 9:00pm GMT

LXer Linux News

Writing More Compact Bash Code

In any programming language, idioms may be used that may not seem obvious from reading the manual. Often these usages of the language represent ways to make your code more compact (as in requiring fewer lines of code).

18 Sep 2018 8:00pm GMT

Linuxtoday.com

Time to Rebuild Alpine Linux Docker Containers After Package Manager Patch

itprotoday: A package manager man-in-the-middle vulnerability put Alpine Linux Docker images at risk, but a patch is available now.

18 Sep 2018 8:00pm GMT

Artificial intelligence: The king of disruptors

How to help IT teams prepare for, and benefit from, artificial intelligence tools

18 Sep 2018 7:00pm GMT

LXer Linux News

Did your open source career begin with video games?

Certainly you don't need to be a gamer as a child to grow up and become a developer, nor does being a gamer automatically set you up for a career in technology. But there's definitely a good bit of overlap between the two.

18 Sep 2018 6:46pm GMT

Linuxtoday.com

Getting started with openmediavault: A home NAS solution

This network-attached file server offers a solid array of features and is easy to install and configure.

18 Sep 2018 6:00pm GMT

LXer Linux News

Cozy Is A Nice Linux Audiobook Player For DRM-Free Audio Files

Cozy is a free and open source audiobook player for the Linux desktop. The application lets you listen to DRM-free audiobooks (mp3, m4a, flac, ogg and wav) using a modern Gtk3 interface.

18 Sep 2018 5:32pm GMT

Linuxtoday.com

How gaming turned me into a coder

opensource.com: Text-based adventure gaming leads to a satisfying career in tech.

18 Sep 2018 5:00pm GMT

LXer Linux News

How to Run Commands Simultaneously in Linux

Let's say you're editing a configuration file in the Linux "vi" editor and suddenly need to look up some data in another file. On a regular GUI system, this wouldn't be a problem: you just open the second file, check what you need, and then switch back to the first program. On the command line, it isn't that simple.

18 Sep 2018 4:17pm GMT

Linuxtoday.com

4 scanning tools for the Linux desktop

Go paperless by driving your scanner with one of these open source applications.

18 Sep 2018 4:00pm GMT

LXer Linux News

3 top Python libraries for data science

Python's many attractions, such as efficiency, code readability, and speed, have made it the go-to programming language for data science enthusiasts. Python is usually the preferred choice for data scientists and machine learning experts who want to escalate the functionalities of their applications. (For example, Andrey Bulezyuk used the Python programming language to create an amazing machine learning application.)
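
The excerpt doesn't name the three libraries the article covers, so take this as a hedged illustration only: a tiny NumPy sketch of the kind of concise, vectorized code that makes Python attractive for data work (NumPy is my assumption here, not necessarily one of the article's picks):

    import numpy as np

    # Vectorized statistics over a column of measurements, no explicit loop needed.
    data = np.array([2.5, 3.1, 2.9, 3.4, 2.7])
    print(data.mean(), data.std())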

18 Sep 2018 3:03pm GMT

Linuxtoday.com

Webinoly - Easily Set Up Optimized LEMP Stack For WordPress In Ubuntu

ostechnix: Webinoly - A Script To Easily Set Up An Optimized LEMP Stack For WordPress In Ubuntu

18 Sep 2018 3:00pm GMT

LXer Linux News

Linux tricks that can save you time and trouble

Some command line tricks that can make you even more productive on the Linux command line.

18 Sep 2018 1:49pm GMT

The History of Various Linux Distros

Linux has been around for almost 30 years. Here is a brief history of various popular Linux distros, the reasons for their creation and their philosophy.

18 Sep 2018 12:34pm GMT

Linux firewalls: What you need to know about iptables and firewalld

This article is excerpted from my book, Linux in Action, and a second Manning project that's yet to be released. A firewall is a set of rules. When a data packet moves into or out of a protected network space, its contents (in particular, information about its origin, target, and the protocol it plans to use) are tested against the firewall rules to see if it should be allowed through. Here's a simple example:
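
The book's own simple example isn't included in this excerpt. As a stand-in, here is a minimal Python sketch of the idea just described: a packet's origin, target, and protocol are tested against an ordered rule list, and the first match decides whether it is allowed through. All names below are illustrative assumptions, not taken from the book or from iptables/firewalld.

    # Each rule matches on (src, dst, proto); None acts as a wildcard.
    RULES = [
        {"src": None, "dst": "10.0.0.5", "proto": "tcp", "allow": True},   # allow TCP to one server
        {"src": None, "dst": None,       "proto": None,  "allow": False},  # default: drop everything else
    ]

    def is_allowed(src, dst, proto):
        """Return True if the first matching rule allows the packet."""
        for rule in RULES:
            if all(rule[k] in (None, v) for k, v in (("src", src), ("dst", dst), ("proto", proto))):
                return rule["allow"]
        return False  # no rule matched: drop by default

    print(is_allowed("192.0.2.7", "10.0.0.5", "tcp"))   # True
    print(is_allowed("192.0.2.7", "10.0.0.9", "udp"))   # False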

18 Sep 2018 11:20am GMT

17 Sep 2018

Kernel Planet

Pete Zaitcev: Robots on TV

Usually I do not watch TV, but I traveled and saw a few of them in public food intake places and such. What caught my attention were ads for robotics companies, aimed at business customers. IIRC, the companies were called generic names like "Universal Robotics" and "Reach Robotics". Or so I recall, but on second thought, Reach Robotics is a thing, but it focuses on gaming, not traditional robotics. But the ads depicted robots doing some unspecified stuff: moving objects from place to place. Not dueling bots. Anyway, what's up with this? Is there some sort of revolution going on? What was the enabler? Don't tell me it's all the money released by the end of Moore's Law, seeking random fields of application.

P.S. I know about the "Pentagon's Evil Mechanical Dogs" by Boston Dynamics. These were different, manipulating objects in the environment.

17 Sep 2018 7:35pm GMT

Linux Plumbers Conference: RISC-V microconference accepted for Linux Plumbers Conference

The open nature of the RISC-V ecosystem has allowed contributions from both academia and industry to lead to an unprecedented number of new hardware design proposals in a very short time span. Linux support is the key to enabling these new hardware options.

The primary objective of the RISC-V microconference at Plumbers is to initiate a community-wide discussion about the design problems/ideas for different Linux kernel features that will lead to a better, stable kernel for RISC-V.

Topics for this microconference include:

If you're interested in participating in this microconference or have other topics to propose, please contact Palmer Dabbelt (palmer@sifive.com) or Atish Patra (atish.patra@wdc.com).

LPC will be held in Vancouver, British Columbia, Canada from Tuesday, November 13 through Thursday, November 15.

We hope to see you there!

17 Sep 2018 4:16pm GMT

12 Sep 2018

Kernel Planet

Gustavo F. Padovan: linuxdev-br: a Linux international conference in Brazil

The second edition of linuxdev-br took place at the end of last month in Campinas, Brazil. We have put a nice write-up about the conference at the link below. Soon we will start planning next year's event. Come and join our community!

linuxdev-br: a Linux international conference in Brazil


12 Sep 2018 11:48am GMT

11 Sep 2018

Kernel Planet

Linux Plumbers Conference: Looking forward to the Kernel Summit at LPC 2018

The LPC 2018 program committee would like to reiterate that the Kernel Summit is going ahead as planned as a track within the Linux Plumbers Conference in Vancouver, BC, November 13th through 15th. However, the Maintainers Summit half day, which is by invitation only, has been rescheduled to be colocated with OSS Europe in Edinburgh, Scotland on October 22nd. Attendees of the Maintainers Summit, once known, will still receive free passes to LPC and thus will probably be present in Vancouver as well.

Also a reminder that the CFP for the Kernel Summit is still open until September 21st 2018: to submit a discussion topic, please use a separate email for each topic with each subject line tagged with [TECH TOPIC], and send these emails to: ksummit-discuss@lists.linuxfoundation.org

Looking forward to seeing you all in Vancouver!

11 Sep 2018 8:20pm GMT

Linux Plumbers Conference: Tech Topics for Kernel Summit

If you missed the refereed-track deadline and you have a kernel-related topic (or, for that matter, if you just now thought of a kernel-related topic), please consider submitting it for the Kernel Summit. To do this, please use a separate email for each topic with each subject line tagged with [TECH TOPIC], and send these emails to:

ksummit-discuss@lists.linuxfoundation.org

If you submit your topic suggestions before September 21st, and if one of your suggestions is accepted, then you will be given free admission to the Linux Plumbers Conference.

11 Sep 2018 4:40pm GMT

10 Sep 2018

Kernel Planet

Matthew Garrett: The Commons Clause doesn't help the commons

The Commons Clause was announced recently, along with several projects moving portions of their codebase under it. It's an additional restriction intended to be applied to existing open source licenses with the effect of preventing the work from being sold[1], where the definition of being sold includes being used as a component of an online pay-for service. As described in the FAQ, this changes the effective license of the work from an open source license to a source-available license. However, the site doesn't go into a great deal of detail as to why you'd want to do that.

Fortunately one of the VCs behind this move wrote an opinion article that goes into more detail. The central argument is that Amazon make use of a great deal of open source software and integrate it into commercial products that are incredibly lucrative, but give little back to the community in return. By adopting the commons clause, Amazon will be forced to negotiate with the projects before being able to use covered versions of the software. This will, apparently, prevent behaviour that is not conducive to sustainable open-source communities.

But this is where things get somewhat confusing. The author continues:

Our view is that open-source software was never intended for cloud infrastructure companies to take and sell. That is not the original ethos of open source.

which is a pretty astonishingly unsupported argument. Open source code has been incorporated into proprietary applications without giving back to the originating community since before the term open source even existed. MIT-licensed X11 became part of not only multiple Unixes, but also a variety of proprietary commercial products for non-Unix platforms. Large portions of BSD ended up in a whole range of proprietary operating systems (including older versions of Windows). The only argument in favour of this assertion is that cloud infrastructure companies didn't exist at that point in time, so they weren't taken into consideration[2] - but no argument is made as to why cloud infrastructure companies are fundamentally different to proprietary operating system companies in this respect. Both took open source code, incorporated it into other products and sold them on without (in most cases) giving anything back.

There's one counter-argument. When companies sold products based on open source code, they distributed it. Copyleft licenses like the GPL trigger on distribution, and as a result selling products based on copyleft code meant that the community would gain access to any modifications the vendor had made - improvements could be incorporated back into the original work, and everyone benefited. Incorporating open source code into a cloud product generally doesn't count as distribution, and so the source code disclosure requirements don't trigger. So perhaps that's the distinction being made?

Well, no. The GNU Affero GPL has a clause that covers this case - if you provide a network service based on AGPLed code then you must provide the source code in a similar way to if you distributed it under a more traditional copyleft license. But the article's author goes on to say:

AGPL makes it inconvenient but does not prevent cloud infrastructure providers from engaging in the abusive behavior described above. It simply says that they must release any modifications they make while engaging in such behavior.

IE, the problem isn't that cloud providers aren't giving back code, it's that they're using the code without contributing financially. There's no difference between what cloud providers are doing now and what proprietary operating system vendors were doing 30 years ago. The argument that "open source" was never intended to permit this sort of behaviour is simply untrue. The use of permissive licenses has always allowed large companies to benefit disproportionately when compared to the authors of said code. There's nothing new to see here.

But that doesn't mean that the status quo is good - the argument for why the commons clause is required may be specious, but that doesn't mean it's bad. We've seen multiple cases of open source projects struggling to obtain the resources required to make a project sustainable, even as many large companies make significant amounts of money off that work. Does the commons clause help us here?

As hinted at in the title, the answer's no. The commons clause attempts to change the power dynamic of the author/user role, but it does so in a way that's fundamentally tied to a business model and in a way that prevents many of the things that make open source software interesting to begin with. Let's talk about some problems.

The power dynamic still doesn't favour contributors

The commons clause only really works if there's a single copyright holder - if not, selling the code requires you to get permission from multiple people. But the clause does nothing to guarantee that the people who actually write the code benefit, merely that whoever holds the copyright does. If I rewrite a large part of a covered work and that code is merged (presumably after I've signed a CLA that assigns a copyright grant to the project owners), I have no power in any negotiations with any cloud providers. There's no guarantee that the project stewards will choose to reward me in any way. I contribute to them but get nothing back in return - instead, my improved code allows the project owners to charge more and provide stronger returns for the VCs. The inequity has shifted, but individual contributors still lose out.

It discourages use of covered projects

One of the benefits of being able to use open source software is that you don't need to fill out purchase orders or start commercial negotiations before you're able to deploy. Turns out the project doesn't actually fill your needs? Revert it, and all you've lost is some development time. Adding additional barriers is going to reduce uptake of covered projects, and that does nothing to benefit the contributors.

You can no longer meaningfully fork a project

One of the strengths of open source projects is that if the original project stewards turn out to violate the trust of their community, someone can fork it and provide a reasonable alternative. But if the project is released with the commons clause, it's impossible to sell any forked versions - anyone who wishes to do so would still need the permission of the original copyright holder, and they can refuse that in order to prevent a fork from gaining any significant uptake.

It doesn't inherently benefit the commons

The entire argument here is that the cloud providers are exploiting the commons, and by forcing them to pay for a license that allows them to make use of that software the commons will benefit. But there's no obvious link between these things. Maybe extra money will result in more development work being done and the commons benefiting, but maybe extra money will instead just result in greater payout to shareholders. Forcing cloud providers to release their modifications to the wider world would be of benefit to the commons, but this is explicitly ruled out as a goal. The clause isn't inherently incompatible with this - the negotiations between a vendor and a project to obtain a license to be permitted to sell the code could include a commitment to provide patches rather than money, for instance, but the focus on money makes it clear that this wasn't the authors' priority.

What we're left with is a license condition that does nothing to benefit individual contributors or other users, and costs us the opportunity to fork projects in response to disagreements over design decisions or governance. What it does is ensure that a range of VC-backed projects are in a better position to improve their returns, without any guarantee that the commons will be left better off. It's an attempt to solve a problem that's existed since before the term "open source" was even coined, by simply layering on a business model that's also existed since before the term "open source" was even coined[3]. It's not anything new, and open source derives from an explicit rejection of this sort of business model.

That's not to say we're in a good place at the moment. It's clear that there is a giant level of power disparity between many projects and the consumers of those projects. But we're not going to fix that by simply discarding many of the benefits of open source and going back to an older way of doing things. Companies like Tidelift[4] are trying to identify ways of making this sustainable without losing the things that make open source a better way of doing software development in the first place, and that's what we should be focusing on rather than just admitting defeat to satisfy a small number of VC-backed firms that have otherwise failed to develop a sustainable business model.

[1] It is unclear how this interacts with licenses that include clauses that assert you can remove any additional restrictions that have been applied
[2] Although companies like Hotmail were making money from running open source software before the open source definition existed, so this still seems like a reach
[3] "Source available" predates my existence, let alone any existing open source licenses
[4] Disclosure: I know several people involved in Tidelift, but have no financial involvement in the company


10 Sep 2018 11:38pm GMT

08 Sep 2018

Kernel Planet

Paul E. Mc Kenney: Ancient Hardware I Have Hacked: Back to Basics!

My return to the IBM mainframe was delayed by my high school's acquisition of a teletype connected via a 110-baud serial line to a timesharing system featuring the BASIC language. I was quite impressed with this teletype because it could type quite a bit faster than I could. But this is not as good as it might sound, given that I came in dead last in every test of manual dexterity that the school ever ran us through. In fact, on a good day, I might have been able to type 20 words a minute, and it took decades of constant practice to eventually get above 70 words a minute. In contrast, one of the teachers could type 160 words a minute, more than half again faster than the teletype could!

Aside from output speed, I remained unimpressed with computers compared to paper and pencil, let alone compared to my pocket calculator. And given that this was old-school BASIC, there was much to be unimpressed about. You could name your arrays anything you wanted, as long as that name was a single upper-case character. Similarly, you could name your scalar variables anything you wanted, as long as that name was either a single upper-case character or a single upper-case character followed by a single digit. This allowed you to use up to 286 variables, up to 26 of which could be arrays. If you felt that GOTO was harmful, too bad. If you wanted a while loop, you could make one out of IF statements. Not only did IF statements have no else clause, the only thing that could be in the THEN clause was the number of the line to which control would transfer when the IF condition evaluated to true. And each line had to be numbered, and the numbers had to be monotonically increasing, that is, in the absence of control-flow statements, the program would execute the lines of code in numerical order, regardless of the order in which you typed those lines of code. Definitely a step down, even from FORTRAN.

But then the teacher showed the class a documentary movie showing several problems that could be solved by computer. I was unimpressed by most of the problems: Printing out prime numbers was impressive but pointless, and maximizing the volume of a box given limited materials was a simple pencil-and-paper exercise in calculus. But the finite-element analysis fluid-flow problem did get my attention. This featured a rectangular aquarium with a glass divider, so that initially the right-hand half of the aquarium was full of water and the left-hand half was full of air. They abruptly removed the glass divider, causing the water to slosh back and forth. They then showed a video of a computer simulation of the water flow, which matched the actual water flow quite well. There was no way I could imagine doing anything like that by hand, and was thus inspired to continue studying computer programming.

We students therefore searched out things that the computer could do that we were unwilling or unable to. One of my classmates ran the teletype's punch-tape output through its punch-tape reader, thus giving us all great insight as to why teletypes on television shows appeared to be so busy. For some reason, our teacher felt that this project was a waste of both punched tape and paper. He was more impressed with the work of another classmate, who calculated and ASCII-art printed magnetic lines of force. Despite the teletype's use of eight-bit ASCII, its print head was quite innocent of lower-case characters.

I coded up a project that plotted the zeroes of functions of two variables as ASCII art on the teletype. My teacher expressed some disappointment in my brute-force approach to locating the zeroes, but as far as I could see the bottleneck was the teletype, not the CPU. Besides, the timesharing service charged only for connect time, so CPU time was free, and why conserve a zero-cost resource?
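
As an aside, a rough modern-day reconstruction of that brute-force approach might look like this in Python (my own sketch, not the original BASIC): scan a grid of points and print a mark wherever the function is close to zero.

    def plot_zeroes(f, xmin=-2.0, xmax=2.0, ymin=-2.0, ymax=2.0, cols=60, rows=20, eps=0.1):
        """Brute-force ASCII plot: '*' where |f(x, y)| is small, '.' elsewhere."""
        for r in range(rows):
            y = ymax - (ymax - ymin) * r / (rows - 1)
            line = ""
            for c in range(cols):
                x = xmin + (xmax - xmin) * c / (cols - 1)
                line += "*" if abs(f(x, y)) < eps else "."
            print(line)

    # Example: the zeroes of x^2 + y^2 - 1, i.e. the unit circle.
    plot_zeroes(lambda x, y: x * x + y * y - 1)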

I worked around the computer's limited arithmetic using crude multi-precision code with the goal of computing one thousand factorial. In this case, CPU was definitely the bottleneck, especially given my naive multiplication algorithm. The largest timeslot I could reserve on the teletype was an hour, and during that time, the computer was only able to make it to 659 factorial. In contrast, Maxima takes a few tens of milliseconds to compute 1000 factorial on my laptop. What a difference four decades makes!
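
For comparison, the same computation is now a couple of lines; Python's arbitrary-precision integers do the multi-precision work that had to be hand-rolled back then (a quick sketch, not a reconstruction of the original code):

    import math

    # 1000! has 2,568 decimal digits and is computed essentially instantly today.
    print(len(str(math.factorial(1000))))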

I wrote my first professional program on this computer, a pro bono effort for a charity fundraiser. This charity was the work of the local branch of the National Honor Society, and the fundraiser was a computer-dating dance. Given that I was 160 pounds (73 kilograms) of computer-geeky social plutonium, I felt the need to consult an expert. The expert I chose was the home-economics teacher, who unfortunately seemed much more interested in working out why I was such a hopeless geek than in helping with matching criteria. I nevertheless extracted sufficient information to construct a simple Hamming-distance matcher. Fortunately most people seemed reasonably satisfied with their computer-chosen dance partners, the most notable exception being a senior girl who objected strenuously to having been matched only with freshmen boys. Further investigation determined that this mismatch was due to a data-entry error. Apparently, even Cupid is subject to Murphy's Law.
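
A Hamming-distance matcher of that sort takes only a few lines today. Here is a hedged reconstruction in Python; the questionnaire format and names are invented for illustration, not taken from the original program:

    def hamming(a, b):
        """Number of questionnaire answers on which two people differ."""
        return sum(x != y for x, y in zip(a, b))

    def best_matches(person, candidates):
        """Sort candidates by how closely their answers match person's."""
        return sorted(candidates, key=lambda c: hamming(person["answers"], c["answers"]))

    alice = {"name": "Alice", "answers": "ABBAC"}
    others = [{"name": "Bob", "answers": "ABBAA"},
              {"name": "Carol", "answers": "CCBAC"}]
    print([c["name"] for c in best_matches(alice, others)])  # ['Bob', 'Carol']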

I also did my one and (thus far) only stint of white-hat hacking. In those trusting times, the school-administration software printed the user's password in cleartext as it was typed. But it was not necessary to memorize the characters that the user typed. You see, this teletype had what is called a ``HERE IS'' key. When this key was pressed, the teletype would send a 20-character sequence recorded on a mechanical drum located inside the teletype. And the sequence recorded on this particular teletype's mechanical drum was, you guessed it, the password to the school-administration software. I demonstrated this to my teacher, which resulted in the teletype being under continuous guard by a school official until such time as the mechanical drum could be replaced with one containing 20 ASCII NUL characters. (And here you thought that security theater was a recent phenomenon!)

Despite its limitations, my two years with this system were quite entertaining and educational. But then it was time to move on to college.

08 Sep 2018 10:29pm GMT

06 Sep 2018

Kernel Planet

Linux Plumbers Conference: Devicetree Microconference Accepted into 2018 Linux Plumbers Conference

We are pleased to announce that the Devicetree Microconference has been accepted into the 2018 Linux Plumbers Conference!

Devicetree provides hardware description for many platforms, such as Linux [1], U-Boot [2], BSD [3], and Zephyr [4]. Devicetree continues to evolve to become more robust and to provide the features desired by its varied users.

Some of the overlay related needs are now being addressed by U-boot, but there remain use cases for run time overlay management in the Linux kernel. Support for run time overlay management in the Linux kernel is slowly moving forward, but significant issues remain [5].

Devicetree verification has been an ongoing project for several years, with the most recent in person discussion occurring at the Devicetree Workshop [6] at Kernel Summit 2017. Progress continues on mail lists, and will be an important topic at the microconference.

Other Devicetree related tools, such as the dtc compiler and libfdt [7] continue to see active development.

Additional possible issues to be discussed may include potential changes to the Flattened Device Tree (FDT) format, reducing the Devicetree memory and storage size in the Linux kernel, creating new architecture to provide solutions to current problems, updating the Devicetree Specification, and using devicetrees in constrained contexts.

LPC [8] will be held in Vancouver, British Columbia, Canada from Tuesday, November 13 through Thursday, November 15.

[1] https://elinux.org/Device_Tree_Reference
[2] https://github.com/lentinj/u-boot/blob/master/doc/README.fdt-control
[3] https://wiki.freebsd.org/FlattenedDeviceTree
[4] http://docs.zephyrproject.org/devices/dts/device_tree.html
[5] https://elinux.org/Frank%27s_Evolving_Overlay_Thoughts
[6] https://elinux.org/Device_tree_future#Kernel_Summit_2017.2C_Devicetree_Workshop
[7] https://elinux.org/Device_Tree_Reference#dtc_.28upstream_project.29
[8] https://linuxplumbersconf.org/

06 Sep 2018 12:50pm GMT

04 Sep 2018

Kernel Planet

Paul E. Mc Kenney: Ancient Hardware I Have Hacked: My First Computer

For the first couple of decades of my life, computers as we know them today were exotic beasts that filled rooms, each requiring the care of a cadre of what were then called systems programmers. Therefore, in my single-digit years the closest thing to a computer that I laid my hands on was a typewriter-sized electromechanical calculator that did addition, subtraction, multiplication, and division. I had the privilege of using this captivating device when helping out with accounting at the small firm at which my mother and father worked.

I was an early fan of hand-held computing devices. In fact, I was in the last math class in my high school that was required to master a slide rule, of which I still have several. I also learned how to use an abacus, including not only addition and subtraction, but multiplication and division as well. Finally, I had the privilege of living through the advent of the electronic pocket calculator. My first pocket calculator was a TI SR-50, which put me firmly on the infix side of the ensuing infix/Polish religious wars.

But none of these qualified as "real computers".

Unusually for an early 1970s rural-Oregon high school, mine offered computer programming courses. About the only thing I knew about computers was that they would be important in the future, so I signed up. Even more unusually for that time and place, we got to use a real computer, namely an IBM 360. This room-filling monster was located fourteen miles (23 kilometers) away at Chemeketa Community College. As far as I know, this was the closest computer to my home and school. Somehow my math teacher managed to wangle use of this machine on Tuesday and Thursday evenings, and he bussed us there and back.

This computer used punched cards and a state-of-the-art chain lineprinter. We were allowed to feed the card reader ourselves, but operating the lineprinter required special training. This machine's console had an attractive red button labeled EMERGENCY PULL. The computer's operator, who would later distinguish himself by creating a full-motion video on an Apple II, quite emphatically stated that this button should be pulled only in case of a bona fide emergency. He also gave us a simple definition of "emergency" that featured flames shooting out of the top of the computer. I never did see any flames anywhere near the computer, much less shooting out of its top, so I never had occasion to pull that button. But perhaps the manufacturers of certain incendiary laptops should have equipped each of them with an attractive red EMERGENCY PULL button.

Having provided us the necessary hardware training, the operator then gave us a sample card deck. We were to put our program at one specific spot in the deck, and our input data in another. Those of us wishing more information about how this worked were directed to an impressively large JCL manual.

The language of the class was FORTRAN, except that FORTRAN was deemed too difficult an initial language for our tender high-school minds. They therefore warmed us up with assembly language. Not IBM's celebrated Basic Assembly Language (BAL), but a simulated assembly language featuring base-10 arithmetic. After a couple of sessions with the simulated assembly, we moved up to FORTRAN, and even used PL/1 for one of our assignments. There were no error messages: There were instead error numbers that you looked up in a thick printed manual located in the same bookcase containing the JCL manual.

I was surprised by the computer's limitations, especially the 6-to-7 digit limits for single-precision floating point. After all, even my TI SR-50 pocket calculator did ten digits! That said, the computer could also do alphabetic characters (but only upper case) and a few symbols, though the exclamation point was notably missing. The state-of-the-art 029 keypunches were happy to punch an exclamation mark, but alas! It printed as "0" (zero) on the lineprinter.
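
That 6-to-7-digit limit is roughly what any 32-bit single-precision format gives you (the System/360 used hexadecimal floating point rather than today's IEEE 754, but the precision story is similar). A quick Python sketch that round-trips a ten-digit value through a 32-bit float shows the effect:

    import struct

    # Only about 7 significant decimal digits survive the 32-bit float round trip.
    x = 1234567890.0
    (y,) = struct.unpack("f", struct.pack("f", x))
    print(y)  # 1234567936.0 on IEEE-754 hardware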

I must confess that I was not impressed with the computer. In addition to its arithmetic limitations, its memory was quite small. Most of our assignments were small exercises in arithmetic that I could complete much more quickly using paper and pencil. In retrospect, this is not too surprising, given that my early laissez-faire programming methodology invariably resulted in interminable debugging sessions. However, it was quite clear that computers were becoming increasingly important, and I therefore resolved to take the class again the following year.

So, the last time I walked out of that machine room in Spring of 1974, I fully expected to walk back the following Fall. Little did I know that it would be almost 30 years before I would once again write code for an IBM mainframe. Nor did I suspect that it would be more than 15 years before work started on the operating system that was to be running on that 30-years-hence mainframe.

My limited foresight notwithstanding, somewhere in Finland a small boy was growing up.

04 Sep 2018 12:55am GMT

03 Sep 2018

Kernel Planet

Linux Plumbers Conference: CfP extended to Sunday September 9th

Happy Labor Day to those celebrating today!

We have had great response to our call for 2018 Linux Plumbers Conference refereed-track submissions.

However, it would seem that we are attracting a lot of procrastinators, given the number of emails we have received requesting an extension.

With the long weekend in North America, we are moving the deadline to Sunday September 9th at 10:59 PM (PST).

Now really is your last chance to make your great submission! Do not delay, submit your proposal now!

03 Sep 2018 1:59pm GMT

Pete Zaitcev: gai.conf

A couple Fedora releases back, I noticed that my laptop stopped using IPv6 to access dual-hosted services. I gave RFC-6724 a read, but it was much too involved for my small mind. Fortunately, it contained a simplified explanation:

Another effect of the default policy table is to prefer communication using IPv6 addresses to communication using IPv4 addresses, if matching source addresses are available.

My IPv6 is NAT-ed, so the laptop sees an RFC-4193 address fc00::/7. This does not match the globally assigned address of the external service. Therefore, a matching source address is not available, and things develop from there.

For now, I forced RFC-3484 with gai.conf. Basically, reverted to Fedora 26 behavior.
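
A quick way to see which family a given box prefers is to ask getaddrinfo directly and look at the order of the results, which reflects the policy table discussed above (a small sketch; the hostname is just a placeholder for any dual-stacked service):

    import socket

    # Print resolved addresses in the order getaddrinfo returns them; with the
    # default RFC 6724 rules and only a ULA (fc00::/7) source address, the IPv4
    # results may be sorted ahead of the IPv6 ones.
    for family, _type, _proto, _canon, sockaddr in socket.getaddrinfo(
            "example.org", 443, proto=socket.IPPROTO_TCP):
        print("IPv6" if family == socket.AF_INET6 else "IPv4", sockaddr[0])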

03 Sep 2018 3:01am GMT

01 Sep 2018

Kernel Planet

Pete Zaitcev: Vladimir Butenko 1962-2018

Butenko was simply the most capable programmer that I've ever worked with. He was also very accomplished. I'm sure everyone has an idea what UNIX v7 was. Although BSD, sockets, and VFS were still in the future, it was a sophisticated OS for its time. Butenko wrote his own OS that was about a peer for the v7 in features (including vi). He also wrote a Fortran 77 compiler with IDE, an SQL database, and a myriad other things. Applications, too: games, communications, industrial control.

I still remember one of our first meetings in late 1983. I wanted someone to explain to me the instruction set of Mitra-15, a French 16-bit mini. Documentation was practically impossible to get back then, especially for undergrads. Someone referred me, and I received a lecture at a smoking area near an elevator, which laid the foundation of my understanding of computer architecture.

The only time I ever got one up was when I wrote a utility to monitor processes (years later, top(1) does the same thing). Apparently the concept never occurred to Butenko, who was perfectly capable of analyzing the system with a debugger and profiler. Seeing just my UI, he knocked out a clone in a couple of days. Of course, it was superior in every respect.

Butenko worked a lot. The combination of genius and workaholic was unstoppable. Or maybe they were sides of the same coin.

Unfortunately, Butenko was not into open source. He used to post to Usenet, lampooning and dismissing Linux. I suspect once you can code your own Linux any time you want, your perspective changes a bit. This was a part of the way we drifted apart later on. I was plugging away at my little corner of Linux, while Butenko was somewhere out in the larger world, revolutionizing computer-mediated communications.

He died suddenly, from heart failure. Way too early, I think.

01 Sep 2018 4:17am GMT

31 Aug 2018

Kernel Planet

Linux Plumbers Conference: Android Microconference Accepted into 2018 Linux Plumbers Conference

Android continues to find interesting new applications and problems to solve, both within and outside the mobile arena. Mainlining continues to be an area of focus, as do a number of areas of core Android functionality, including the kernel. Other topics include low memory killer [1], dynamically-allocated Binder devices [2], kernel namespaces [3], EAS [4], userdata filesystem checkpointing and DT [5].

We hope to see you there!

[1] https://lwn.net/Articles/761118/

[2] https://developer.android.com/reference/android/os/Binder

[3] https://lwn.net/Articles/531114/

[4] https://lwn.net/Articles/749738/

[5] https://source.android.com/devices/architecture/dto/

31 Aug 2018 9:49pm GMT

30 Aug 2018

Kernel Planet

Pete Zaitcev: The shutdown of the project Hummingbird at Rackspace

Wait, wasn't this supposed to be our future?

The abridged history, as I recall it, was as follows. Mike Burton started the work to port Swift to Go in early 2016, inside the Swift tree. As such, it was a community effort. There was even a discussion at the OpenStack Technical Committee about allowing development in Go (the TC disallowed it, but later posted some conditions). At the end of the year, I managed to write an object with MIME and collapsed the stock Swift auditor (the one in Python). That was an impetus for PUT+POST, BTW. But in 2017, the RAX cabal - creiht, redbo, and later gholt - weren't very happy with trying to supplicate to the TC, as well as the massive overhead of developing in the established community, and went their own way. In addition to the TC shenanigans, the upstream Swift at SwiftStack and Red Hat needed a migration path. A Hummingbird without support for Erasure Coding was a non-starter, and the RAX group wasn't interested in accommodating that. By the end of 2017, they were completely on their own, and started going off the deep end by adding database servers and such. They managed to throw off some good ideas about what the next-generation replication ought to look like. But by cutting themselves off from Swift they committed to re-capturing the lightning in the bottle anew, and they just could not pull it off.

On reflection, I suspect their chances would have been better if they had been serious about interoperating with Swift. The performance gains that they demonstrated were quite impressive. But their paymasters at RAX weren't into this community development and open-source toys (note that RAX went through a change of ownership while Hummingbird was going on).

I think a port of Swift to Go is still on the agenda, but it's unknown at this point if it's going to happen.

30 Aug 2018 4:38am GMT

29 Aug 2018

Kernel Planet

Linux Plumbers Conference: Power Management and Energy-awareness Microconference Accepted into 2018 Linux Plumbers Conference

Use of Linux on battery-powered systems continues to grow, and general energy-efficiency concerns are not going away any time soon. The Power Management and Energy-awareness micro-conference therefore continues a Linux Plumbers Conference tradition of looking into ways to improve energy efficiency.

Significant progress has been made over the last year on multiple fronts, including (but not limited to) enhancements to the scheduler's load-tracking facility, which now better accounts for the time taken by realtime processes, deadline processes, and interrupt handling in order to improve CPU performance scaling; the work on implementing energy-aware scheduling on asymmetric systems in the kernel (https://lwn.net/Articles/749900/); and the process utilization clamping patch series (https://lwn.net/Articles/762043/). Even so, there still are open issues to be discussed and new ideas to consider. This year, the focus is on energy-optimized task scheduling, user space interfaces for passing power/performance hints to the kernel, platform power management mechanisms, and power management frameworks.

Specific topics include energy-aware scheduling, per-task and per-cgroup performance hints, timer granularity issues in the runtime PM framework, generic power domains (genpd) framework enhancements, firmware-based and direct control of low-level power management features of computing platforms, a proposed on-chip interconnect API, and improving selection of CPU idle states.

We hope to see you there!

29 Aug 2018 3:40pm GMT

27 Aug 2018

Kernel Planet

Daniel Vetter: Why no 2D Userspace API in DRM?

The DRM (direct rendering manager, not the content protection stuff) graphics subsystem in the linux kernel does not have a generic 2D acceleration API, despite an awful lot of GPUs having more or less featureful blitter units. And many systems need them for a lot of use-cases, because the 3D engine is a bit too slow or too power hungry for just rendering desktops.

It's a FAQ why this doesn't exist and why it won't get added, so I figured I'd answer this once and for all.

Bit of nomenclature upfront: A 2D engine (or blitter) is a bit of hardware that can copy stuff with some knowledge of the 2D layout usually used for pixel buffers. Some blitters can also do more, like basic blending, converting color spaces or stretching/scaling. A 3D engine on the other hand is a fancy high-performance compute block, which runs small programs (called shaders) on a massively parallel architecture. Generally with huge memory bandwidth and a dedicated controller to feed this beast through an asynchronous command buffer. 3D engines happen to be really good at rendering the pixels for 3D action games, among other things.

There's no 2D Acceleration Standard

3D has it easy: There's OpenGL and Vulkan and DirectX that require a certain feature set. And huge market forces that make sure if you use these features like a game would, rendering is fast.

Aside: This means the 2D engine in a browser actually needs to work like a 3D action game, or the GPU will crawl. The impedance mismatch compared to traditional 2D rendering designs is huge.

On the 2D side there's no such thing: Every blitter engine is its own bespoke thing, with its own features, limitations and performance characteristics. There are also no standard benchmarks that would drive common performance characteristics - today blitters are needed mostly in small systems, with very specific use cases. Anything big enough to run more generic workloads will have a 3D rendering block anyway. These systems still have blitters, but mostly just to help move data in and out of VRAM for the 3D engine to consume.

Now the huge problem here is that you need to fill these gaps in various hardware 2D engines using CPU side software rendering. The crux with any 2D render design is that transferring buffers and data too often between the GPU and CPU will kill performance. Usually the cliff is so steep that pure CPU rendering using only software easily beats any simplistic 2D acceleration design.

The only way to fix this is to be really careful when moving data between the CPU and GPU for different rendering operations. Sticking to one side, even if it's a bit slower, tends to be an overall win. But these decisions highly depend upon the exact features and performance characteristics of your 2D engine. Putting a generic abstraction layer in the middle of this stack, where it's guaranteed to be if you make it a part of the kernel/userspace interface, will not result in actual acceleration.

So either you make your 2D rendering look like it's a 3D game, using 3D interfaces like OpenGL or Vulkan. Or you need a software stack that's bespoke to your use-case and the specific hardware you want to run on.

2D Acceleration is Really Hard

This is the primary reason really. If you don't believe that, look at all the tricks a browser employs to render CSS and HTML and text really fast, while still animating all that stuff smoothly. Yes, a web-browser is the pinnacle of current 2D acceleration tech, and you really need all the things in there for decent performance: Scene graphs, clever render culling, massive batching and huge amounts of pain to make sure you don't have to fall back to CPU-based software rendering at the wrong point in a rendering pipeline. Plus managing all kinds of assorted caches to balance reuse against running out of memory.

Unfortunately lots of people assume 2D must be a lot simpler than 3D rendering, and therefore they can design a 2D API that's fast enough for everyone. No one jumps in and suggests we'll have a generic 3D interface at the kernel level, because the lessons there are very clear:

There are a bunch of DRM drivers which have support for 2D render engines exposed to userspace. But they all use highly hardware-specific interfaces, fully streamlined for the specific engine. And they all require a decently sized chunk of driver code in userspace to translate from a generic API to the hardware formats. This is what DRM maintainers will recommend you to do, if you submit a patch to add a generic 2D acceleration API.

Exactly like a 3D driver.

If All Else Fails, There's Options

Now if you don't care about the last bit of performance, and your use-case is limited, and your blitter engine is limited, then there are already options:

You can take whatever pixel buffer you have, export it as a dma-buf, and then import it into some other subsystem which already has some kind of limited 2D acceleration support. Depending upon your blitter engine, that could be a v4l2 mem2m device, or for simpler things there's also dmaengine.

On top, the DRM subsystem does allow you to implement the traditional acceleration methods exposed by the fbdev subsystem, in case you have userspace that really insists on using these; it's not recommended for anything new.

What about KMS?

The above is kind of a lie, since the KMS (kernel modesetting) IOCTL userspace API is a fairly full-featured 2D rendering interface. The aim of course is to render different pixel buffers onto a screen. With the recently added writeback support, operations targeting memory are now possible. This could be used to expose a traditional blitter, if you only expose writeback support and no other outputs in your KMS driver.

There's a few downsides:

So altogether this isn't the high-speed 2D acceleration API you're looking for either. It is a valid alternative to the options above though, e.g. instead of a v4l2 mem2m device.

FAQ for the FAQ, or: OpenVG?

OpenVG isn't the standard you're looking for either. For one, it's a userspace API, like OpenGL. All the same reasons for not implementing a generic OpenGL interface at the kernel/userspace boundary apply to OpenVG, too.

Second, the Mesa3D userspace library did support OpenVG once. Didn't gain traction, got canned. Just because it calls itself a standard doesn't make it a widely adopted industry default. Unlike OpenGL/Vulkan/DirectX on the 3D side.

Thanks to Dave Airlie and Daniel Stone for reading and commenting on drafts of this text.

27 Aug 2018 12:00am GMT