20 Jan 2020

feedLXer Linux News

How to Protect Your Server with Fail2ban on CentOS 8

Fail2ban is a free, open-source, and widely used IPS (Intrusion Prevention System) application that can protect your server against brute-force password login attacks. In this post I explain how to install and configure the Fail2ban utility on CentOS 8. Also check out how you can protect your SSH and HTTP services against different kinds of attacks using Fail2ban.
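As a taste of what the configuration involves, a minimal sshd jail in /etc/fail2ban/jail.local might look like this (the values here are illustrative, not taken from the article):

```ini
[sshd]
# Ban a host for one hour after 5 failed logins within 10 minutes.
enabled  = true
port     = ssh
maxretry = 5
findtime = 600
bantime  = 3600
```

After editing, apply the change with `systemctl restart fail2ban` and inspect the jail with `fail2ban-client status sshd`.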

20 Jan 2020 3:19am GMT

feedKernel Planet

Paul E. Mc Kenney: The Old Man and His Macbook

I received a MacBook at the same time I received the smartphone. This was not my first encounter with a Mac; in fact, I long ago had the privilege of trying out a Lisa. I occasionally made use of the original Macintosh (perhaps most notably to prepare a resume when applying for a job at Sequent), and even briefly owned an iMac, purchased to run some educational software for my children. But that iMac was my last close contact with the Macintosh line, some 20 years before the MacBook: since then, I have used Windows and, more recently, Linux.

So how does the MacBook compare? Let's start with some positives:



There are of course some annoyances:



Overall impression? It is yet another laptop, with its own advantages, quirks, odd corners, and downsides. I can see how people who grew up on the MacBook and who use nothing else could grow to love it passionately. But switching back and forth between MacBook and Linux is a bit jarring, though of course MacBook and Linux have much more in common than did the five different systems I switched back and forth between in the late 1970s.

My current plan is to stick with it for a year (nine months left!), and decide where to go from there. I might continue sticking with it, or I might try moving to Linux. We will see!

20 Jan 2020 1:52am GMT

feedLXer Linux News

One open source chat tool to rule them all

Last year, I brought you 19 days of new (to you) productivity tools for 2019. This year, I'm taking a different approach: building an environment that will allow you to be more productive in the new year, using tools you may or may not already be using.

20 Jan 2020 1:07am GMT

19 Jan 2020

feedKernel Planet

Paul E. Mc Kenney: Other weighty matters

I used to be one of those disgusting people who could eat whatever he wanted, whenever he wanted, and as much as he wanted, without gaining weight.

In fact, towards the end of my teen years, I often grew very tired of eating. You see, what with all my running and growing, in order to maintain weight I had to eat until I felt nauseous. I would feel overstuffed for about 30 minutes and then I would feel fine for about two hours. Then I would be hungry again. In retrospect, perhaps I should have adopted hobbit-like eating habits, but then again, six meals a day does not mesh well with school and workplace schedules, to say nothing of family traditions.

Once I stopped growing in my early 20s, I was able to eat more normally. Nevertheless, I rarely felt full. In fact, on one of those rare occasions when I did profess a feeling of fullness, my friends not only demanded that I give it to them in writing, but also that I sign and date the resulting document. This document was rendered somewhat less than fully official due to its being written on a whiteboard.

And even by age 40, eating what most would consider to be a normal diet caused my weight to drop dramatically and abruptly.

However, my metabolism continued to slow down, and my body's ability to tolerate vigorous exercise waned as well. But these changes took place slowly, and so the number on the scale crept up almost imperceptibly.

However, life flowed quickly, and I failed to pay the scale as much attention as I should have. And so it was that the realization that something had to change took place in an airport in Florida. I was called out for a full-body search after going through one of the full-body scanners. A young man patted me down quite thoroughly, but wasn't able to find whatever it was that he was looking for. He called in a more experienced colleague, who quickly determined that what had appeared to be an explosive device under my shirt was instead an embarrassingly thick layer of body fat. And yes, I did take entirely too much satisfaction from the fact that he chose to dress down his less-experienced colleague, but I could no longer deny that I was a good 25-30 pounds overweight. And in the poor guy's defense, the energy content of that portion of my body fat really did rival that of a small bomb.

Let that be a lesson to you. If you refuse to take the hint from your bathroom scale, you might well find yourself instead taking it from the United States of America's Transportation Security Administration.

Accepting the fact that I was overweight was one thing. Actually doing something about it was quite another. You see, my body had become a card-carrying member of House Stark, complete with their slogan: "Winter is coming." And my body is wise in the ways of winter. It knows not only that winter is coming, but also that food will be hard to come by, especially given my slowing reflexes and decreasing agility. Now, my body has never actually seen such a winter, but countless generations of natural selection have hammered a deep and abiding anticipation of such winters into my very DNA. Furthermore, my body knows exactly one way to deal with such a winter, and that is to eat well while the eating is good.

However, I have thus far had the privilege of living in a time and place where the eating is always good and where winter never comes, at least not the fearsome winters that my body is fanatically motivated to prepare for.

This line of thought reminded me of a piece written long ago by the late Isaac Asimov, in which he suggested that we should stop eating before we feel full. (Shortly after writing this, an acquaintance is said to have pointed out that Asimov could stand to lose some weight, and Asimov is said to have reacted by re-reading his own writing and then successfully implementing its recommendation.) The fact that I now weighed in at more than 210 pounds provided additional motivation.

With much effort, I was able to lose more than ten pounds, but then my weight crept back up again. I was able to keep my weight to about 205, and there it remained for some time.

At least, there it remained until I lost more than ten pounds due to illness. I figured that since I had paid the price of the illness, I owed it to myself to take full advantage of the resulting weight loss. Over a period of some months, I managed to get down to 190 pounds, which was a great improvement over 210, but significantly heavier than my 180-pound target weight.

But my weight remained stubbornly fixed at about 190 for some months.

Then I remembered the control systems class I took decades ago and realized that my body and I comprised a control system designed to maintain my weight at 190. You see, my body wanted a good fifty pounds of fat to give me a good chance of surviving the food-free winter that it knew was coming. So, yes, I wanted my weight to be 180. But only when the scale read 190 or more would I panic and take drastic action, such as fasting for a day, inspired by several colleagues' lifestyle fasts. Below 190, I would eat normally, that is, I would completely give in to my body's insistence that I gain weight.

As usual, the solution was simple but difficult to implement. I "simply" slowly decreased my panic point from 190 downwards, one pound at a time.

One of the ways that my body convinces me to overeat is through feelings of anxiety. "If I miss this meal, bad things will happen!!!" However, it is more difficult for my body to convince me that missing a meal would be a disaster if I have recently fasted. Therefore, fasting turned out to be an important component of my weight-loss regimen. A fast might mean just skipping breakfast, it might mean skipping both breakfast and lunch, or it might be a 24-hour fast. But note that a 24-hour fast skips first dinner, then breakfast, and finally lunch. Skipping breakfast, lunch, and then dinner results in more than 30 hours of fasting, which seems a bit excessive.

Of course, my body is also skilled at exploiting any opportunity for impulse eating, and I must confess that I do not yet consistently beat it at this game.

Exercise continues to be important, but it also introduces some complications. You see, exercise is inherently damaging to muscles. The strengthening effects of exercise are not due to the exercise itself, but rather to the body's efforts to repair the damage and then some. Therefore, in the 24 hours or so after exercise, my muscles suffer significant inflammation due to this damage, which results in a pound or two of added water weight (but note that everyone's body is different, so your mileage may vary). My body is not stupid, and so it quickly figured out that one of the consequences of a heavy workout was reduced rations the next day. It therefore produced all sorts of reasons why a heavy workout would be a bad idea, and with a significant rate of success.

So I allow myself an extra pound the day after a heavy workout. This way my body enjoys the exercise and gets to indulge the following day. Win-win! ;-)

There are also some foods that result in added water weight, with corned beef, ham, and bacon being prominent among them. The amount of water weight seems to vary based on I know not what, but sometimes ranges up to three pounds. I have not yet worked out exactly what to do about this, but one strategy might be to eat these types of food only on the day of a heavy workout. Another strategy would be to avoid them completely, but that is crazy talk, especially in the case of bacon.

So after two years, I have gotten down to 180, and stayed there for several months. What does the future hold?

Sadly, it is hard to say. In my case it appears that something like 90% of the effort required to lose weight is required to keep that weight off. So if you really do want to know what the future holds, all I can say is "Ask me in the future."

But the difficulty of keeping weight off should come as no surprise.

After all, my body is still acutely aware that winter is coming!

19 Jan 2020 11:53pm GMT

feedLXer Linux News

MariaDB X4 brings smart transactions to open source database

Open source database vendor MariaDB updates its flagship platform with new features to enable the convergence of transactional and analytical databases.

19 Jan 2020 10:56pm GMT

Cloud-based test farm lets you check out edge AI software on Linux dev boards

FØCAL is a profiling and automated test farm platform based on Docker and LTTng for testing Linux edge AI software on the BeagleBone, Raspberry Pi, Jetson, Up Squared, and Google Coral. A venture-backed startup called FØCAL has launched a cloud-based test farm of the same name designed for hardware/software codesign of Linux-based edge AI and […]

19 Jan 2020 8:44pm GMT

Open source fights cancer, Tesla adopts Coreboot, Uber and Lyft release open source machine learning

In this edition of our open source news roundup, we take a look at machine learning tools from Uber and Lyft, open source software to fight cancer, saving students money with open textbooks, and more!

19 Jan 2020 6:33pm GMT

Bash if..else Statement

Conditional statements are among the most fundamental concepts of any programming language. The Bash if statement, if..else statement, if..elif..else statement, and nested if statements are used to execute code based on a certain condition.
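For instance, a minimal sketch covering these forms (the number and thresholds are arbitrary):

```shell
#!/usr/bin/env bash
# if / elif / else: the first condition that succeeds wins.
num=7

if (( num > 10 )); then
    echo "greater than 10"
elif (( num > 5 )); then
    echo "greater than 5"      # this branch runs for num=7
else
    echo "5 or less"
fi

# A nested if:
if (( num > 0 )); then
    if (( num % 2 == 1 )); then
        echo "positive and odd"
    fi
fi
```

For num=7 this prints "greater than 5" followed by "positive and odd".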

19 Jan 2020 4:21pm GMT

How To Synchronize Files And Directories Using Zaloha.sh

Zaloha.sh is a directory synchronizer that stands out for its small size and simplicity. It is actually a Bash script consisting of only one file, Zaloha.sh, whose size is 125 kB. Approximately half of that file is documentation; the other half is program code. Zaloha.sh works out of the box. This tutorial explains how to synchronize files and directories on Linux using the Zaloha shell script.

19 Jan 2020 2:10pm GMT

Firefox 73 Enters Development with New Default Zoom Settings, Improved Audio

With the Firefox 72 release hitting the stable update channel last week, Mozilla kicked off the development of the next version of its popular, open-source and cross-platform web browser, Firefox 73.

19 Jan 2020 11:58am GMT

4 core skills to level-up your tech career in 2020

We do a lot to level-up our careers. We learn new programming languages; we take on new projects at work; we work on side projects on the weekend; we contribute to open source communities. What if I were to tell you that, while these activities are helpful, there is one set of skills you should focus on if you truly want to advance your career?

19 Jan 2020 9:47am GMT

Organize your email with Notmuch

Last year, I brought you 19 days of new (to you) productivity tools for 2019. This year, I'm taking a different approach: building an environment that will allow you to be more productive in the new year, using tools you may or may not already be using.

19 Jan 2020 7:35am GMT

6 handy Bash scripts for Git

I wrote a bunch of Bash scripts that make my life easier when I'm working with Git repositories. Many of my colleagues say there's no need; that everything I need to do can be done with Git commands. While that may be true, I find the scripts infinitely more convenient than trying to figure out the appropriate Git command to do what I want.

19 Jan 2020 5:24am GMT

How To Limit User's Access To The Linux System

Using a restricted shell, we can easily limit a user's access to the Linux system. Once you put users in restricted shell mode, they are allowed to execute only a limited set of commands.
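A quick way to see the effect is bash's own restricted mode; assigning rbash as a login shell works the same way (the account name below is a placeholder, not from the article):

```shell
# In restricted mode, cd, redirecting output, modifying PATH, and
# running commands containing slashes are all forbidden.
bash --restricted -c 'cd /tmp'        # fails: bash: cd: restricted

# To restrict an actual account, set its login shell to rbash:
#   sudo usermod -s /bin/rbash alice
```

Note that restricted shells are easily escaped unless the user's PATH is also locked down to a directory of approved commands.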

19 Jan 2020 3:12am GMT

How and why to use Creative Commons licensed work

Creative Commons (CC) is a series of copyright licenses that make it easy for creators to share their work and adapt the work of others. Just because something is online doesn't mean you are free to use it however you like. How do I know if a work has a CC license? If you don't see a Creative Commons license on the work or the creator doesn't tell you their work is free to use, you cannot use it. There are three ways to know if a work has a Creative Commons license:

19 Jan 2020 1:00am GMT

18 Jan 2020

feedLXer Linux News

Keep a journal of your activities with this Python program

Last year, I brought you 19 days of new (to you) productivity tools for 2019. This year, I'm taking a different approach: building an environment that will allow you to be more productive in the new year, using tools you may or may not already be using.

18 Jan 2020 10:49pm GMT

Id command in Linux

id is a command-line utility that prints the real and effective user and group IDs.
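A few typical invocations (the example output is from a hypothetical account):

```shell
id            # e.g. uid=1000(alice) gid=1000(alice) groups=1000(alice),27(sudo)
id -u         # effective user ID only
id -un        # effective user name
id -u root    # IDs of another account; root's UID is always 0
id -G root    # all group IDs a given user belongs to
```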

18 Jan 2020 8:37pm GMT

PinePhone, the $149 Linux Phone, Has Started Shipping for the Brave of Heart

The long-anticipated PinePhone Linux-powered smartphone has finally started shipping to customers who were brave enough to purchase the first batch.

18 Jan 2020 6:26pm GMT

14 Jan 2020

feedKernel Planet

Pete Zaitcev: Nobody is Google

A little while ago I went to a talk by Michael Verhulst of Terminal Labs. He has made a career of descending into DevOps disaster zones and setting things right, and shared some observations. One of his aphorisms was:

Nobody Is Google - Not Even Google

If I understood him right, he meant to say that scalability costs money and businesses flounder on buying more scalability than they need. Or, people think they are Google, but they are not. Even inside Google, only a small fraction of services operate at Google scale (aka planetary scale).

Apparently this sort of thing happens quite often.

14 Jan 2020 8:05pm GMT

13 Jan 2020

feedKernel Planet

Linux Plumbers Conference: Happy New Year!

The new year is in full swing and so are the preparations for the Linux Plumbers Conference in 2020! Updates coming soon! Until then you can watch some videos.

13 Jan 2020 8:39pm GMT

08 Jan 2020

feedKernel Planet

Paul E. Mc Kenney: Parallel Programming: December 2019 Update

There is a new release of Is Parallel Programming Hard, And, If So, What Can You Do About It?.

This release features a number of formatting and build-system improvements by the indefatigable Akira Yokosawa. On the formatting side, we have listings automatically generated from source code, clever references, selective PDF hyperlink highlighting, and finally settling the old after-period one-space/two-space debate by mandating newline instead. On the build side, we have improved checks for incompatible packages, SyncTeX database file generation (instigated by Balbir Singh), better identification of PDFs, build notes for recent Fedora releases, fixes for some multiple-figure page issues, improved font handling, and a2ping workarounds for the ever-troublesome Ghostscript. In addition, the .bib file format was dragged kicking and screaming out of the 1980s, prompted by Stamatis Karnouskos. The new format is said to be more compatible with modern .bib-file tooling.

On the content side, the "Hardware and its Habits", "Tools of the Trade", "Locking", "Deferred Processing", "Data Structures", and "Formal Verification" chapters received some much needed attention, the latter by Akira, who also updated the "Shared-Variable Shenanigans" section based on a recent LWN article. SeongJae Park, Stamatis, and Zhang Kai fixed a large quantity of typos and addressed numerous other issues. There is now a full complement of top-level section epigraphs, and there are a few scalability results up to 420 CPUs, courtesy of a system provided by my new employer.

On the code side, there have been a number of bug fixes and updates from ACCESS_ONCE() to READ_ONCE() or WRITE_ONCE(), with significant contributions from Akira, Junchang Wang, and Slavomir Kaslev.

A full list of the changes since the previous release may be found here, and as always, git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/perfbook.git will be updated in real time.

08 Jan 2020 5:02am GMT

27 Dec 2019

feedKernel Planet

Paul E. Mc Kenney: Exit Libris, Take Two

Still the same number of bookshelves, and although I do have a smartphone, I have not yet succumbed to the ereader habit. So some books must go!

27 Dec 2019 7:04pm GMT

Matthew Garrett: Wifi deauthentication attacks and home security

I live in a large apartment complex (it's literally a city block big), so I spend a disproportionate amount of time walking down corridors. Recently one of my neighbours installed a Ring wireless doorbell. By default these are motion activated (and the process for disabling motion detection is far from obvious), and if the owner subscribes to an appropriate plan these recordings are stored in the cloud. I'm not super enthusiastic about the idea of having my conversations recorded while I'm walking past someone's door, so I decided to look into the security of these devices.

One visit to Amazon later and I had a refurbished Ring Video Doorbell 2™ sitting on my desk. Tearing it down revealed it uses a TI SoC that's optimised for this sort of application, linked to a DSP that presumably does stuff like motion detection. The device spends most of its time in a sleep state where it generates no network activity, so on any wakeup it has to reassociate with the wireless network and start streaming data.

So we have a device that's silent and undetectable until it starts recording you, which isn't a great place to start from. But fortunately wifi has a few, uh, interesting design choices that mean we can still do something. The first is that even on an encrypted network, the packet headers are unencrypted and contain the address of the access point and whichever device is communicating. This means that it's possible to just dump whatever traffic is floating past and build up a collection of device addresses. Address ranges are allocated by the IEEE, so it's possible to map the addresses you see to manufacturers and get some idea of what's actually on the network[1] even if you can't see what they're actually transmitting. The second is that various management frames aren't encrypted, and so can be faked even if you don't have the network credentials.

The most interesting one here is the deauthentication frame that access points can use to tell clients that they're no longer welcome. These can be sent for a variety of reasons, including resource exhaustion or authentication failure. And, by default, they're entirely unprotected. Anyone can inject such a frame into your network and cause clients to believe they're no longer authorised to use the network, at which point they'll have to go through a new authentication cycle - and while they're doing that, they're not able to send any other packets.

So, the attack is to simply monitor the network for any devices that fall into the address range you want to target, and then immediately start shooting deauthentication frames at them once you see one. I hacked airodump-ng to ignore all clients that didn't look like a Ring, and then pasted in code from aireplay-ng to send deauthentication packets once it saw one. The problem here is that wifi cards can only be tuned to one frequency at a time, so unless you know the channel your potential target is on, you need to keep jumping between frequencies while looking for a target - and that means a target can potentially shoot off a notification while you're looking at other frequencies.
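For reference, the unmodified aircrack-ng tools can reproduce the basic flow described above (this is an illustrative sketch, not the author's patched code; the interface name and MAC addresses are placeholders, and it requires a monitor-mode-capable card and authorization to test the network in question):

```shell
airmon-ng start wlan0                   # put the card into monitor mode (creates wlan0mon)
airodump-ng wlan0mon                    # survey: note the AP BSSID, client MAC, and channel
airodump-ng -c 6 --bssid AA:BB:CC:DD:EE:FF wlan0mon   # lock to the target's channel
# Send 10 batches of deauthentication frames at one client:
aireplay-ng --deauth 10 -a AA:BB:CC:DD:EE:FF -c 11:22:33:44:55:66 wlan0mon
```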

But even with that proviso, this seems to work reasonably reliably. I can hit the button on my Ring, see it show up in my hacked up code and see my phone receive no push notification. Even if it does get a notification, the doorbell is no longer accessible by the time I respond.

There's a couple of ways to avoid this attack. The first is to use 802.11w which protects management frames. A lot of hardware supports this, but it's generally disabled by default. The second is to just ignore deauthentication frames in the first place, which is a spec violation but also you're already building a device that exists to record strangers engaging in a range of legal activities so paying attention to social norms is clearly not a priority in any case.

Finally, none of this is even slightly new. A presentation from Def Con in 2016 covered this, demonstrating that Nest cameras could be blocked in the same way. The industry doesn't seem to have learned from this.

[1] The Ring Video Doorbell 2 just uses addresses from TI's range rather than anything Ring specific, unfortunately


27 Dec 2019 3:26am GMT

25 Dec 2019

feedKernel Planet

Paul E. Mc Kenney: The Old Man and His Smartphone, 2019 Holiday Season Episode

I used my smartphone as a camera very early on, but the need to log in made it less than attractive for snapshots. Except that I saw some of my colleagues whip out their smartphones and immediately take photos. They kindly let me in on the secret: Double-clicking the power button puts the phone directly into camera mode. This resulted in a substantial uptick in my smartphone-as-camera usage. And the camera is astonishingly good by decade-old digital-camera standards, to say nothing of old-school 35mm film standards.

I also learned how to make the camera refrain from mirror-imaging selfies, but this proved hard to get right. The selfie looks wrong when immediately viewed if it is not mirror imaged! I eventually positioned myself to include some text in the selfie in order to reliably verify proper orientation.

Those who know me will be amused to hear that I printed a map the other day, just from force of habit. But in the event, I forgot to bring not only both the map and the smartphone, but also the presents that I was supposed to be transporting. In pleasant contrast to a memorable prior year, I remembered the presents before crossing the Columbia, which was (sort of) in time to return home to fetch them. I didn't bother with either the map or the smartphone, but reached my destination nevertheless. Cautionary tales notwithstanding, sometimes you just have to trust the old neural net's direction-finding capabilities. (Or at least that is what I keep telling myself!)

I also joined the non-exclusive group who uses a smartphone to photograph whiteboards prior to erasing them. I still have not succumbed to the food-photography habit, though. Taking a selfie with the non-selfie lens through a mirror is possible, but surprisingly challenging.

I have done a bit of ride-sharing, and the location-sharing features are quite helpful when meeting someone: no need to agree on a unique landmark, only to find out the hard way that said landmark is not all that unique!

The smartphone is surprisingly useful for browsing the web while on the go, with any annoyances over the small format heavily outweighed by the ability to start and stop browsing very quickly. But I could not help but feel a pang of jealousy while watching a better equipped smartphone user type using swiping motions rather than a finger-at-a-time approach. Of course, I could not help but try it. Imagine my delight to learn that the swiping-motion approach was not some add-on extra, but instead standard! Swiping typing is not a replacement for a full-sized keyboard, but it is a huge improvement over finger-at-a-time typing, to say nothing of my old multi-press flip phone.

Recent foreign travel required careful prioritization and scheduling of my sole international power adapter among the three devices needing it. But my new USB-A-to-USB-C adapter allows me to charge my smartphone from my heavy-duty rcutorture-capable ThinkPad, albeit significantly more slowly than via AC adapter, and even more slowly when the laptop is powered off. Especially when I am actively using the smartphone. To my surprise, I can also charge my MacBook from my ThinkPad using this same adapter, but only when the MacBook is powered off. If the MacBook is running, all this does is extend the MacBook's battery life. Which admittedly might still be quite useful.

All in all, it looks like I can get by with just the one international AC adapter. This is a good thing, especially considering how bulky those things are!

My smartphone's notifications are still a bit annoying, though I have gotten it a bit better trained to only bother me when it is important. And yes, I have missed a few important notifications!!! When using my laptop, which also receives all these notifications, my de facto strategy has been to completely ignore the smartphone. Which more than once has had the unintended consequence of completely draining my smartphone's battery. The first time this happened was quite disconcerting because it appeared that I had bricked my new smartphone. Thankfully, a quick web search turned up the unintuitive trick of simultaneously depressing the volume-down and power buttons for ten seconds.

But if things go as they usually do, this two-button salute will soon become all too natural!

25 Dec 2019 12:11am GMT

23 Dec 2019

feedKernel Planet

Paul E. Mc Kenney: Weight-Training Followup

A couple of years ago I posted on my adventures with weightlifting, along with the dangers that disorganized and imbalanced weight-lifting regimes posed to the simple act of coughing. A colleague pointed me to the book "Getting Stronger" by Bill Pearl. I was quite impressed by the depth and breadth of this book's coverage of weightlifting, including even a section entitled "Training Program for Those over 50". Needless to say, I started with that section.

Any new regime will entail some discomfort, but I was encouraged by the fact that the discomfort caused by this book's "Over 50 Program" coincided with the muscles that had objected so strenuously during my coughing fits. I continued this program until it became reasonably natural, and then progressed through the book's three General Conditioning programs. Most recently I have been alternating three sets of the third program with one set of either of the first two programs, with between one and two weight workouts per week. I fill in with stationary bicycle, elliptical trainer, or rowing machine for between three and five workouts per week.

I did note that my strength was sometimes inexplicably and wildly inconsistent. On one exercise, where you kneel and pull down to lift a weight, I found that sometimes 160 lbs was quite easy, while other times 100 lbs was extremely difficult. I could not correlate this variation in strength with anything in particular. At least not until I made a closer examination of the weight machines. It turned out that of the six stations, four provided a two-to-one mechanical advantage, so that when having randomly selected one of those four, I only needed to exert 80 lbs of force to lift the 160 lbs. Mystery solved! In another memorable instance, I was having great difficulty lifting the usual weight, then noticed that I had mistakenly attached to the barbell a pair of 35 lb weights instead of my customary pair of 25s. In another case where an unexpected doubling of strength surprised me, I learned that the weights were in pounds despite being in a country that has long used the metric system. Which is at least quite a bit less painful than making the opposite mistake! All in all, my advice to you is to check weight machines carefully and also to first use small weights so as to double-check the system of measurement.

I initially ignored the book's section on stretching, relying instead on almost five decades of my own experience with stretching. As you might guess, this proved to be a mistake: decades of experience stretching for running does not necessarily carry over to weight lifting. Therefore, as is embarrassingly often the case, I advise you to do what I say rather than what I actually did. So do yourself a favor and do the book's recommended set of stretches.

One current challenge is calf cramps while sleeping. These are thankfully nowhere near as severe as those of my late teens, in which I would sometimes wake up flying through the air with one or the other of my legs stubbornly folded double at the knee. Back then, I learned to avoid these by getting sufficient sodium and potassium, by massaging my plantar fascia (for example, by rolling it back and forth over a golf ball), by (carefully!!!) massaging the space between my Achilles tendon and tibia/fibula, and of course by stretching my calf.

Fortunately, my current bouts of calf cramps can be headed off simply by rotating my feet around my ankle, so that my big toe makes a circle in the air. This motion is of course not possible if the cramp has already taken hold, but rocking my foot side to side brought immediate relief on the last bout, which is much less painful than my traditional approach that involves copious quantities of brute force and awkwardness. Here is hoping that this technique continues to be effective!

23 Dec 2019 1:19am GMT

17 Dec 2019

feedKernel Planet

Pete Zaitcev: Fedora versus Lulzbot

I selected Lulzbot Mini as my 3D printer in large part because of the strong connection between the true open source and the company. It came with some baggage: not the cheapest, stuck in the world of 3mm filament, Cura generates mediocre support. But I could tolerate all that.

However, some things happened.

One, the maker of the printer, Aleph Objects, collapsed and sold itself to FAME 3D.

Two, Fedora shipped a completely, utterly busted Cura twice: both Fedora 30 and Fedora 31 came out with the package that just cannot be used.

I am getting a tiny bit concerned for the support for my Lulzbot going forward. Since printers expend things like cleaning strips, availability of parts is a big problem. And I cannot live with software being unusable. Last time it happened, Spot fixed the bug pretty quickly, in a matter of days. So, only a tiny bit concerned.

But that bit is enough to consider alternatives. Again, FLOSS-friendly is paramount. The rest is not that important, but I need it to work.

17 Dec 2019 9:34pm GMT

10 Dec 2019

feedKernel Planet

Pete Zaitcev: ARM Again in 2019 or 2020

I have been at this topic since NetWinder was a thing, but yet again I started looking at having an ARM box in the house. Arnd provided me with the following tips when I asked about the CuBox:

2GB of RAM [in the best CuBox] is not great, but doesn't stop you from building kernels. In this case, the CPU is going to be the real bottleneck.

Of course it's not SBSA, since this is an embedded 32 bit SoC. Almost no 64 bit machines are SBSA either (most big server chips claim compliance but are lacking somewhat), but generally nobody actually cares.

HoneyComb LX2K is probably what you want if you can spare the change and wait for the last missing bug fixes.

The RK3399 chip is probably a good compromise between performance and cost, there are various boards with 4GB of RAM and an m.2 slot for storage, around $100.

Why does it have to be like this? All I wanted was to make usbmon work on ARM.

The assertion that nobody cares about SBSA is rather interesting. Obviously, nobody in the embedded area does. They just fork Linux, clone a bootloader, flash it and ship, and then your refrigerator sends spam and your TV is used to attack your printer, while they move on to the next IoT product. But I do care. I want to download Fedora and run it, like I can on x86. Is that too much to ask?

The HoneyComb LX2K is made by the same company as CuBox, but the naked board costs $750.

10 Dec 2019 9:17pm GMT

Daniel Vetter: Upstream Graphics: Too Little, Too Late

Unlike the tradition of my past few talks at Linux Plumbers or Kernel conferences, this time around in Lisboa I did not start out with a rant proposing to change everything. Instead I celebrated roughly 10 years of upstream graphics progress and finally achieving paradise. But that was all just prelude to a few bait-and-switches that later fulfilled expectations on what's broken this time around in upstream, totally, and what needs to be fixed and changed, maybe.

The LPC video recording is now released, slides are uploaded. If neither of that is to your taste, read below the break for the written summary.

Mission Accomplished

10 or so years ago upstream graphics was essentially a proof of concept for the promise to come. Kernel display modesetting had just landed, finally bringing a somewhat modern display driver userspace API to Linux. And GEM, the graphics execution manager, landed, bringing proper GPU memory management and multi-client rendering. Realistically a lot still needed to be done, from rendering drivers for all the various SoCs, to an atomic display API that can expose all the features, not just what was needed to light up a Linux desktop back in the day. And lots of work to improve the codebase and make it much easier and quicker to write drivers.

There's obviously still a lot to do, but I think we've achieved that - for full details, check out my ELCE talk about everything great for upstream graphics.

Now despite all this justified celebrating, there is one sticking point still:

NVIDIA

The trouble with team green from an open source perspective - for them it's a great boon - is that they own the GPU software stack in two crucial ways:

Together these create a huge software moat around the high margin hardware business. All an open stack would achieve is filling in that moat and inviting competition to eat the nice surplus. In other words, stupid to even attempt, vendor lock-in just pays too well.

Now of course the reverse engineered nouveau driver still exists. But if you have to pay for reverse engineering already, then you might as well go with someone else's hardware, since you're not going to get any of the CUDA/GL goodies.

And the business case for open source drivers indeed exists, so much so that even paying for reverse engineering a full stack is no problem. The result is a vibrant community of hardware vendors, customers, distros and consulting shops who pay the bills for all the open driver work that's being done. And in userspace even "upstream first" works - releases happen quickly and often enough, with a sufficiently smooth merge process that having a vendor tree is simply not needed. Plus customers' willingness to upgrade if necessary, because it's usually a well-contained component to enable new hardware support.

In short without a solid business case behind open graphics drivers, they're just not going to happen, viz. NVIDIA.

Not Shipping Upstream

Unfortunately the business case for "upstream first" on the kernel side is completely broken. Not for open source, and not for any fundamental reasons, but simply because the kernel moves too slowly, is too big, drivers aren't well contained enough and therefore customers will not or even cannot upgrade. For some hardware upstreaming early enough is possible, but graphics simply moves too fast: By the time the upstreamed driver is actually in shipping distros, it's already one hardware generation behind. And missing almost a year of tuning and performance improvements. Worse, it's not just new hardware, but also GL and Vulkan versions that won't work on older kernels due to missing features, fragmenting the ecosystem further.

This is entirely unlike the userspace side, where refactoring and code sharing in a cross-vendor shared upstream project actually pays off. Even in the short term.

There's a lot of approaches trying to paper over this rift with the linux kernel:

Also, there just isn't a single LTS kernel. Even upstream has multiple, plus every distro has their own flavour, plus customers love to grow their own variety trees too. Often they're not even coordinated on the same upstream release. The cheapest way to support this entire madness is to completely ignore upstream and just write your own subsystem. Or at least not use any of the helper libraries provided by kernel subsystems, completely defeating the supposed benefit of upstreaming code.

No matter the strategy, they all boil down to paying twice - if you want to upstream your code. And there's no added return for the doubled bill. In conclusion, upstream first needs a business case, like the open source graphics stack in general. And that business case is very much real, except for upstreaming, it's only real in userspace.

In the kernel, "upstream first" is a sham, at least for graphics drivers.

Thanks to Alex Deucher for reading and commenting on drafts of this text.

10 Dec 2019 12:00am GMT

03 Dec 2019

feedKernel Planet

Daniel Vetter: ELCE Lyon: Everything Great About Upstream Graphics

At ELC Europe in Lyon I held a nice little presentation about the state of upstream graphics drivers, and how absolutely awesome it all is. Of course with a big focus on SoC and embedded drivers. Slides and the video recording are available.

Key takeaways for the busy:

In other words, world domination is assured and progressing according to plan.

03 Dec 2019 12:00am GMT

27 Nov 2019

feedKernel Planet

Arnaldo Carvalho de Melo: sudo is fast again

A big hammer solution:

[root@quaco ~]# rpm -e fprintd fprintd-pam
[error] [/etc/nsswitch.conf] is not a symbolic link!
[error] [/etc/nsswitch.conf] was not created by authselect!
[error] Unexpected changes to the configuration were detected.
[error] Refusing to activate profile unless those changes are removed or overwrite is requested.
Unable to disable feature [17]: File exists
[root@quaco ~]#

The warnings are not that reassuring, trying to use authselect to check the config also doesn't bring extra confidence:

[root@quaco ~]# authselect check
[error] [/etc/nsswitch.conf] is not a symbolic link!
[error] [/etc/nsswitch.conf] was not created by authselect!
Current configuration is not valid. It was probably modified outside authselect.

fprintd is still referenced in the config files:

[root@quaco ~]# grep fprintd /etc/pam.d/system-auth
auth sufficient pam_fprintd.so
[root@quaco ~]#

But since it is not installed, I get my fast sudo again, back to work.
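For reference, what the package removal accomplishes at the PAM level can be sketched offline against a throwaway file (the system-auth fragment below is fabricated for illustration, not copied from a real system):

```shell
# Demonstrate, on a throwaway copy, what removing fprintd-pam amounts to:
# the pam_fprintd.so auth line disappears from the stack.
tmp=$(mktemp -d)
cat > "$tmp/system-auth" <<'EOF'
auth        required      pam_env.so
auth        sufficient    pam_fprintd.so
auth        sufficient    pam_unix.so try_first_pass
EOF
# Filter out the fingerprint module line (portable alternative to sed -i):
grep -v 'pam_fprintd\.so' "$tmp/system-auth" > "$tmp/system-auth.new"
mv "$tmp/system-auth.new" "$tmp/system-auth"
if grep -q pam_fprintd "$tmp/system-auth"; then
    result="fprintd still present"
else
    result="fprintd gone"
fi
echo "$result"
rm -rf "$tmp"
```

On a system whose configuration authselect still owns, `authselect disable-feature with-fingerprint` would presumably be the tidier route, avoiding the warnings above.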

27 Nov 2019 7:28pm GMT

Arnaldo Carvalho de Melo: What is ‘sudo su -‘ doing?

Out of the blue sudo started taking a long time to ask for my password, so I sleeptyped:

$ strace sudo su -

sudo: effective uid is not 0, is /usr/bin/sudo on a file system with the 'nosuid' option set or an NFS file system without root privileges?
$

Oops, perhaps it would be a good time for me to try using 'perf trace', so I tried:

perf trace --duration 5000 --call-graph=dwarf

To do system-wide syscall tracing looking for syscalls taking more than 5 seconds (the --duration value is in milliseconds) to complete, together with DWARF callchains.
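Spelled out in full, the hunt looked something like the invocation below (flag spellings per perf-trace(1); the PID list and the futex exclusion are illustrative placeholders, not the actual values used):

```shell
# Build up the perf-trace command line piece by piece; echoed rather than
# executed here, since running it requires root and debuginfo for DWARF
# unwinding.
cmd="perf trace --duration 5000 --call-graph=dwarf"
cmd="$cmd --filter-pids 2402,2421"   # exclude unrelated long-timeout processes
cmd="$cmd -e '!futex'"               # optionally drop long futex waits
echo "$cmd"
```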

And after tweaking that --duration parameter and using --filter-pids to exclude some long-timeout processes that seemed unrelated, and even without using '-e \!futex' to exclude some syscalls that take that long to complete but, again, looked unrelated to sudo being stuck, I got the clue I needed from this entry:


12345.846 (25024.785 ms): sudo/3571 poll(ufds: 0x7ffdcc4376a0, nfds: 1, timeout_msecs: 25000) = 0 (Timeout)
__GI___poll (inlined)
[0x30dec] (/usr/lib64/libdbus-1.so.3.19.11)
[0x2fab0] (/usr/lib64/libdbus-1.so.3.19.11)
[0x176cb] (/usr/lib64/libdbus-1.so.3.19.11)
[0x1809f] (/usr/lib64/libdbus-1.so.3.19.11)
[0x1518b] (/usr/lib64/libdbus-glib-1.so.2.3.4)
dbus_g_proxy_call (/usr/lib64/libdbus-glib-1.so.2.3.4)
pam_sm_authenticate (/usr/lib64/security/pam_fprintd.so)
[0x41f1] (/usr/lib64/libpam.so.0.84.2)
pam_authenticate (/usr/lib64/libpam.so.0.84.2)
[0xb703] (/usr/libexec/sudo/sudoers.so)
[0xa8f4] (/usr/libexec/sudo/sudoers.so)
[0xc754] (/usr/libexec/sudo/sudoers.so)
[0x24a83] (/usr/libexec/sudo/sudoers.so)
[0x1d759] (/usr/libexec/sudo/sudoers.so)
[0x6ef3] (/usr/bin/sudo)
__libc_start_main (/usr/lib64/libc-2.29.so)
[0x887d] (/usr/bin/sudo)

So it's about PAM: authentication using the fprintd module, with sudo polling on a timeout of 25000 msecs. No wonder that when I first tried with --failure, asking just for syscalls that returned some error, I wasn't getting anything…

Let's see what this thing is:

[root@quaco ~]# rpm -qf /usr/lib64/security/pam_fprintd.so
fprintd-pam-0.9.0-1.fc30.x86_64
[root@quaco ~]# rpm -q --qf "%{description}\n" fprintd-pam
PAM module that uses the fprintd D-Bus service for fingerprint
authentication.
[root@quaco ~]#

I don't recall enabling this, and from a quick look this T480s doesn't seem to have any fingerprint reader. Let's see how to disable this on this Fedora 30 system…

27 Nov 2019 7:15pm GMT

26 Nov 2019

feedKernel Planet

Pete Zaitcev: Greg K-H uses mutt

Greg threw a blog post that contains a number of interesting animations of his terminal sessions. Fascinating!

I do not use mutt because I often need to copy-paste, and I hate dealing with line breaks that the terminal inevitably foists upon me. So, many years ago I migrated to Sylpheed, and from there to Claws (because of the support of anti-aliased fonts).

Greg's approach differs in that it avoids the copy-paste problem by manipulating the bodies of messages programmatically. So, the patches travel between messages, files, and git without being printed to a terminal.

26 Nov 2019 7:04am GMT

11 Nov 2011

feedLinux Today

Tech Comics: "How to Live with Non-Geeks"

Datamation: Geeks must realize that non-geeks simply don't understand some very basics things.

11 Nov 2011 11:00pm GMT

How To Activate Screen Saver In Ubuntu 11.10

AddictiveTip: Ubuntu 11.10 does not come with a default screen saver, and even Gnome 3 provides nothing but a black screen when your system is idle.

11 Nov 2011 10:00pm GMT

XFCE: Your Lightweight, Speedy, Fully-Fledged Linux Desktop

MakeUseOf: As far as Linux goes, customization is king

11 Nov 2011 9:00pm GMT

Fedora Scholarship Recognizes Students for Their Contributions to Open Source Software

Red Hat: The Fedora Scholarship is awarded to one student each year to assist with the recipient's college or university education.

11 Nov 2011 8:00pm GMT

Digital Divide Persists Even as Broadband Adoption Grows

Datamation: New report from Dept. of Commerce shows that the 'have nots' - continue to have not when it comes to Internet.

11 Nov 2011 7:00pm GMT

Why GNOME refugees love Xfce

The Register: Thunar rather than later...

11 Nov 2011 6:00pm GMT

Everything should be open source, says WordPress founder

Between the Lines: "It's a bold statement, but it's the ethos that Mullenweg admirably stuck to, pointing out that sites like Wikipedia replaced Encyclopedia Britannica, and how far Android has gone for mobile."

11 Nov 2011 5:02pm GMT

The Computer I Need

LXer: "Before I had a cell phone I did not realize that I needed one. As of one week ago, I did not realize that I needed a tablet either but I can sense that it might be a similar experience."

11 Nov 2011 4:01pm GMT

GPL violations in Android: Same arguments, different day

IT World: "IP attorney Edward J. Naughton is repeating his arguments that Google's use of Linux kernel header files within Android may be in violation of the GNU General Public License (GPLv2), and tries to discredit Linus Torvalds' thoughts on the matter along the way."

11 Nov 2011 3:04pm GMT

No uTorrent for Linux by Year's End

Softpedia: "When asked why there's no uTorrent client version for Linux users out, BitTorrent Inc. said that the company has other priorities at the moment."

11 Nov 2011 2:01pm GMT

Keep an Eye on Your Server with phpSysInfo

Linux Magazine: "There are quite a few server monitoring solutions out there, but most of them are overkill for keeping an eye on a single personal server."

11 Nov 2011 1:03pm GMT

At long last, Mozilla Releases Lightning 1.0 Calendar

InternetNews: From the 'Date and Time' files:

11 Nov 2011 12:00pm GMT

Richard Stallman's Personal Ad

Editors' Note: You can't make this stuff up...

11 Nov 2011 10:00am GMT

Linux Top 5: Fedora 16 Aims for the Cloud

LinuxPlanet: There are many things to explore on the Linux Planet. This week, a new Fedora release provides plenty of items to examine. The new Fedora release isn't the only new open source release this week, as the Linux Planet welcomes new KDE and Firefox releases as well.

11 Nov 2011 9:00am GMT

Orion Editor Ships in Firefox 8

Planet Orion: Firefox 8 now includes the Orion code editor in its scratchpad feature.

11 Nov 2011 6:00am GMT