17 Sep 2019

feedFedora People

Fedora Community Blog: Fedocal and Nuancier are looking for new maintainers

Recently, the Community Platform Engineering (CPE) team announced that we need to focus on key areas and thus let some of our applications go. So we started Friday with Infra to find maintainers for some of those applications. Unfortunately, the first few sessions did not raise as much interest as we had hoped. As a result, we are still looking for new maintainers for Fedocal and Nuancier.

What will the responsibilities of the new maintainer be?

The new maintainer will be completely responsible for the code base and the Communishift instance:

In other words the application will belong completely to you.

What will you get as a reward?

Taking maintainership of an application is not without its rewards. When you choose to take one over, you will be rewarded with:

What role does CPE play in this?

This can look like plenty of work at first glance, but we are here to help you get started. The CPE team will provide you with guidance and help, and as part of Friday with Infra we will help you get everything set up and fix the most urgent issues. More information can be found on the Friday with Infra wiki.

Sounds interesting, where can I sign up?

If you think you are the right person for this work, send an email to Fedora Infrastructure mailing list or ask in #fedora-apps channel on Freenode. See you soon!

The post Fedocal and Nuancier are looking for new maintainers appeared first on Fedora Community Blog.

17 Sep 2019 6:53am GMT

Kushal Das: Permanent Record: the life of Edward Snowden

book cover

The personal life and thinking of the ordinary person who did an extraordinary thing.

A fantastic personal narrative of his life and thinking process. The book does not get into technical details, but it makes sure that people can relate to the different events it describes. It tells the story of a person who was born into the system, grew up to become part of the system, and then learned to question that same system.

I bought the book at midnight on Kindle (I also ordered the physical copies), slept for 3 hours in between, and finished it off in the morning. Anyone born in the 80s will find many similarities with their own childhood, whether it is the Commodore 64 as the first computer we saw or BASIC as the first programming language to try. The lucky ones also got Internet access and learned to roam around on their own and build their adventures over busy telephone lines (which often made family members unhappy).

If you are someone from the technology community, I don't think you will find Ed's life much different from yours. It has a different scenario and different key players, but you will be able to relate his progress in life to that of many other tech workers like ourselves.

Maybe you are reading the book just to learn what happened, or maybe you want to know why. Either way, I hope this book will help you think about the decisions you make in your life and how they affect the rest of the world, whether it is a group picture posted on Facebook or the next new tool written for the intelligence community.

Go ahead and read the book, and when you are finished, make sure you pass it along to a friend, or buy them their own copy. If you have some free time, consider running a Tor relay or a bridge; this simple step will help many around the world.
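If running a relay sounds interesting, the configuration involved is small. A minimal non-exit relay sketch (option names follow the standard torrc format; the nickname and port are placeholders you would choose yourself):

```
# /etc/tor/torrc -- minimal non-exit relay (values are placeholders)
Nickname MyRelayNickname
ORPort 9001
ExitRelay 0
SocksPort 0
```

ExitRelay 0 keeps the relay from passing traffic to the open Internet, and SocksPort 0 disables local client use, so the machine only forwards traffic inside the Tor network.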

On a side note, the book mentions the SecureDrop project at the very end, and SecureDrop 1.0.0 was also released today (the same day as the book).

17 Sep 2019 4:44am GMT

16 Sep 2019

feedFedora People

mythcat: Fedora 30 : Interactive learning and reinventing the wheel in programming.

Today I returned from an activity that prompted me to find a solution for displaying logos.
I found this GitHub repo, read it, and then turned it into a single script.
It took about an hour.

16 Sep 2019 7:49pm GMT

Stephen Smoogen: EPEL Bug: Bash errors on recent EL-8 systems.

Last week, I got asked about a problem with using EPEL-8 on Oracle Enterprise Linux 8 where trying to install packages failed due to a bad GPG key path. I duplicated the problem on RHEL-8, where it had not happened before some recent updates.


[smooge@localhost ~]$ repoquery
bash: repoquery: command not found...
Failed to search for file: Failed to download gpg key for repo 'epel': Curl error (37): Couldn't read a file:// file for file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-8.0 [Couldn't open file /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-8.0]

The problem seems to be that the EPEL release package uses the string $releasever for various short-cut strings. Take for example:


[epel-playground]
name=Extra Packages for Enterprise Linux $releasever - Playground - $basearch
#baseurl=https://download.fedoraproject.org/pub/epel/playground/$releasever/Everything/$basearch/os
metalink=https://mirrors.fedoraproject.org/metalink?repo=playground-epel$releasever&arch=$basearch&infra=$infra&content=$contentdir
failovermethod=priority
enabled=0
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-$releasever


The problem is that when I wrote new versions of the EPEL-8 repo file, I replaced the old key phrase gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 with gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-$releasever. On these systems, however, $releasever expands to 8.0 rather than 8, so the gpgkey line points at a file that does not exist. When I tested things with the dnf command it worked fine, but I didn't check where things like bash completion would show up.

Moving back to the format that EPEL-6 and EPEL-7 used fixes the problem, so I will be pushing an updated release file out this week. My apologies to anyone who saw the errors.
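For reference, the change amounts to hardcoding the key path again while the rest of the stanza keeps using $releasever. A sketch of the relevant lines from the repo file quoted above (not the complete file):

```ini
[epel-playground]
name=Extra Packages for Enterprise Linux $releasever - Playground - $basearch
metalink=https://mirrors.fedoraproject.org/metalink?repo=playground-epel$releasever&arch=$basearch&infra=$infra&content=$contentdir
enabled=0
gpgcheck=1
# hardcoded again, since $releasever can expand to "8.0" rather than "8"
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-8
```

The metalink is unaffected because the mirror service resolves both forms, while the gpgkey path must match an actual file shipped on disk.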

16 Sep 2019 5:47pm GMT

Karsten Hopp: Onboarding Fedora Infrastructure

I've been using and working on Fedora since FC-1, and I just recently joined the Infrastructure team.

My first task was getting up to speed with Ansible, as all of the team's repeating tasks are automated. Fortunately there are lots of tutorials and documents available on the web, as usual some of them outdated or incorrect, but where's the fun in having everything work out of the box? A good starting point is Ansible's own documentation, i.e. https://docs.ansible.com/ansible/latest/user_guide/intro_getting_started.html , but I've also read some German docs like https://jaxenter.de/multi-tier-deployment-mit-ansible-76731 or the very basic intro https://www.biteno.com/tutorial/ansible-tutorial-einfuehrung/

There are a couple of hundred playbooks in the infrastructure ansible git repository (https://infrastructure.fedoraproject.org/cgit/ansible.git/) with an inventory of almost 900 hosts. I ran into problems when I tried to fix https://pagure.io/fedora-infrastructure/issue/8156 , which is about cleaning up ansible_distribution conditionals. Many playbooks already check for ansible_distribution == "RedHat" or ansible_distribution == "Fedora", but as a newbie in Fedora Infrastructure I have no idea which distribution a certain host is currently running and whether a given playbook will ever be run for that host. Does adding checks for the distribution even make sense when the conditions may never become true?

What seems to be missing (at least I haven't found it) is a huge map of all our machines together with a description of the OS they are running and which playbooks apply to them. There is an ancient (3 year old) issue open (https://pagure.io/fedora-infrastructure/issue/5290) that already requests and implements part of this, but unfortunately there has been no progress for some time now.
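For context, the conditionals in question look something like this (an illustrative task I made up for this post, not one from the real repo; the package name is a placeholder):

```yaml
# Illustrative only: the kind of distribution conditional issue 8156 is about.
- name: install a package on RHEL/CentOS hosts only
  package:
    name: epel-release
    state: present
  when: ansible_distribution in ["RedHat", "CentOS"]
```

The question from the text is exactly whether a `when:` like this is ever true for the hosts a playbook actually targets.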

Update:

Looking at the nagios map of all our hosts, there already seem to be host groups 'RedHat', 'CentOS' and 'Fedora', so at least part of the required information is already available.

16 Sep 2019 2:02pm GMT

Fedora Magazine: Copying large files with Rsync, and some misconceptions

There is a notion that a lot of people working in the IT industry often copy and paste from internet howtos. We all do it, and the copy-and-paste itself is not a problem. The problem is when we run things without understanding them.

Some years ago, a friend who used to work on my team needed to copy virtual machine templates from site A to site B. They could not understand why the file they copied was 10GB on site A but became 100GB on site B.

The friend believed that rsync is a magic tool that should just "sync" the file as it is. However, what most of us forget is to understand what rsync really is, how it is used, and, most important in my opinion, where it comes from. This article provides some further information about rsync, and an explanation of what happened in that story.

About rsync

rsync is a tool created by Andrew Tridgell and Paul Mackerras, who were motivated by the following problem:

Imagine you have two files, file_A and file_B. You wish to update file_B to be the same as file_A. The obvious method is to copy file_A onto file_B.

Now imagine that the two files are on two different servers connected by a slow communications link, for example, a dial-up IP link. If file_A is large, copying it onto file_B will be slow, and sometimes not even possible. To make it more efficient, you could compress file_A before sending it, but that would usually only gain a factor of 2 to 4.

Now assume that file_A and file_B are quite similar, and to speed things up, you take advantage of this similarity. A common method is to send just the differences between file_A and file_B down the link and then use this list of differences to reconstruct the file on the remote end.

The problem is that the normal methods for creating a set of differences between two files rely on being able to read both files. Thus they require that both files are available beforehand at one end of the link. If they are not both available on the same machine, these algorithms cannot be used. (Once you have copied the file over, you no longer need the differences.) This is the problem that rsync addresses.

The rsync algorithm efficiently computes which parts of a source file match parts of an existing destination file. Matching parts then do not need to be sent across the link; all that is needed is a reference to the part of the destination file. Only parts of the source file which are not matching need to be sent over.

The receiver can then construct a copy of the source file using the references to parts of the existing destination file and the original material.

Additionally, the data sent to the receiver can be compressed using any of a range of common compression algorithms for further speed improvements.

The rsync algorithm addresses this problem in a lovely way as we all might know.

After this introduction to rsync, back to the story!

Problem 1: Thin provisioning

There were two things that would help the friend understand what was going on.

The problem with the file getting significantly bigger on the other side was caused by Thin Provisioning (TP) being enabled on the source system - a method of optimizing the efficiency of available space in Storage Area Networks (SAN) or Network Attached Storage (NAS).

The source file was only 10GB because of TP being enabled, and when transferred over using rsync without any additional configuration, the target destination received the full 100GB. rsync could not do the magic automatically; it had to be configured.

The flag that does this work is -S or --sparse, and it tells rsync to handle sparse files efficiently. And it will do what it says! It will send only the actual data, so source and destination will both have a 10GB file.
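To see the difference between apparent size and allocated size that makes this possible, here is a quick local experiment (a sketch; truncate, du and the file name are just for illustration):

```shell
# Create a file with a 100MB apparent size but (almost) no allocated blocks
truncate -s 100M sparse.img

# Apparent size vs. actual disk usage
du -h --apparent-size sparse.img   # reports 100M
du -h sparse.img                   # reports ~0 (only metadata is allocated)
```

Copying such a file without -S writes out all the holes as literal zeroes on the destination, which is exactly how a 10GB thin-provisioned image becomes 100GB.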

Problem 2: Updating files

The second problem appeared when sending over an updated file. The destination was now receiving just the 10GB, but the whole file (containing the virtual disk) was always transferred, even when just a single configuration file had changed on that virtual disk. In other words, only a small portion of the file changed.

The command used for this transfer was:

rsync -avS vmdk_file syncuser@host1:/destination

Again, understanding how rsync works would help with this problem as well.

The above is the biggest misconception about rsync. Many of us think rsync will simply send the delta updates of the files, and that it will automatically update only what needs to be updated. But this is not the default behaviour of rsync.

As the man page says, the default behaviour of rsync is to create a new copy of the file in the destination and to move it into the right place when the transfer is completed.

To change this default behaviour of rsync, you have to set the following flags and then rsync will send only the deltas:

--inplace               update destination files in-place
--partial               keep partially transferred files
--append                append data onto shorter files
--progress              show progress during transfer

So the full command that would do exactly what the friend wanted is:

rsync -av --partial --inplace --append --progress vmdk_file syncuser@host1:/destination

Note that the sparse flag -S had to be removed, for two reasons. The first is that you can not use --sparse and --inplace together when sending a file over the wire. And second, once you have sent a file over with --sparse, you can't update it with --inplace anymore. Note that versions of rsync older than 3.1.3 will reject the combination of --sparse and --inplace.

So even though the friend ended up copying 100GB over the wire, that only had to happen once. All the following updates copied only the difference, making the copies extremely efficient.

16 Sep 2019 8:00am GMT

Fedora Community Blog: Fedora 31 Gnome Test Day 2019-09-18


Wednesday, 2019-09-18 is the Fedora 31 Gnome Test Day! As part of the changes that Gnome 3.34 brings to Fedora 31, we need your help to test that everything runs smoothly!

Why Gnome Test Day?

We try to make sure that all the Gnome features are performing as they should, so this is a chance to see whether everything is working well enough and to catch any remaining issues. It's also pretty easy to join in: all you'll need is Fedora 31 (which you can grab from the wiki page).

We need your help!

All the instructions are on the wiki page, so please read through and come help us test! As always, the event will be in #fedora-test-day on Freenode IRC.

Share this!

Help promote the Test Day and share the article in your own circles! Use any of the buttons below to help spread the word.

The post Fedora 31 Gnome Test Day 2019-09-18 appeared first on Fedora Community Blog.

16 Sep 2019 7:26am GMT

Siddhesh Poyarekar: Wrestling with the register allocator: LuaJIT edition

For some time now, I kept running into one specific piece of code in luajit repeatedly for various reasons, and last month I came across a fascinating register allocator pitfall that I had never encountered before. But then as is the norm after fixing such bugs, I concluded that it's too trivial to write about and I was just stupid to not have found it sooner; all bugs are trivial once they're fixed.

The worst thing about solving tricky problems is deciding to write up about it and then halfway through the writeup, concluding that it was way too simple and I was just stupid to not solve it sooner.

- Siddhesh Poyarekar (@siddhesh_p) August 28, 2019

After getting over my imposter syndrome (yeah I know, took a while) I finally have the courage to write about it, so like a famous cook+musician says, enough jibberjabber…

Looping through a hash table

One of the key data structures that luajit uses to implement metatables is a hash table. Due to its extensive use, the lookup loop into such hash tables is optimised in the JIT using an architecture-specific asm_href function. Here is what a snippet of the arm64 version of the function looks like. A key thing to note is that the assembly code is generated backwards, i.e. the last instruction is emitted first.

  /* Key not found in chain: jump to exit (if merged) or load niltv. */
  l_end = emit_label(as);
  as->invmcp = NULL;
  if (merge == IR_NE)
    asm_guardcc(as, CC_AL);
  else if (destused)
    emit_loada(as, dest, niltvg(J2G(as->J)));

  /* Follow hash chain until the end. */
  l_loop = --as->mcp;
  emit_n(as, A64I_CMPx^A64I_K12^0, dest);
  emit_lso(as, A64I_LDRx, dest, dest, offsetof(Node, next));
  l_next = emit_label(as);

  /* Type and value comparison. */
  if (merge == IR_EQ)
    asm_guardcc(as, CC_EQ);
  else
    emit_cond_branch(as, CC_EQ, l_end);

  if (irt_isnum(kt)) {
    if (isk) {
      /* Assumes -0.0 is already canonicalized to +0.0. */
      if (k)
        emit_n(as, A64I_CMPx^k, tmp);
      else
        emit_nm(as, A64I_CMPx, key, tmp);
      emit_lso(as, A64I_LDRx, tmp, dest, offsetof(Node, key.u64));
    } else {
      Reg ftmp = ra_scratch(as, rset_exclude(RSET_FPR, key));
      emit_nm(as, A64I_FCMPd, key, ftmp);
      emit_dn(as, A64I_FMOV_D_R, (ftmp & 31), (tmp & 31));
      emit_cond_branch(as, CC_LO, l_next);
      emit_nm(as, A64I_CMPx | A64F_SH(A64SH_LSR, 32), tisnum, tmp);
      emit_lso(as, A64I_LDRx, tmp, dest, offsetof(Node, key.n));
    }
  } else if (irt_isaddr(kt)) {
    Reg scr;
    if (isk) {
      int64_t kk = ((int64_t)irt_toitype(irkey->t) << 47) | irkey[1].tv.u64;
      scr = ra_allock(as, kk, allow);
      emit_nm(as, A64I_CMPx, scr, tmp);
      emit_lso(as, A64I_LDRx, tmp, dest, offsetof(Node, key.u64));
    } else {
      scr = ra_scratch(as, allow);
      emit_nm(as, A64I_CMPx, tmp, scr);
      emit_lso(as, A64I_LDRx, scr, dest, offsetof(Node, key.u64));
    }
    rset_clear(allow, scr);
  } else {
    Reg type, scr;
    lua_assert(irt_ispri(kt) && !irt_isnil(kt));
    type = ra_allock(as, ~((int64_t)~irt_toitype(ir->t) << 47), allow);
    scr = ra_scratch(as, rset_clear(allow, type));
    rset_clear(allow, scr);
    emit_nm(as, A64I_CMPw, scr, type);
    emit_lso(as, A64I_LDRx, scr, dest, offsetof(Node, key));
  }

  *l_loop = A64I_BCC | A64F_S19(as->mcp - l_loop) | CC_NE;

Here, the emit_* functions emit assembly instructions and the ra_* functions allocate registers. In the normal case everything is fine and the table lookup code is concise and effective. When there is register pressure however, things get interesting.

As an example, here is what a typical type lookup would look like:

0x100   ldr x1, [x16, #52]
0x104   cmp x1, x2
0x108   beq -> exit
0x10c   ldr x16, [x16, #16]
0x110   cmp x16, #0
0x114   bne 0x100

Here, x16 is the table that the loop traverses. x1 is a key, which if it matches, results in an exit to the interpreter. Otherwise the loop moves ahead until the end of the table. The comparison is done with a constant stored in x2.

The value of x2 is loaded later (i.e. earlier in the code; we are emitting code backwards, remember?) whenever that register is needed for reuse, through a process called restoration or spilling. In the restore case, it is loaded into the register as a constant or expressed in terms of another constant (look up constant rematerialisation), and in the case of a spill, the register is restored from a slot on the stack. If there is no register pressure, all of this restoration happens at the head of the trace, which is why if you study a typical trace you will notice a lot of constant loads at the top of the trace.

Like the Spill that ruined your keyboard the other day…

Things get interesting when the allocation of x2 in the loop above results in a restore. Looking at the code a bit closer:

    type = ra_allock(as, ~((int64_t)~irt_toitype(ir->t) << 47), allow);
    scr = ra_scratch(as, rset_clear(allow, type));
    rset_clear(allow, scr);
    emit_nm(as, A64I_CMPw, scr, type);
    emit_lso(as, A64I_LDRx, scr, dest, offsetof(Node, key));

The x2 here is type, which is a constant. If a register is not available, we have to make one available by either rematerializing or by restoring the register, which would result in something like this:

0x100   ldr x1, [x16, #52]
0x104   cmp x1, x2
0x108   mov x2, #42
0x10c   beq -> exit
0x110   ldr x16, [x16, #16]
0x114   cmp x16, #0
0x118   bne 0x100

This ends up breaking the loop because the allocator restore/spill logic assumes that the code is linear and the restore will affect only code that follows it, i.e. code that got generated earlier. To fix this, all of the register allocations should be done before the loop code is generated.

Making things right

The result of this analysis was this fix in my LuaJIT fork that allocates registers for operands that will be used in the loop before generating the body of the loop. That is, if the registers have to spill, they will do so after the loop (we are generating code in reverse order) and leave the loop compact. The fix is also in the luajit2 repository in the OpenResty project. The work was sponsored by OpenResty as this wonderfully vague bug could only be produced by some very complex scripts that are part of the OpenResty product.

16 Sep 2019 12:20am GMT

Open Source Security Podcast: Episode 161 - Human nature and ad powered open source

Josh and Kurt start out discussing human nature and how it affects how we view security. A lot of things that look easy are actually really hard. We also talk about the npm library Standard showing command line ads. Are ads part of the future of open source?

<iframe allowfullscreen="" height="90" mozallowfullscreen="" msallowfullscreen="" oallowfullscreen="" scrolling="no" src="https://html5-player.libsyn.com/embed/episode/id/11260388/height/90/theme/custom/thumbnail/yes/direction/backward/render-playlist/no/custom-color/6e6a6a/" style="border: none;" webkitallowfullscreen="" width="100%"></iframe>

Show Notes


Comment on Twitter with the #osspodcast hashtag




16 Sep 2019 12:00am GMT

14 Sep 2019

feedFedora People

Matthew Garrett: It's time to talk about post-RMS Free Software

Richard Stallman has once again managed to demonstrate incredible insensitivity[1]. There's an argument that in a pure technical universe this is irrelevant and we should instead only consider what he does in free software[2], but free software isn't a purely technical topic - the GNU Manifesto is nakedly political, and while free software may result in better technical outcomes it is fundamentally focused on individual freedom and will compromise on technical excellence if otherwise the result would be any compromise on those freedoms. And in a political movement, there is no way that we can ignore the behaviour and beliefs of that movement's leader. Stallman is driving away our natural allies. It's inappropriate for him to continue as the figurehead for free software.

But I'm not calling for Stallman to be replaced. If the history of social movements has taught us anything, it's that tying a movement to a single individual is a recipe for disaster. The FSF needs a president, but there's no need for that person to be a leader - instead, we need to foster an environment where any member of the community can feel empowered to speak up about the importance of free software. A decentralised movement about returning freedoms to individuals can't also be about elevating a single individual to near-magical status. Heroes will always end up letting us down. We fix that by removing the need for heroes in the first place, not attempting to find increasingly perfect heroes.

Stallman was never going to save us. We need to take responsibility for saving ourselves. Let's talk about how we do that.

[1] There will doubtless be people who will leap to his defense with the assertion that he's neurodivergent and all of these cases are consequences of that.

(A) I am unaware of a formal diagnosis of that, and I am unqualified to make one myself. I suspect that basically everyone making that argument is similarly unqualified.
(B) I've spent a lot of time working with him to help him understand why various positions he holds are harmful. I've reached the conclusion that it's not that he's unable to understand, he's just unwilling to change his mind.

[2] This argument is, obviously, bullshit


14 Sep 2019 11:57am GMT

Richard W.M. Jones: Cascade – a turn-based text arcade game

cascade

I wrote this game about 20 years ago. Glad to see it still compiled out of the box on the latest Linux distro! Download it from here. If anyone can remember the name or any details of the original 1980s MS-DOS game that I copied the idea from, please let me know in the comments.

14 Sep 2019 9:00am GMT

Paul W. Frields: PulseCaster 0.9 released!

The post PulseCaster 0.9 released! appeared first on The Grand Fallacy.

It says… It says, uh… "Virgil Brigman back on the air".

The Abyss, 1989 (J. Cameron)

OK, I feel slightly guilty using a cheesy quote from James Cameron for this post. But not as guilty as I feel for leaving development of this tool hanging for so long.

That's right, there's a brand new release of PulseCaster available out there - 0.9 to be exact. There are multiple fixes and enhancements in this version.

(By the way… I don't have experience packaging for Debian or Ubuntu. If you're maintaining PulseCaster there and have questions, don't hesitate to get in touch. And thank you for helping make PulseCaster available for users!)

For starters, PulseCaster is now ported to Python 3. I used Python 3.6 and Python 3.7 to do the porting. Nothing in the code should be particular to either version, though. But you'll need to have Python 3 installed to use it, as most Linux distros do these days.

Another enhancement is that PulseCaster now relies on the excellent pulsectl library for Python, by George Filipkin and Mike Kazantsev. Hats off to them for doing a great job, which allowed me to remove many, many lines of code from this release.

Also, due to the use of PyGObject3 in this release, there are numerous improvements that make it easier for me to hack on. Silly issues with the GLib mainloop and other entrance/exit stupidity are hopefully a bit better now.

Also, the code for dealing with temporary files is now a bit less ugly. I still want to do more work on the overall design and interface, and have ideas. I've gotten way better at time management since the last series of releases and hope to do some of this over the USA holiday season this late fall and winter (but no promises).

A new release should be available in Fedora's Rawhide release by the time you read this, and within a few days in Fedora 31. Sorry, although I could bring back a Fedora 30 package, I'm hoping this will entice one or two folks to get on Fedora 31 sooner. So grab that when the Beta comes out and I'll see you there!

If you run into problems with the release, please file an issue on GitHub. I have fixed my mail filters so that I'll be more responsive to them in the future.


Photo by neil godding on Unsplash.

14 Sep 2019 2:57am GMT

13 Sep 2019

feedFedora People

Fedora Community Blog: FPgM report: 2019-37

Fedora Program Manager weekly report on Fedora Project development and progress

Here's your report of what has happened in Fedora Program Management this week. Fedora 31 Beta is go!

I have weekly office hours in #fedora-meeting-1. Drop by if you have any questions or comments about the schedule, Changes, elections, or anything else.

Announcements

Help wanted

Upcoming meetings & test days

Fedora 31

Schedule

Changes

Blocker bugs

Bug ID Blocker status Component Bug status
1749433 Accepted (final) gnome-control-center NEW
1750394 Accepted (final) gnome-control-center NEW
1749086 Accepted (final) kde MODIFIED
1751438 Accepted (final) LiveCD NEW
1728240 Accepted (final) sddm NEW
1750805 Proposed (final) gnome-control-center NEW
1751673 Proposed (final) gnome-session NEW
1749868 Proposed (final) gnome-software NEW
1747408 Proposed (final) libgit2 NEW
1750036 Proposed (final) selinux-policy ASSIGNED
1750345 Proposed (final) webkit2gtk3 NEW
1751852 Proposed (final) xfdesktop ASSIGNED

Fedora 32

Changes

Announced

Submitted to FESCo

Withdrawn

The post FPgM report: 2019-37 appeared first on Fedora Community Blog.

13 Sep 2019 8:41pm GMT

Richard Hughes: GNOME Firmware 3.34.0 Release

This morning I tagged the newest fwupd release, 1.3.1. There are a lot of new things in this release and a whole lot of polishing, so I encourage you to read the release notes if this kind of thing interests you.

Anyway, to the point of this post. With the new fwupd 1.3.1 you can now build just the libfwupd library, which makes it easy to build GNOME Firmware (old name: gnome-firmware-updater) on Flathub. I tagged the first official release 3.34.0 to celebrate the recent GNOME release, and to indicate that it's ready for use by end users. I guess it's important to note this is just a random app hacked together by 3 engineers and not something lovingly designed by the official design team. All UX mistakes are my own :)

GNOME Firmware is designed to be a not-installed-by-default power-user tool to investigate, upgrade, downgrade and re-install firmware.
GNOME Software will continue to be used for updates as before. Vendor helpdesks can ask users to install GNOME Firmware rather than getting them to look at command line output.

We need to polish up GNOME Firmware going forwards, and add the last few features we need. If this interests you, please send email and I'll explain what needs doing. We also need translations, although that can perhaps wait until GNOME Firmware moves to GNOME proper, rather than just being a repo in my personal GitLab. If anyone does want to translate it before then, please open merge requests, and be sure to file issues if any of the strings are difficult to translate or ambiguous. Please also file issues (or even better merge requests!) if it doesn't build or work for you.

If you just want to try out a new application, it takes 10 seconds to install it from Flathub.

13 Sep 2019 1:12pm GMT

Remi Collet: PHP version 7.2.23RC1 and 7.3.10RC1

Release Candidate versions are available in the testing repository for Fedora and Enterprise Linux (RHEL / CentOS) to allow more people to test them. They are available as Software Collections for parallel installation, the perfect solution for such tests (x86_64 only), and also as base packages.

RPM of PHP version 7.3.10RC1 are available as SCL in remi-test repository and as base packages in the remi-test repository for Fedora 30-31 or remi-php73-test repository for Fedora 29 and Enterprise Linux.

RPM of PHP version 7.2.23RC1 are available as SCL in remi-test repository and as base packages in the remi-test repository for Fedora 29 or remi-php72-test repository for Enterprise Linux.

PHP version 7.1 is now in security mode only, so no more RC will be released.

Installation: read the Repository configuration and choose your version.

Parallel installation of version 7.3 as Software Collection:

yum --enablerepo=remi-test install php73

Parallel installation of version 7.2 as Software Collection:

yum --enablerepo=remi-test install php72

Update of system version 7.3:

yum --enablerepo=remi-php73,remi-php73-test update php\*

or, the modular way (Fedora and RHEL 8):

dnf module enable php:remi-7.3
dnf --enablerepo=remi-modular-test update php\*

Update of system version 7.2:

yum --enablerepo=remi-php72,remi-php72-test update php\*

or, the modular way (Fedora and RHEL 8):

dnf module enable php:remi-7.2
dnf --enablerepo=remi-modular-test update php\*

Notice: version 7.3.10RC1 in Fedora rawhide for QA.

EL-7 packages are built using RHEL-7.6.

Packages of 7.4.0RC1 are also available.

RC version is usually the same as the final version (no change accepted after RC, exception for security fix).

Software Collections (php72, php73)

Base packages (php)

13 Sep 2019 8:10am GMT

Richard W.M. Jones: nbdkit supports exportnames

(You'll need the very latest version of libnbd and nbdkit from git for this to work.)

The NBD protocol lets the client send an export name string to the server. The idea is that a single server can serve different content to clients based on the requested export. nbdkit has largely ignored export names, but we recently added basic support upstream.

One consequence of this is you can now write a shell plugin which reflects the export name back to the client:

$ cat export.sh
#!/bin/bash -
case "$1" in
    open) echo "$3" ;;
    get_size) LC_ALL=C echo ${#2} ;;
    pread) echo "$2" | dd skip=$4 count=$3 iflag=skip_bytes,count_bytes ;;
    *) exit 2 ;;
esac
$ chmod +x export.sh
$ nbdkit -f sh export.sh

The size of the disk is the length of the export name:

$ nbdsh -u 'nbd://localhost/fooooo' -c 'print(h.get_size())'
6

The content of the disk is the export name itself:

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚ f o o o o o β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

Not very interesting in itself. But we can now pass the content of small disks entirely in the export name. Using a slightly more advanced plugin which supports base64-encoded export names (so we can pass in NUL bytes):

$ cat export-b64.sh
#!/bin/bash -
case "$1" in
    open) echo "$3" ;;
    get_size) echo "$2" | base64 -d | wc -c ;;
    pread) echo "$2" | base64 -d |
           dd skip=$4 count=$3 iflag=skip_bytes,count_bytes ;;
    can_write) exit 0 ;;
    pwrite) exit 1 ;;
    *) exit 2 ;;
esac
$ chmod +x export-b64.sh
$ nbdkit -f sh export-b64.sh

We can pass in an entire program to qemu:

qemu-system-x86_64 -fda 'nbd:localhost:10809:exportname=uBMAzRD8uACgjtiOwLQEo5D8McC5SH4x//OriwVAQKuIxJK4AByruJjmq7goFLsQJbELq4PAFpOr/sST4vUFjhWA/1x167+g1LEFuAQL6IUBg8c84vW+lvyAfAIgci3+xYD9N3SsrZetPCh0CzwgdQTGRP4o6F4Bgf5y/XXbiPAsAnLSNAGIwojG68qAdAIIRYPlB1JWVXUOtADNGjsWjPx09okWjPy+gPy5BACtPUABl3JD6BcBge9CAYoFLCByKVZXtAT25AHGrZfGBCC4CA7oAgFfXusfrQnAdC09APCXcxTo6ACBxz4BuAwMiXz+gL1AAQt1BTHAiUT+gD0cdQbHBpL8OAroxgDizL6S/K0IwHQMBAh1CLQc/g6R/HhKiUT+izzorgB1LrQCzRaoBHQCT0+oCHQCR0eoA3QNgz6A/AB1Bo1FCKOA/Jc9/uV0Bz0y53QCiQRdXlqLBID6AXYKBYACPYDUchvNIEhIcgODwARQ0eixoPbx/syA/JRYcgOAzhaJBAUGD5O5AwDkQDz8cg2/gvyDPQB0A6/i+Ikd6cL+GBg8JDx+/yQAgEIYEEiCAQC9234kPGbDADxa/6U8ZmYAAAAAAAAAAHICMcCJhUABq8NRV5xQu6R9LteTuQoA+Ij4iPzo4f/Q4+L1gcdsAlhAqAd14J1fWcNPVao='

(Spoiler)

13 Sep 2019 8:02am GMT