25 Mar 2017

Planet Debian

Dirk Eddelbuettel: RApiDatetime 0.0.2

Two days after the initial 0.0.1 release, a new version of RApiDatetime has just arrived on CRAN.

RApiDatetime provides six entry points for C-level functions of the R API for Date and Datetime calculations. The functions asPOSIXlt and asPOSIXct convert between long and compact datetime representation, formatPOSIXlt and Rstrptime convert to and from character strings, and POSIXlt2D and D2POSIXlt convert between Date and POSIXlt datetime. These six functions are all fairly essential and useful, but not one of them was previously exported by R.

Josh Ulrich took one hard look at the package -- and added the one line we needed to enable the Windows support that was missing in the initial release. We now build on all platforms supported by R and CRAN. Otherwise, I just added a NEWS file and called it a bugfix release.

Changes in RApiDatetime version 0.0.2 (2017-03-25)

  • Windows support has been added (Josh Ulrich in #1)

Changes in RApiDatetime version 0.0.1 (2017-03-23)

  • Initial release with six accessible functions

Courtesy of CRANberries, there is a comparison to the previous release. More information is on the rapidatetime page.

For questions or comments please use the issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

25 Mar 2017 10:38pm GMT

Bits from Debian: Debian Project Leader elections 2017

It's that time of year again for the Debian Project: the elections of its Project Leader!

The Project Leader position is described in the Debian Constitution.

Two Debian Developers are running this year to become Project Leader: Mehdi Dogguy, who has held the office for the last year, and Chris Lamb.

We are in the middle of the campaigning period that will last until the end of April 1st. The candidates and Debian contributors are already engaging in debates and discussions on the debian-vote mailing list.

The voting period starts on April 2nd, and during the following two weeks, Debian Developers can vote to choose the person who will guide the project for one year.

The results will be published on April 16th, with the term of the new Project Leader starting the following day.

25 Mar 2017 9:30pm GMT

Russ Allbery: Spring haul

Work has been hellishly busy lately, so that's pretty much all I've been doing. The major project I'm working on should be basically done in the next couple of weeks, though (fingers crossed), so maybe I'll be able to surface a bit more after that.

In the meantime, I'm still acquiring books I don't have time to read, since that's my life. In this case, two great Humble Book Bundles were too good of a bargain to pass up. There are a bunch of books in here that I already own in paperback (and hence showed up in previous haul posts), but I'm running low on shelf room, so some of those paper copies may go to the used bookstore to make more space.

Kelley Armstrong - Lost Souls (sff)
Clive Barker - Tortured Souls (horror)
Jim Butcher - Working for Bigfoot (sff collection)
Octavia E. Butler - Parable of the Sower (sff)
Octavia E. Butler - Parable of the Talents (sff)
Octavia E. Butler - Unexpected Stories (sff collection)
Octavia E. Butler - Wild Seed (sff)
Jacqueline Carey - One Hundred Ablutions (sff)
Richard Chizmar - A Long December (sff collection)
Jo Clayton - Skeen's Leap (sff)
Kate Elliott - Jaran (sff)
Harlan Ellison - Can & Can'tankerous (sff collection)
Diana Pharaoh Francis - Path of Fate (sff)
Mira Grant - Final Girls (sff)
Elizabeth Hand - Black Light (sff)
Elizabeth Hand - Saffron & Brimstone (sff collection)
Elizabeth Hand - Wylding Hall (sff)
Kevin Hearne - The Purloined Poodle (sff)
Nalo Hopkinson - Skin Folk (sff)
Katherine Kurtz - Camber of Culdi (sff)
Katherine Kurtz - Lammas Night (sff)
Joe R. Lansdale - Fender Lizards (mainstream)
Robert McCammon - The Border (sff)
Robin McKinley - Beauty (sff)
Robin McKinley - The Hero and the Crown (sff)
Robin McKinley - Sunshine (sff)
Tim Powers - Down and Out in Purgatory (sff)
Cherie Priest - Jacaranda (sff)
Alastair Reynolds - Deep Navigation (sff collection)
Pamela Sargent - The Shore of Women (sff)
John Scalzi - Miniatures (sff collection)
Lewis Shiner - Glimpses (sff)
Angie Thomas - The Hate U Give (mainstream)
Catherynne M. Valente - The Bread We Eat in Dreams (sff collection)
Connie Willis - The Winds of Marble Arch (sff collection)
M.K. Wren - Sword of the Lamb (sff)
M.K. Wren - Shadow of the Swan (sff)
M.K. Wren - House of the Wolf (sff)
Jane Yolen - Sister Light, Sister Dark (sff)

25 Mar 2017 9:21pm GMT

Eddy Petrișor: LVM: Converting root partition from linear to raid1 leads to boot failure... and how to recover

I have a system which has 3 distinct HDDs used as physical volumes for Linux LVM. One logical volume is the root partition and it was initially created as a linear LV (vg0/OS).
Since I have PV redundancy, I thought it might be a good idea to convert the root LV from linear to raid1 with 2 mirrors.

WARNING: It seems an LVM raid1 logical volume for / is not supported by grub2, at least not with Ubuntu's 2.02~beta2-9ubuntu1.6 (14.04LTS) or Debian Jessie's grub-pc 2.02~beta2-22+deb8u1!

So I did this:
lvconvert -m2 --type raid1 vg0/OS

Then I restarted to find myself at the 'grub rescue>' prompt.

The initial problem was seen on an Ubuntu 14.04 LTS (aka trusty) system, but I reproduced it on a VM with Debian Jessie.

I downloaded the Super Grub2 Disk and tried to boot the VM. After choosing the option to load the LVM and RAID support, I was able to boot my previous system.

I tried several times to reinstall GRUB, thinking that was the issue, but I always got this kind of error:


/usr/sbin/grub-probe: error: disk `lvmid/QtJiw0-wsDf-A2zh-2v2y-7JVA-NhPQ-TfjQlN/phCDlj-1XAM-VZnl-RzRy-g3kf-eeUB-dBcgmb' not found.

In the end, after digging for more than 4 hours for answers, I decided I might be able to revert the config to linear configuration, from the (initramfs) prompt.

Initially the LV was inactive, so I activated it:

lvchange -a y /dev/vg0/OS

Then restored the LV to linear:

lvconvert -m0 vg0/OS

Then I tried to reboot without reinstalling GRUB, just for kicks, which succeeded.

In order to confirm this was the issue, I redid the whole thing, and indeed, with a raid1 root, I always got the lvmid error.

I'll have to check on Monday at work if I can revert it the same way on the Ubuntu 14.04 system, but I suspect I will have no issues.


Is it true that root on lvm-raid1 is not supported?

25 Mar 2017 3:39pm GMT

Urvika Gola: Speaking at FOSSASIA’17 | Seasons of Debian : Summer of Code & Winter of Outreachy

I got an amazing chance to speak at FOSSASIA 2017, held in Singapore, on "Seasons of Debian - Summer of Code and Winter of Outreachy". I gave a combined talk with my co-speaker Pranav Jain, who contributed to Debian through GSoC. We talked about two major open source initiatives - Outreachy and Google Summer of Code - and the work we did on a common project, Lumicall, under Debian.


The excitement started even before the first day! On 16th March, there was a speakers meetup at the Microsoft office in Singapore. There, I got the chance to connect with other speakers and learn about their work. The meetup concluded with a Microsoft office tour! As a student, it was very exciting to see firsthand the office of a company that I had only dreamt of being at.

On 17th March, i.e. the first day of the three-day-long conference, I met Hong Phuc Dang, the founder of FOSSASIA. She is very kind, and talking to her made me cheerful!
Meeting so many great developers from different organizations was exciting.

18th March was the day of our talk! I was a bit nervous to speak in front of amazing developers, but that's how you grow 🙂 Our talk was preceded by a lovely introduction by Mario Behling.


I talked about how the Outreachy programme has made a significant impact in increasing the participation of women in open source, one such woman being me.

I also talked about the Android programming concepts which I used while adding new features to Lumicall. Pranav talked about the Debian organization and how to get started with GSoC by sharing his journey!

After our talk, students approached us with questions about how to participate in Outreachy and GSoC. I felt that a lot more students were receptive to knowing about this new opportunity.

Our own talk was part of the mini DebConf track. Under this track, there were two other amazing sessions, namely "Debian - The best Linux distribution" and "Open Build Service in Debian".

The experiences I gained from FOSSASIA were many and varied: I learned how to speak on a big stage, learned from other interesting talks, shared ideas with smart developers, and got to see an exciting venue and a wonderful city!

I would not have been able to experience this without the continuous support of Debian and Outreachy! 🙂


25 Mar 2017 9:10am GMT

24 Mar 2017

Planet Debian

Gunnar Wolf: Dear lazyweb: How would you visualize..?

Dear lazyweb,

I am trying to find a good way to present the categorization of several studied cases with a fitting graph. I am rating several vulnerabilities / failures according to James Cebula et al.'s paper, A Taxonomy of Operational Cyber Security Risks; this is a somewhat deep taxonomy, with 57 end items, organized in a three-level hierarchy. Copying a table from the cited paper:

My categorization is binary: I care only whether it falls within a given category or not. My first stab at this was to represent each case using a star or radar graph. As an example:

As you can see, to a "bare" star graph I added a background color for each top-level category (blue for actions of people, green for systems and technology failures, red for failed internal processes, and gray for external events), and printed out only the labels for the second-level categories; for an accurate reading of the graphs, you have to refer to the table and count bars. And, yes, according to the Engineering Statistics Handbook:

Star plots are helpful for small-to-moderate-sized multivariate data sets. Their primary weakness is that their effectiveness is limited to data sets with less than a few hundred points. After that, they tend to be overwhelming.

I strongly agree with the above statement - and stating that "a few hundred points" can be understood is an overstatement; 50 points are already too much. Now, trying to increase the usability of this graph, I came across the Sunburst diagram. One of the proponents of this diagram, John Stasko, has written quite a bit about it.

Now... How to create my beautiful Sunburst diagram? That's a tougher one. Even though the page I linked to in the (great!) Data visualization catalogue presents even some free-as-in-software tools to do this... They are Javascript projects that will render their beautiful plots (even including an animation)... To the browser. I need them for a static (i.e. to be printed) document. Yes, I can screenshot and all, but I want them to be automatically generated, so I can review and regenerate them all automatically. Oh, I could just write JSON and use SaaS sites such as Aculocity to do the heavy-lifting, but if you know me, you will understand why I don't want to.
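
For illustration: geometrically these plots are simple (nested annular sectors for the sunburst, nested rectangles for its cartesian cousin, which I'll get back to below), so any plotting library that can place labelled rectangles can emit a static version. A toy sketch of that idea, using Python and matplotlib, with made-up category names and a hand-computed layout:

import matplotlib.pyplot as plt

# Toy two-level hierarchy as (label, depth, start, span); the layout
# is precomputed by hand here, but in a real script start/span would
# be derived by counting the leaves under each node.
nodes = [
    ("actions of people", 0, 0, 4), ("inadvertent", 1, 0, 2),
    ("deliberate", 1, 2, 2),
    ("systems failures", 0, 4, 3), ("hardware", 1, 4, 1),
    ("software", 1, 5, 2),
]

fig, ax = plt.subplots()
for label, depth, start, span in nodes:
    # one rectangle per node: depth on the x axis, leaf range on the y axis
    ax.barh(y=start + span / 2, width=1, left=depth, height=span,
            edgecolor="white")
    ax.text(depth + 0.5, start + span / 2, label,
            ha="center", va="center", fontsize=8)
ax.set_axis_off()
fig.savefig("icicle.pdf")  # static output, fine for print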

So... I set out to find a Gunnar-approved way to display the information I need. Now, as the Protovis documentation says, an icicle is simply a sunburst transformed from polar to cartesian coordinates... But I came to a similar conclusion: The tools I found are not what I need. OK, but an icicle graph seems much simpler to produce - I fired up my Emacs, and started writing using Ruby, RMagick and RVG... I decided to try a different way. This is my result so far:

So... What do you think? Does this look right to you? Clearer than the previous one? Worse? Do you have any ideas on how I could make this better?

Oh... You want to tell me there is something odd about it? Well, yes, of course! I still need to tweak it quite a bit. Would you believe me if I told you this is not really a left-to-right icicle graph, but rather a strangely formatted Graphviz non-directed graph using the dot formatter?

I can assure you you don't want to look at my Graphviz sources... But in case you insist... Take them and laugh. Or cry. Of course, this file comes from a hand-crafted template, but has some autogenerated bits to it. I have still to tweak it quite a bit to correct several of its usability shortcomings, but at least it looks somewhat like what I want to achieve.

Anyway, I started out by making a "dear lazyweb" question. So, here it goes: Do you think I'm using the right visualization for my data? Do you have any better suggestions, either of a graph or of a graph-generating tool?

Thanks!

[update] Thanks for the first pointer, Lazyweb! I found a beautiful solution; we will see if it is what I need or not (it is too space-greedy to be readable... But I will check it out more thoroughly). It lays out much better than anything I can spew out by myself - Writing it as a mindmap using TikZ directly from within LaTeX, I get the following result:

24 Mar 2017 8:46pm GMT

Sylvain Beucler: Practical basics of reproducible builds

As GNU FreeDink upstream, I'd very much like to offer pre-built binaries: one (1) official, tested, current, distro-agnostic version of the game with its dependencies.
I'm actually already doing that for the Windows version.
One issue though: people have to trust me -- and my computer's integrity.
Reproducible builds could address that.
My release process is tightly controlled, but is my project reproducible? If not, what do I need? Let's check!

I quickly see that documentation is getting better, namely https://reproducible-builds.org/ :)
(The first docs I read on reproducibility looked more like a crazed date-o-phobic rant than an actual solution - plus now we have SOURCE_DATE_EPOCH implemented in gcc ;))

However I was left unsatisfied by the very high-level viewpoint and the lack of concrete examples.
The document points to various issues but is very vague about what tools are impacted.

So let's do some tests!


Let's start with a trivial program:

$ cat > hello.c
#include <stdio.h>
int main(void) {
    printf("Hello, world!\n");
}

OK, first does GCC compile this reproducibly?
I'm not sure because I heard of randomness in identifiers and such in the compilation process...

$ gcc-5 hello.c -o hello-5
$ md5sum hello-5
a00416d7392442321bad4afc5a461321  hello-5
$ gcc-5 hello.c -o hello-5
$ md5sum hello-5
a00416d7392442321bad4afc5a461321  hello-5

Cool, ELF compiler output is stable through time!
Now do 2 versions of GCC compile a hello world identically?

$ gcc-6 hello.c -o hello-6
$ md5sum hello-6
f7f52c2f5f82fe2a95061a771a6c5acd  hello-6
$ hexcompare hello-5 hello-6
[lots of red]
...

Well let's not get our hopes too high ;)
Trivial build options change?

$ gcc-6 hello.c -lc -o hello-6
$ gcc-6 -lc hello.c -o hello-6b
$ md5sum hello-6 hello-6b
f7f52c2f5f82fe2a95061a771a6c5acd  hello-6
f73ee6d8c3789fd8f899f5762025420e  hello-6b
$ hexcompare hello-6 hello-6b
[lots of red]
...

OK, let's be very careful with build options then. What about 2 different build paths?

$ cd ..
$ cp -a repro/ repro2/
$ cd repro2/
$ gcc-6 hello.c -o hello-6
$ md5sum hello-6
f7f52c2f5f82fe2a95061a771a6c5acd  hello-6

Basic compilation is stable across directories.
Now I tried recompiling FreeDink identically on 2 different git clones.
Disappointment:

$ md5sum freedink/native/src/freedink freedink2/native/src/freedink
839ccd9180c72343e23e5d9e2e65e237  freedink/native/src/freedink
6d5dc6aab321fab01b424ac44c568dcf  freedink2/native/src/freedink
$ hexcompare freedink2/native/src/freedink freedink/native/src/freedink
[lots of red]

Hmm, what about stripped versions?

$ strip freedink/native/src/freedink freedink2/native/src/freedink
$ md5sum freedink/native/src/freedink freedink2/native/src/freedink
415e96bb54456f3f2a759f404f18c711  freedink/native/src/freedink
e0702d798807c83d21f728106c9261ad  freedink2/native/src/freedink
$ hexcompare freedink/native/src/freedink freedink2/native/src/freedink
[1 single red spot]

OK, what's happening? diffoscope to the rescue:

$ diffoscope freedink/native/src/freedink freedink2/native/src/freedink
--- freedink/native/src/freedink
+++ freedink2/native/src/freedink
├── readelf --wide --notes {}
│ @@ -3,8 +3,8 @@
│    Owner                 Data size  Description
│    GNU                  0x00000010  NT_GNU_ABI_TAG (ABI version tag)
│      OS: Linux, ABI: 2.6.32
│  
│  Displaying notes found in: .note.gnu.build-id
│    Owner                 Data size  Description
│    GNU                  0x00000014  NT_GNU_BUILD_ID (unique build ID bitstring)-    Build ID: a689574d69072bb64b28ffb82547e126284713fa
│ +    Build ID: d7be191a61e84648a58c18e9c108b3f3ce500302

What on earth is this Build ID, and how is it computed?
After much digging, I find it's a 2008 plan with application in selecting matching detached debugging symbols.
https://fedoraproject.org/wiki/RolandMcGrath/BuildID is the most detailed overview/rationale I found.
It is supposed to be computed from parts of the binary. It's actually pretty resistant to changes, e.g. I could add the missing "return 0;" in my hello source and get the exact same Build ID!
On the other hand my FreeDink binaries do match except for the Build ID so there must be a catch.

Let's try our basic example with default ./configure CFLAGS:

$ (cd repro/ && gcc -g -O2 hello.c -o hello)
$ (cd repro/ && gcc -g -O2 hello.c -o hello-b)
$ md5sum repro/hello repro/hello-b
6b2cd79947d7c5ed2e505ddfce167116  repro/hello
6b2cd79947d7c5ed2e505ddfce167116  repro/hello-b
# => OK for now

$ (cd repro2/ && gcc -g -O2 hello.c -o hello)
$ md5sum repro2/hello
20b4d09d94de5840400be05bc76e4172  repro2/hello
$ strip repro/hello repro2/hello
$ diffoscope repro/hello repro2/hello
--- repro/hello
+++ repro2/hello
├── readelf --wide --notes {}
│ @@ -3,8 +3,8 @@
│    Owner                 Data size  Description
│    GNU                  0x00000010  NT_GNU_ABI_TAG (ABI version tag)
│      OS: Linux, ABI: 2.6.32
│  
│  Displaying notes found in: .note.gnu.build-id
│    Owner                 Data size  Description
│    GNU                  0x00000014  NT_GNU_BUILD_ID (unique build ID bitstring)-    Build ID: 462a3c613537bb57f20bd3ccbe6b7f6d2bdc72ba
│ +    Build ID: b4b448cf93e7b541ad995075d2b688ef296bd88b
# => issue reproduced with -g -O2 and different build directories

$ (cd repro/ && gcc -O2 hello.c -o hello)
$ (cd repro2/ && gcc -O2 hello.c -o hello)
$ md5sum repro/hello repro2/hello
1571d45eb5807f7a074210be17caa87b  repro/hello
1571d45eb5807f7a074210be17caa87b  repro2/hello
# => culprit is not -O2, so culprit is -g

Bummer. So the Build ID must also be computed from the debug symbols, even if I strip them afterwards :(
OK, so when https://reproducible-builds.org/docs/build-path/ says "Some tools will record the path of the source files in their output", that means the compiler, and more importantly the stripped executable.

Conclusion: apparently, to achieve reproducible builds I need identical full build paths and to keep track of them (or to neutralize the recorded paths, e.g. with GCC's -fdebug-prefix-map option).

What about Windows/MinGW btw?

$ /opt/mxe/usr/bin/i686-w64-mingw32.static-gcc hello.c -o hello.exe
$ md5sum hello.exe 
e0fa685f6866029b8e03f9f2837dc263  hello.exe
$ /opt/mxe/usr/bin/i686-w64-mingw32.static-gcc hello.c -o hello.exe
$ md5sum hello.exe 
df7566c0ac93ea4a0b53f4af83d7fbc9  hello.exe
$ /opt/mxe/usr/bin/i686-w64-mingw32.static-gcc hello.c -o hello.exe
$ md5sum hello.exe 
bbf4ab22cbe2df1ddc21d6203e506eb5  hello.exe

PE compiler output is not stable through time.
(any clue?)

OK, there's still a long road ahead of us...


There are lots of other questions.
Is autoconf output reproducible?
Does it actually matter if autoconf is reproducible if upstream is providing a pre-generated ./configure?
If not what about all the documentation on making tarballs reproducible, along with the strip-nondeterminism tool?
Where do we draw the line between build and build environment?
What are the legal issues of distributing a docker-based build environment without every single matching distro source package?

That was my modest contribution to practical reproducible-builds documentation for developers; I'd very much like to see more of it.
Who knows, maybe in the near future we'll get reproducible official builds for Eclipse, ZAP, JetBrains, Krita, Android SDK/NDK... :)

24 Mar 2017 8:40am GMT

Dirk Eddelbuettel: RApiDatetime 0.0.1

Very happy to announce a new package of mine is now up on the CRAN repository network: RApiDatetime.

It provides six entry points for C-level functions of the R API for Date and Datetime calculations: asPOSIXlt and asPOSIXct convert between long and compact datetime representations, formatPOSIXlt and Rstrptime convert to and from character strings, and POSIXlt2D and D2POSIXlt convert between Date and POSIXlt datetime. These six functions are all fairly essential and useful, but not one of them was previously exported by R. Hence the need to put them together in this package to complete the accessible API somewhat.

These should be helpful for fellow package authors, as many of us either keep our own partial copies of some of this code, or farm this work back out to R to get it done.

As a simple (yet real!) illustration, here is an actual Rcpp function which we could now cover at the C level rather than having to go back up to R (via Rcpp::Function()):

    inline Datetime::Datetime(const std::string &s, const std::string &fmt) {
        Rcpp::Function strptime("strptime");    // we cheat and call strptime() from R
        Rcpp::Function asPOSIXct("as.POSIXct"); // and we need to convert to POSIXct
        m_dt = Rcpp::as<double>(asPOSIXct(strptime(s, fmt)));
        update_tm();
    }

I had taken a first brief stab at this about two years ago, but never finished. With the recent emphasis on C-level function registration, coupled with a possible use case from anytime, I more or less put this together last weekend.

It currently builds and tests fine on POSIX-alike operating systems. If someone with some skill and patience in working on Windows would like to help complete the Windows side of things then I would certainly welcome help and pull requests.

For questions or comments please use the issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

24 Mar 2017 1:30am GMT

23 Mar 2017

Planet Debian

Simon McVittie: GTK hackfest 2017: D-Bus communication with containers

At the GTK hackfest in London (which accidentally became mostly a Flatpak hackfest) I've mainly been looking into how to make D-Bus work better for app container technologies like Flatpak and Snap.

The initial motivating use cases are:

At the moment, Flatpak runs a D-Bus proxy for each app instance that has access to D-Bus, connects to the appropriate bus on the app's behalf, and passes messages through. That proxy is in a container similar to the actual app instance, but not actually the same container; it is trusted to not pass messages through that it shouldn't pass through. The app-identification mechanism works in practice, but is Flatpak-specific, and has a known race condition due to process ID reuse and limitations in the metadata that the Linux kernel maintains for AF_UNIX sockets. In practice the use of X11 rather than Wayland in current systems is a much larger loophole in the container than this race condition, but we want to do better in future.

Meanwhile, Snap does its sandboxing with AppArmor, on kernels where it is enabled both at compile-time (Ubuntu, openSUSE, Debian, Debian derivatives like Tails) and at runtime (Ubuntu, openSUSE and Tails, but not Debian by default). Ubuntu's kernel has extra AppArmor features that haven't yet gone upstream, some of which provide reliable app identification via LSM labels, which dbus-daemon can learn by querying its AF_UNIX socket. However, other kernels like the ones in openSUSE and Debian don't have those. The access-control (AppArmor mediation) is implemented in upstream dbus-daemon, but again doesn't work portably, and is not sufficiently fine-grained or flexible to do some of the things we'll likely want to do, particularly in dconf.

After a lot of discussion with dconf maintainer Allison Lortie and Flatpak maintainer Alexander Larsson, I think I have a plan for fixing this.

This is all subject to change: see fd.o #100344 for the latest ideas.

Identity model

Each user (uid) has some uncontained processes, plus 0 or more containers.

The uncontained processes include dbus-daemon itself, desktop environment components such as gnome-session and gnome-shell, the container managers like Flatpak and Snap, and so on. They have the user's full privileges, and in particular they are allowed to do privileged things on the user's session bus (like running dbus-monitor), and act with the user's full privileges on the system bus. In generic information security jargon, they are the trusted computing base; in AppArmor jargon, they are unconfined.

The containers are Flatpak apps, or Snap apps, or other app-container technologies like Firejail and AppImage (if they adopt this mechanism, which I hope they will), or even a mixture (different app-container technologies can coexist on a single system). They are containers (or container instances) and not "apps", because in principle, you could install com.example.MyApp 1.0, run it, and while it's still running, upgrade to com.example.MyApp 2.0 and run that; you'd have two containers for the same app, perhaps with different permissions.

Each container has an container type, which is a reversed DNS name like org.flatpak or io.snapcraft representing the container technology, and an app identifier, an arbitrary non-empty string whose meaning is defined by the container technology. For Flatpak, that string would be another reversed DNS name like com.example.MyGreatApp; for Snap, as far as I can tell it would look like example-my-great-app.

The container technology can also put arbitrary metadata on the D-Bus representation of a container, again defined and namespaced by the container technology. For instance, Flatpak would use some serialization of the same fields that go in the Flatpak metadata file at the moment.

Finally, the container has an opaque container identifier identifying a particular container instance. For example, launching com.example.MyApp twice (maybe different versions or with different command-line options to flatpak run) might result in two containers with different privileges, so they need to have different container identifiers.

Contained server sockets

App-container managers like Flatpak and Snap would create an AF_UNIX socket inside the container, bind() it to an address that will be made available to the contained processes, and listen(), but not accept() any new connections. Instead, they would fd-pass the new socket to the dbus-daemon by calling a new method, and the dbus-daemon would proceed to accept() connections after the app-container manager has signalled that it has called both bind() and listen(). (See fd.o #100344 for full details.)
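
To make the mechanics concrete, here is a minimal sketch (in Python) of the container-manager side of that handshake, with a made-up socket path, and with the actual D-Bus method call reduced to a placeholder since its interface is still being designed in fd.o #100344:

import array
import socket

# Create the contained server socket: bind() and listen(), but never
# accept(); dbus-daemon will be the one accepting connections on it.
srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
srv.bind("/run/user/1000/app/com.example.MyApp/bus")  # made-up path
srv.listen(64)

# Connect to the uncontained session bus socket and fd-pass the new
# socket using SCM_RIGHTS, the AF_UNIX file-descriptor-passing
# mechanism.  In reality the payload would be the new D-Bus method
# call described above, carrying the container type, app identifier
# and metadata.
mgr = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
mgr.connect("/run/user/1000/bus")
fds = array.array("i", [srv.fileno()])
mgr.sendmsg([b"placeholder"], [(socket.SOL_SOCKET, socket.SCM_RIGHTS, fds)])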

Processes inside the container must not be allowed to contact the AF_UNIX socket used by the wider, uncontained system - if they could, the dbus-daemon wouldn't be able to distinguish between them and uncontained processes and we'd be back where we started. Instead, they should have the new socket bind-mounted into their container's XDG_RUNTIME_DIR and connect to that, or have the new socket set as their DBUS_SESSION_BUS_ADDRESS and be prevented from connecting to the uncontained socket in some other way. Those familiar with the kdbus proposals a while ago might recognise this as being quite similar to kdbus' concept of endpoints, and I'm considering reusing that name.

Along with the socket, the container manager would pass in the container's identity and metadata, and the method would return a unique, opaque identifier for this particular container instance. The basic fields (container technology, technology-specific app ID, container ID) should probably be added to the result of GetConnectionCredentials(), and there should be a new API call to get all of those plus the arbitrary technology-specific metadata.

When a process from a container connects to the contained server socket, every message that it sends should also have the container instance ID in a new header field. This is OK even though dbus-daemon does not (in general) forbid sender-specified future header fields, because any dbus-daemon that supported this new feature would guarantee to set that header field correctly, the existing Flatpak D-Bus proxy already filters out unknown header fields, and adding this header field is only ever a reduction in privilege.

The reasoning for using the sender's container instance ID (as opposed to the sender's unique name) is for services like dconf to be able to treat multiple unique bus names as belonging to the same equivalence class of contained processes: instead of having to look up the container metadata once per unique name, dconf can look it up once per container instance the first time it sees a new identifier in a header field. For the second and subsequent unique names in the container, dconf can know that the container metadata and permissions are identical to the one it already saw.

Access control

In principle, we could have the new identification feature without adding any new access control, by keeping Flatpak's proxies. However, in the short term that would mean we'd be adding new API to set up a socket for a container without any access control, and having to keep the proxies anyway, which doesn't seem great; in the longer term, I think we'd find ourselves adding a second new API to set up a socket for a container with new access control. So we might as well bite the bullet and go for the version with access control immediately.

In principle, we could also avoid the need for new access control by ensuring that each service that will serve contained clients does its own. However, that makes it really hard to send broadcasts and not have them unintentionally leak information to contained clients - we would need to do something more like kdbus' approach to multicast, where services know who has subscribed to their multicast signals, and that is just not how dbus-daemon works at the moment. If we're going to have access control for broadcasts, it might as well also cover unicast.

The plan is that messages from containers to the outside world will be mediated by a new access control mechanism, in parallel with dbus-daemon's current support for firewall-style rules in the XML bus configuration, AppArmor mediation, and SELinux mediation. A message would only be allowed through if the XML configuration, the new container access control mechanism, and the LSM (if any) all agree it should be allowed.

By default, processes in a container can send broadcast signals, and send method calls and unicast signals to other processes in the same container. They can also receive method calls from outside the container (so that interfaces like org.freedesktop.Application can work), and send exactly one reply to each of those method calls. They cannot own bus names, communicate with other containers, or send file descriptors (which reduces the scope for denial of service).

Obviously, that's not going to be enough for a lot of contained apps, so we need a way to add more access. I'm intending this to be purely additive (start by denying everything except what is always allowed, then add new rules), not a mixture of adding and removing access like the current XML policy language.

There are two ways we've identified for rules to be added:

Initially, many contained apps would work in the first way (and in particular sockets=session-bus would add a rule that allows almost everything), while over time we'll probably want to head towards recommending more use of the second.

Related topics

Access control on the system bus

We talked about the possibility of using a very similar ruleset to control access to the system bus, as an alternative to the XML rules found in /etc/dbus-1/system.d and /usr/share/dbus-1/system.d. We didn't really come to a conclusion here.

Allison had the useful insight that the XML rules are acting like a firewall: they're something that is placed in front of potentially-broken services, and not part of the services themselves (which, as with firewalls like ufw, makes it seem rather odd when the services themselves install rules). D-Bus system services already have total control over what requests they will accept from D-Bus peers, and if they rely on the XML rules to mediate that access, they're essentially rejecting that responsibility and hoping the dbus-daemon will protect them. The D-Bus maintainers would much prefer it if system services took responsibility for their own access control (with or without using polkit), because fundamentally the system service is always going to understand its domain and its intended security model better than the dbus-daemon can.

Analogously, when a network service listens on all addresses and accepts requests from elsewhere on the LAN, we sometimes work around that by protecting it with a firewall, but the optimal resolution is to get that network service fixed to do proper authentication and access control instead.

For system services, we continue to recommend essentially this "firewall" configuration, filling in the ${} variables as appropriate:

<busconfig>
    <policy user="${the daemon uid under which the service runs}">
        <allow own="${the service's bus name}"/>
    </policy>
    <policy context="default">
        <allow send_destination="${the service's bus name}"/>
    </policy>
</busconfig>

We discussed the possibility of moving towards a model where the daemon uid to be allowed is written in the .service file, together with an opt-in to "modern D-Bus access control" that makes the "firewall" unnecessary; after some flag day when all significant system services follow that pattern, dbus-daemon would even have the option of no longer applying the "firewall" (moving to an allow-by-default model) and just refusing to activate system services that have not opted in to being safe to use without it. However, the "firewall" also protects system bus clients, and services like Avahi that are not bus-activatable, against unintended access, which is harder to solve via that approach; so this is going to take more thought.

For system services' clients that follow the "agent" pattern (BlueZ, polkit, NetworkManager, Geoclue), the correct "firewall" configuration is more complicated. At some point I'll try to write up a best-practice for these.

New header fields for the system bus

At the moment, it's harder than it needs to be to provide non-trivial access control on the system bus, because on receiving a method call, a service has to remember what was in the method call, then call GetConnectionCredentials() to find out who sent it, then only process the actual request when it has the information necessary to do access control.

Allison and I had hoped to resolve this by adding new D-Bus message header fields with the user ID, the LSM label, and other interesting facts for access control. These could be "opt-in" to avoid increasing message sizes for no reason: in particular, it is not typically useful for session services to receive the user ID, because only one user ID is allowed to connect to the session bus anyway.

Unfortunately, the dbus-daemon currently lets unknown fields through without modification. With hindsight this seems an unwise design choice, because header fields are a finite resource (there are 255 possible header fields) and are defined by the D-Bus Specification. The only field that can currently be trusted is the sender's unique name, because the dbus-daemon sets that field, overwriting the value in the original message (if any).

To make it safe to rely on the new fields, we would have to make the dbus-daemon filter out all unknown header fields, and introduce a mechanism for the service to check (during connection to the bus) whether the dbus-daemon is sufficiently new that it does so. If connected to an older dbus-daemon, the service would not be able to rely on the new fields being true, so it would have to ignore the new fields and treat them as unset. The specification is sufficiently vague that making new dbus-daemons filter out unknown header fields is a valid change (it just says that "Header fields with an unknown or unexpected field code must be ignored", without specifying who must ignore them, so having the dbus-daemon delete those fields seems spec-compliant).

This all seemed fine when we discussed it in person; but GDBus already has accessors for arbitrary header fields by numeric ID, and I'm concerned that this might mean it's too easy for a system service to be accidentally insecure: It would be natural (but wrong!) for an implementor to assume that if g_dbus_message_get_header (message, G_DBUS_MESSAGE_HEADER_FIELD_SENDER_UID) returned non-NULL, then that was guaranteed to be the correct, valid sender uid. As a result, fd.o #100317 might have to be abandoned. I think more thought is needed on that one.

Unrelated topics

As happens at any good meeting, we took the opportunity of high-bandwidth discussion to cover many useful things and several useless ones. Other discussions that I got into during the hackfest included, in no particular order:

More notes are available from the GNOME wiki.

Acknowledgements

The GTK hackfest was organised by GNOME and hosted by Red Hat and Endless. My attendance was sponsored by Collabora. Thanks to all the sponsors and organisers, and the developers and organisations who attended.

23 Mar 2017 6:07pm GMT

Neil McGovern: GNOME ED Update – Week 12

New release!

In case you haven't seen it yet, there's a new GNOME release - 3.24! The release is the result of 6 months' work by the GNOME community.

The new release is a major step forward for us, with new features and improvements, and some exciting developments in how we build applications. You can read more about it in the announcement and release notes.

As always, this release was made possible partially thanks to the Friends of GNOME project. In particular, it helped us provide a Core apps hackfest in Berlin last November, which had a direct impact on this release.

Conferences

GTK+ hackfest

I've just come back from the GTK+ hackfest in London - thanks to Red Hat and Endless for sponsoring the venues! It was great to meet a load of people who are involved with GNOME and GTK, and some great discussions were had about Flatpak and the creation of a "FlatHub" - somewhere that people can get all their latest Flatpaks from.

LibrePlanet

As I'm writing this, I'm sitting on a train to Heathrow for my flight to LibrePlanet 2017! If you're going to be there, come and say hi. I've also got a load of newly produced stickers that can brighten up your laptop.

23 Mar 2017 11:43am GMT

Mike Hommey: Why is the git-cinnabar master branch slower to clone?

Apart from the memory considerations, one thing that the data presented in the "When the memory allocator works against you" post showed, and that I haven't touched on in the followup posts, is that there is a large difference in the time it takes to clone mozilla-central with git-cinnabar 0.4.0 vs. the master branch.

One thing that was mentioned in the first followup is that reducing the amount of realloc and substring copies made the cloning more than 15 minutes faster on master. But the same code exists in 0.4.0, so this isn't part of the difference.

So what's going on? Looking at the CPU usage during the clone is enlightening.

On 0.4.0:

On master:

(Note: the data gathering is flawed in some ways, which explains why the git-remote-hg process goes above 100%, which is not possible for this python process. The data is however good enough for the high-level analysis that follows, so I didn't bother to get something more accurate.)

On 0.4.0, the git-cinnabar-helper process was saturating one CPU core during the File import phase, and the git-remote-hg process was saturating one CPU core during the Manifest import phase. Overall, the sum of both processes usually used more than one and a half core.

On master, however, the total of both processes barely uses more than one CPU core.

What happened?

This and that happened.

Essentially, before those changes, git-remote-hg would send instructions to git-fast-import (technically, git-cinnabar-helper, but in this case it's only used as a wrapper for git-fast-import), and use marks to track the git objects that git-fast-import created.

After those changes, git-remote-hg asks git-fast-import the git object SHA1 of objects it just asked to be created. In other words, those changes replaced something asynchronous with something synchronous: while it used to be possible for git-remote-hg to work on the next file/manifest/changeset while git-fast-import was working on the previous one, it now waits.

The changes helped simplify the python code, but made the overall clone process much slower.
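
In rough pseudo-Python (the helper names are hypothetical, not the actual git-cinnabar code), the difference boils down to this:

# Before: asynchronous.  Each object gets a mark in the fast-import
# stream, and git-remote-hg never waits; both processes stay busy.
def import_objects_with_marks(fast_import, objects):
    for mark, data in enumerate(objects, start=1):
        fast_import.send("blob\nmark :%d\ndata %d\n%s" % (mark, len(data), data))
    # keep going; git-fast-import catches up in parallel

# After: synchronous.  The SHA1 of each object is requested right
# away, so git-remote-hg blocks until git-fast-import has created it.
def import_objects_query_sha1(fast_import, objects):
    sha1s = []
    for data in objects:
        fast_import.send("blob\ndata %d\n%s" % (len(data), data))
        sha1s.append(fast_import.query_last_sha1())  # round-trip stalls the pipeline
    return sha1s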

If I'm not mistaken, the only real use for that information is for the mapping of mercurial to git SHA1s, which is actually rarely used during the clone, except at the end, when storing it. So what I'm planning to do is to move that mapping to the git-cinnabar-helper process, which, incidentally, will kill not 2, but 3 birds with 1 stone:

So expect git-cinnabar 0.5 to get moar faster, and to use moar less memory.

23 Mar 2017 7:38am GMT

Mike Hommey: Analyzing git-cinnabar memory use

In the previous post, I was looking at the allocations git-cinnabar makes. While I had the data, I figured I'd also look at how the memory use correlates with expectations based on repository data, to put things in perspective.

As a reminder, this is what the allocations look like (horizontal axis being the number of allocator function calls):

There are 7 different phases happening during a git clone using git-cinnabar, most of which can easily be identified on the graph above:

Overall, except for the stupid spike during the final phase, the manifest DAG and the glibc allocator runaway memory use described in previous posts, there is nothing terribly bad with the git-cinnabar memory usage, all things considered. Mozilla-central is just big.

The spike is already half addressed, and work is under way for the glibc allocator runaway memory use. The manifest DAG, interestingly, is actually mostly useless. It's only used to track the heads of the DAG, and it's very much possible to track heads of a DAG without actually storing the entire DAG. In fact, that's what git-cinnabar already does for changeset heads… so we would only need to do the same for manifest heads.

One could argue that the 1.4GB of raw RevChunk data we're keeping in memory for later use could be kept on disk instead. I haven't done this so far because I didn't want to have to handle temporary files (and answer questions like "where to put them?", "what if there isn't enough disk space there?", "what if disk access is slow?", etc.). But the majority of this data is from manifests. I'm already planning changes in how git-cinnabar stores manifests data that will actually allow importing them directly, instead of keeping them in memory until files are imported. This would instantly remove 1.18GB of memory usage. The downside, however, is that this would be more CPU intensive: importing changesets will require creating the corresponding git trees, and getting the stored manifest data. I think it's worth it, though.

Finally, one thing that isn't obvious here, but that was found while analyzing why RSS would be going up despite memory usage going down, is that git-cinnabar is doing way too many reallocations and substring allocations.

So let's look at two metrics that hopefully will highlight the problem:

Assuming all the requested memory is filled at some point, the former gives us an upper bound to the amount of memory that is ever filled or copied (the amount that would be filled if no realloc was ever in-place), while the latter gives us a lower bound (the amount that would be filled or copied if all reallocs were in-place).
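
Concretely, from the allocation log those two numbers can be computed along these lines (a sketch; the record format is made up):

def bounds(events):
    # events: (func, old_size, new_size) records from the allocation log
    upper = lower = 0
    for func, old_size, new_size in events:
        if func in ("malloc", "calloc"):
            upper += new_size
            lower += new_size
        elif func == "realloc":
            upper += new_size                     # never in-place: full copy
            lower += max(0, new_size - old_size)  # always in-place: growth only
    return upper, lower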

Ideally, we'd want the upper and lower bounds to be close to each other (indicating few realloc calls), and the total amount at the end of the process to be as close as possible to the amount of data we're handling (which we've seen is around 3TB).

… and this is clearly bad. Like, really bad. But we already knew that from the previous post, although it's nice to put numbers on it. The lower bound is about twice the amount of data we're handling, and the upper bound is more than 10 times that amount. Clearly, we can do better.

We'll see how things evolve after the necessary code changes happen. Stay tuned.

23 Mar 2017 4:30am GMT

22 Mar 2017

Planet Debian

Arturo Borrero González: IPv6 and CGNAT

IPv6

Today I finished reading an interesting article by the 4th Spanish ISP regarding IPv6 and CGNAT. The article is in Spanish, but I will translate the most important statements here.

Having a Spanish Internet operator talk about this subject is itself good news. We have been lacking any news regarding IPv6 in our country for years. I mean, no news from private operators. Public networks, like the one where I do my daily job, have been offering native IPv6 for almost a decade…

The title of the article is "What is CGNAT and why is it used".

They start by admitting that this technique is used to address the issue of IPv4 exhaustion. Good. They move on to say that IPv6 was designed to address IPv4 exhaustion. Great. Then, they state that ''the internet network is not ready for IPv6 support''. Also that ''IPv6 has the handicap of many websites not supporting it''. Sorry?

That is not true. If they refer to the core of the internet (i.e., RIRs, internet exchange points, root DNS servers, core BGP routers, etc.), it has been working with IPv6 for ages now. If they refer to something else, for example Google, Wikipedia, Facebook, Twitter, Youtube, Netflix or any random hosting company, they all support IPv6 as well. Hosting companies which don't support IPv6 are only a few, at least here in Europe.

The traffic to/from these services is clearly the vast majority of the traffic traveling over the wires nowadays. And they support IPv6.

The article continues defending CGNAT. They refer to IPv6 as an alternative to CGNAT. No, sorry, CGNAT is an alternative to you not doing your IPv6 homework.

The article ends by insinuating that CGNAT is more secure and useful than IPv6. That's the final joke. They mention some absurd example of IP cams being accessed from the internet by anyone.

Sure, by using CGNAT you are indeed making the network practically one-way only. RFC 7021 describes the big issues of a CGNAT network. So, by using CGNAT you sacrifice a lot of usability in the name of security. This supposed security can be replicated by the simplest possible firewall, which could be deployed in dual-stack IPv4/IPv6 using any modern firewalling system, like nftables.

(Here is a good blogpost about RFC 7021 for Spanish readers: Midiendo el impacto del Carrier-Grade NAT sobre las aplicaciones en red)
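
As an illustration of that last point, here is a sketch (not a drop-in ruleset) of a stateful, drop-by-default input policy with nftables: a single table of the inet family covers IPv4 and IPv6 at once, and gives the same "no unsolicited inbound connections" property that CGNAT is being praised for, without making the network one-way.

table inet filter {
    chain input {
        type filter hook input priority 0; policy drop;
        ct state established,related accept
        iifname "lo" accept
        # ICMPv6 neighbor discovery must be allowed for IPv6 to work at all
        icmpv6 type { nd-neighbor-solicit, nd-neighbor-advert } accept
    }
}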

By the way, Google kindly provides some statistics regarding their IPv6 traffic. These stats clearly show an exponential growth:

Google IPv6 traffic

Other ISP operators are giving IPv6 strong precedence over IPv4; that's the case for Verizon in the USA: Verizon Static IP Changes IPv4 to Persistent Prefix IPv6.

My article seems a bit like a rant, but I couldn't miss the opportunity to call for native IPv6. None of the major Spanish ISPs have IPv6.

22 Mar 2017 5:47pm GMT

Michael Stapelberg: Debian stretch on the Raspberry Pi 3 (update)

I previously wrote about my Debian stretch preview image for the Raspberry Pi 3.

Now, I'm publishing an updated version, containing the following changes:

A couple of issues remain, notably the lack of HDMI, WiFi and Bluetooth support (see wiki:RaspberryPi3 for details). Any help with fixing these issues is very welcome!

As a preview version (i.e. unofficial, unsupported, etc.) until all the necessary bits and pieces are in place to build images in a proper place in Debian, I built and uploaded the resulting image. Find it at https://people.debian.org/~stapelberg/raspberrypi3/2017-03-22/. To install the image, insert the SD card into your computer (I'm assuming it's available as /dev/sdb) and copy the image onto it:

$ wget https://people.debian.org/~stapelberg/raspberrypi3/2017-03-22/2017-03-22-raspberry-pi-3-stretch-PREVIEW.img
$ sudo dd if=2017-03-22-raspberry-pi-3-stretch-PREVIEW.img of=/dev/sdb bs=5M

If resolving client-supplied DHCP hostnames works in your network, you should be able to log into the Raspberry Pi 3 using SSH after booting it:

$ ssh root@rpi3
# Password is "raspberry"

22 Mar 2017 4:36pm GMT

Dirk Eddelbuettel: Suggests != Depends

A number of packages on CRAN use Suggests: casually.

They list other packages as "not required" in Suggests: -- as opposed to absolutely required via Imports: or the older Depends: -- yet do not test for their use in either examples or, more commonly, unit tests.

So e.g. the unit tests are bound to fail because, well, Suggests != Depends.

This has been accommodated for many years by all parties involved by treating Suggests as a Depends and installing unconditionally. As I understand it, CRAN appears to flip a switch to automatically install all Suggests from major repositories, glossing over what I consider to be a packaging shortcoming. (As an aside, treatment of Additional_repositories: is indeed optional; Brooke Anderson and I have a fine paper under review on this.)

I spend a fair amount of time with reverse dependency ("revdep") checks of packages I maintain, and I will no longer accommodate these packages.

These revdep checks take long enough as it is, so I will now blacklist these packages that are guaranteed to fail when their "optional" dependencies are not present.

Writing R Extensions says in Section 1.1.3

All packages that are needed to successfully run R CMD check on the package must be listed in one of 'Depends' or 'Suggests' or 'Imports'. Packages used to run examples or tests conditionally (e.g. via if(require(pkgname))) should be listed in 'Suggests' or 'Enhances'. (This allows checkers to ensure that all the packages needed for a complete check are installed.)

In particular, packages providing "only" data for examples or vignettes should be listed in 'Suggests' rather than 'Depends' in order to make lean installations possible.

[...]

It used to be common practice to use require calls for packages listed in 'Suggests' in functions which used their functionality, but nowadays it is better to access such functionality via :: calls.

and continues in Section 1.1.3.1

Note that someone wanting to run the examples/tests/vignettes may not have a suggested package available (and it may not even be possible to install it for that platform). The recommendation used to be to make their use conditional via if(require("pkgname")): this is fine if that conditioning is done in examples/tests/vignettes.

I will now exercise my option to use 'lean installations' as discussed here. If you want your package included in tests I run, please make sure it tests successfully when only its required packages are present.

22 Mar 2017 3:16pm GMT

Mike Hommey: When the memory allocator works against you, part 2

This is a followup to the "When the memory allocator works against you" post from a few days ago. You may want to read that one first if you haven't, and come back. In case you don't or didn't read it, it was all about memory consumption during a git clone of the mozilla-central mercurial repository using git-cinnabar, and how the glibc memory allocator is using more than one would expect. This post is going to explore how/why it's happening.

I happen to have written a basic memory allocation logger for Firefox, so I used it to log all the allocations happening during a git clone exhibiting the runaway memory increase behavior (using a python that doesn't use its own allocator for small allocations).

The result was a 6.5 GB log file (compressed with zstd; 125 GB uncompressed!) with 2.7 billion calls to malloc, calloc, free, and realloc, recorded across (mostly) 2 processes (the python git-remote-hg process and the native git-cinnabar-helper process; there are other short-lived processes involved, but they do less than 5000 calls in total).

The vast majority of those 2.7 billion calls is done by the python git-remote-hg process: 2.34 billion calls. We'll only focus on this process.

Replaying those 2.34 billion calls with a program that reads the log allowed me to reproduce the runaway memory increase behavior to some extent. I went the extra mile and modified glibc's realloc code in memory so it doesn't call memcpy, to make things faster. I also ran under setarch x86_64 -R to disable ASLR for reproducible results (two consecutive runs return the exact same numbers, which doesn't happen with ASLR enabled).

I also modified the program to report the number of live allocations (allocations that haven't been freed yet), and the cumulated size of the actually requested allocations (that is, the sum of all the sizes given to malloc, calloc, and realloc calls for live allocations, as opposed to what the memory allocator really allocated, which can be more, per malloc_usable_size).

RSS was not tracked because the allocations are never filled to make things faster, such that pages for large allocations are never dirty, and RSS doesn't grow as much because of that.

Full disclosure: it turns out the "system bytes" and "in-use bytes" numbers I had been collecting in the previous post were smaller than what they should have been, and were excluding memory that the glibc memory allocator would have mmap()ed. That however doesn't affect the trends that had been witnessed. The data below is corrected.

(Note that in the graph above and the graphs that follow, the horizontal axis represents the number of allocator function calls performed)

While I was here, I figured I'd check how mozjemalloc performs, and it has a better behavior (although it has more overhead).

What doesn't appear on this graph, though, is that mozjemalloc also tells the OS to drop some pages even if it keeps them mapped (madvise(MADV_DONTNEED)), so in practice, it is possible the actual RSS decreases too.

And jemalloc 4.5:

(It looks like it has better memory usage than mozjemalloc for this use case, but its stats are being thrown off at some point, I'll have to investigate)

Going back to the first graph, let's get a closer look at what the allocations look like when the "system bytes" number is increasing a lot. The highlights in the following graphs indicate the range the next graph will be showing.

So what we have here is a bunch of small allocations (small enough that they don't seem to move the "requested" line ; most under 512 bytes, so under normal circumstances, they would be allocated by python, a few between 512 and 2048 bytes), and a few large allocations, one of which triggers a bump in memory use.

What can appear weird at first glance is that we have a large allocation not requiring more system memory, later followed by a smaller one that does. What the allocations actually look like is the following:


void *ptr0 = malloc(4850928); // #1391340418
void *ptr1 = realloc(some_old_ptr, 8000835); // #1391340419
free(ptr0); // #1391340420
ptr1 = realloc(ptr1, 8000925); // #1391340421
/* ... */
void *ptrn = malloc(879931); // #1391340465
ptr1 = realloc(ptr1, 8880819); // #1391340466
free(ptrn); // #1391340467

As it turns out, inspecting all the live allocations at that point, while there was a hole large enough to do the first two reallocs (the second actually happens in place), at the point of the third one, there wasn't a large enough hole to fit 8.8MB.

What inspecting the live allocations also reveals, is that there is a large number of large holes between all the allocated memory ranges, presumably coming from previous similar patterns. There are, in fact, 91 holes larger than 1MB, 24 of which are larger than 8MB. It's the accumulation of those holes that can't be used to fulfil larger allocations that makes the memory use explode. And there aren't enough small allocations happening to fill those holes. In fact, the global trend is for less and less memory to be allocated, so, smaller holes are also being poked all the time.

Really, it's all a straightforward case of memory fragmentation. The reason it tends not to happen with jemalloc is that jemalloc groups allocations by sizes, which the glibc allocator doesn't seem to be doing. The following is how we got a hole that couldn't fit the 8.8MB allocation in the first place:


ptr1 = realloc(ptr1, 8880467); // #1391324989; ptr1 is 0x5555de145600
/* ... */
void *ptrx = malloc(232); // #1391325001; ptrx is 0x5555de9bd760 ; that is 13 bytes after the end of ptr1.
/* ... */
free(ptr1); // #1391325728; this leaves a hole of 8880480 bytes at 0x5555de145600.

All would go well if ptrx were free()d, but it looks like it's long-lived. At least, it's still allocated by the time we reach allocator call #1391340466. And since the hole is 339 bytes too short for the requested allocation, the allocator has no choice but to request more memory from the system.

What's bothersome, though, is that the allocator chose to allocate ptrx in the space following ptr1, when it allocated similarly sized buffers in completely different places after allocating ptr1 and before allocating ptrx, and while there are plenty of holes in the allocated memory where it could fit.

Interestingly enough, ptrx is a 232-byte allocation, which means that under normal circumstances, python itself would be allocating it. In all likelihood, when the python allocator is enabled, it's allocations larger than 512 bytes that become obstacles to the larger allocations. Another possibility is that the 256KB fragments that the python allocator itself allocates to hold its own allocations become the obstacles (my original hypothesis). I think the former is more likely, though, putting the blame back entirely on glibc's shoulders.

Now, it looks like the allocation pattern we have here is suboptimal, so I re-ran a git clone under a debugger to catch when a realloc() for 8880819 bytes happens (the size is peculiar enough that it only happened once in the allocation log). But doing that with a conditional breakpoint is just too slow, so I injected a realloc wrapper with LD_PRELOAD that sends a SIGTRAP signal to the process, so that an attached debugger can catch it.

Thanks to the support for python in gdb, it was then possible to pinpoint the exact python instructions that made the realloc() call (it didn't come as a surprise; in fact, that was one of the places I had in mind, but I wanted definite proof):


new = ''
end = 0
# ...
for diff in RevDiff(rev_patch):
    new += data[end:diff.start]
    new += diff.text_data
    end = diff.end
    # ...
new += data[end:]

What happens here is that we're creating a mercurial manifest we got from the server in patch form against a previous manifest. So data contains the original manifest, and rev_patch the patch. The patch essentially contains instructions of the form "replace n bytes at offset o with the content c".

The code here just does that in the most straightforward way, implementation-wise, but also, it turns out, possibly the worst way.

So let's unroll this loop over a couple iterations:

new = ''

This allocates an empty str object. In fact, this doesn't actually allocate anything, and only creates a pointer to an interned representation of an empty string.

new += data[0:diff.start]

This is where things start to get awful. data is a str, so data[0:diff.start] creates a new, separate, str for the substring. One allocation, one copy.

It then appends it to new. Fortunately, CPython is smart enough to just assign data[0:diff.start] to new. This can easily be verified with the CPython REPL:

>>> foo = ''
>>> bar = 'bar'
>>> foo += bar
>>> foo is bar
True

(and this is not happening because the example string is small here; it also happens with larger strings, like 'bar' * 42000)

Back to our loop:

new += diff.text_data

Now, new is realloc()ated to have the right size to fit the appended text in it, and the contents of diff.text_data is copied. One realloc, one copy.

Let's go for a second iteration:

new += data[diff.end:new_diff.start]

Here again, we're doing an allocation for the substring, and one copy. Then new is realloc()ated again to append the substring, which is an additional copy.

new += new_diff.text_data

new is realloc()ated yet again to append the contents of new_diff.text_data.

We now finish with:

new += data[new_diff.end:]

which, again, creates a substring from the data, and then proceeds to realloc()ate new one more freaking time.

That's a lot of malloc()s and realloc()s to be doing…
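
For reference, the textbook fix for this pattern is to accumulate the pieces in a list and concatenate once at the end; here is a sketch of what that would look like for the loop above (not necessarily the exact change that was made):

# The substring slices are still allocated, but new is now created
# exactly once, with no realloc churn in between.
chunks = []
end = 0
for diff in RevDiff(rev_patch):
    chunks.append(data[end:diff.start])
    chunks.append(diff.text_data)
    end = diff.end
chunks.append(data[end:])
new = ''.join(chunks)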

After modifying the code to implement the first and fifth items, memory usage during a git clone of mozilla-central looks like the following (with the python allocator enabled):

(Note this hasn't actually landed on the master branch yet)

Compared to what it looked like before, this is largely better. But that's not the only difference: the clone was also about 1000 seconds faster. That's more than 15 minutes! But that's not all that surprising when you know the volumes of data handled here. More insight about this is coming in an upcoming post.

But while the changes outlined above make the glibc allocator behavior less likely to happen, they don't eliminate it entirely. In fact, it seems it is still happening by the end of the manifest import phase. We're still allocating increasingly large temporary buffers because the size of the imported manifests grows larger and larger, and every one of them is the result of patching a previous one.

The only way to avoid those large allocations creating holes would be to avoid doing them in the first place. My first attempt at doing that, keeping manifests as lists of lines instead of raw buffers, worked, but was terribly slow. So slow, in fact, that I had to stop a clone early and estimated the process would likely have taken a couple days. Iterating over multiple generators at the same time, a lot, kills performance, apparently. I'll have to try with significantly less of that.

22 Mar 2017 6:57am GMT