16 Jul 2018

Planet GNOME

Jussi Pakkanen: How expensive is globbing for sources in large projects

A common holy war in build systems is whether you should explicitly list all the sources that make up a target or use a globbing pattern. There are both technical and non-technical arguments on each side; the non-technical ones mostly weigh reliability and flexibility against convenience. In this post we are going to ignore those completely and instead focus on the technical part, specifically the overhead of globbing. The measurement script used can be downloaded from this repo.

In this test we used a full checkout of the Chromium source code. The tests were run under Windows, since it is noticeably slower than Linux at both file operations and process invocations. The simulated task consists of roughly three parts:

  1. Scan the source tree for all directories that contain sources
  2. Generate glob patterns for detected directories (corresponding roughly to "one target for all sources in one directory")
  3. Run the globs

This ignores a bunch of steps, such as serialising the glob results to files and calculating the delta between two glob sets. These are probably fairly fast compared to file access operations, though.
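
To make the three steps concrete, here is a minimal Python sketch of what such a measurement could look like. This is a simplified illustration, not the actual measurement script; the suffix list, function names and checkout path are illustrative assumptions.


import glob
import os
import time

SUFFIXES = ('.c', '.cc', '.cpp', '.h')  # assumed source suffixes

def find_source_dirs(root):
    # Step 1: scan the tree for directories that contain sources.
    for dirpath, _dirnames, filenames in os.walk(root):
        if any(f.endswith(SUFFIXES) for f in filenames):
            yield dirpath

def generate_globs(dirs):
    # Step 2: one glob pattern per suffix per directory.
    return [os.path.join(d, '*' + s) for d in dirs for s in SUFFIXES]

def run_globs(patterns):
    # Step 3: evaluate every glob, as a no-op build would have to.
    return [glob.glob(p) for p in patterns]

start = time.time()
dirs = list(find_source_dirs('chromium/src'))  # assumed checkout location
patterns = generate_globs(dirs)
print('scan + glob generation: %.1f s' % (time.time() - start))

start = time.time()
run_globs(patterns)
print('globbing: %.1f s' % (time.time() - start))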

Scanning the source tree and generating the globs

There is no direct equivalent of this step in a regular build system; it is mostly interesting as a comparison of file operation performance between a cold and a hot cache. Running the scan with a cold cache takes about 2 minutes; with a hot cache, about 6 seconds.

Since this step is always run first, the following tests are all operating with a hot cache.

The actual globbing

Running all globs on the Chromium source tree takes between 2 and 6 seconds. This is the absolute lowest time that can be obtained for a no-op build without daemons because all globs must be re-evaluated every time.

A common rule of thumb in UI design is that delays under about a tenth of a second feel instantaneous, while anything approaching a second is clearly noticeable. At these sizes, then, globbing adds a perceptible delay to every build. Whether that is insignificant or aggravating depends on the user.

Extra bonus: C++ modules

Since we have the measurement script, let's use it for something more interesting. Modules are an upcoming C++ feature meant to improve build times and provide a ton of other coolness, depending on whom you ask. The current specification works by having a kind of "module export declaration" at the beginning of source files. The idea is that you first compile those to generate a sort of module declaration file, and only then can you start the actual compilation that uses said files.

If you thought "wait a minute, that sounds exactly like how FORTRAN is compiled", you are correct. Because of this it has the same problem: you can't compile source files in an arbitrary order; you must first somehow scan them to find out the interdependencies between source (not header) files. In practice this means that instead of single-phase compilation, all files must be processed twice. All scan operations must finish before any compilation job can start, because otherwise you might begin compiling a file before its dependencies have been fully processed.
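
As a toy illustration of why the scans gate everything (this is not real tooling, and the export module/import line matching below is a simplification of the actual module syntax), you could scan every file first and only then derive a valid compilation order:


import re
from graphlib import TopologicalSorter  # Python 3.9+

EXPORT_RE = re.compile(r'^\s*export\s+module\s+(\w+)')
IMPORT_RE = re.compile(r'^\s*import\s+(\w+)')

def scan(path):
    # Return (module this file exports, modules it imports).
    exported, imports = None, []
    with open(path, errors='replace') as f:
        for line in f:
            m = EXPORT_RE.match(line)
            if m:
                exported = m.group(1)
            m = IMPORT_RE.match(line)
            if m:
                imports.append(m.group(1))
    return exported, imports

def compile_order(paths):
    # Every file must be scanned before the position of *any* file
    # in the compilation order is known.
    provider, deps = {}, {}
    for p in paths:
        exported, imports = scan(p)
        if exported:
            provider[exported] = p
        deps[p] = imports
    graph = {p: {provider[m] for m in imports if m in provider}
             for p, imports in deps.items()}
    return list(TopologicalSorter(graph).static_order())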

The scanning can be done in one of two ways: either the build system scans the sources itself, meaning it needs to understand the syntax of source files, or the compiler is invoked in a special preprocessing mode. Note that build systems such as Ninja do not do any such operations themselves but always invoke external processes to do the work.

Testing the performance impact of these two approaches is straightforward. The first can be simulated by reading the first ten lines of each source file and then throwing them away; measuring this gives a fairly good estimate of the file processing overhead. The second can be measured by doing the exact same thing while also invoking the compiler with no-op command line arguments, which adds the process invocation overhead.
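
A simplified Python sketch of both measurements could look like this (the compiler command is a placeholder; the actual no-op arguments depend on the compiler used):


import subprocess
import time

COMPILER_CMD = ['cl', '/nologo', '/?']  # placeholder cheap invocation

def time_direct_scan(paths):
    # Approximates build-system-side scanning: read and discard
    # the first ten lines of every source file.
    start = time.time()
    for p in paths:
        with open(p, errors='replace') as f:
            for _ in range(10):
                f.readline()
    return time.time() - start

def time_process_scan(paths):
    # Approximates compiler-side scanning: the same file reads, plus
    # one process spawn per file to capture the invocation overhead.
    start = time.time()
    for p in paths:
        with open(p, errors='replace') as f:
            for _ in range(10):
                f.readline()
        subprocess.run(COMPILER_CMD, stdout=subprocess.DEVNULL,
                       stderr=subprocess.DEVNULL)
    return time.time() - start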

Scanning the files directly takes roughly 120 seconds. On an 8-core machine this means a delay of at least 15 seconds before any compilation task can begin. This is not great, but for a full build it should be tolerable.

When spawning a compiler process per file, the same operation takes 69 minutes. This is intolerably slow and would require an order-of-magnitude speedup in compilation times to be worthwhile. Unlike regular compilation, dependency scanning cannot be sped up with unity builds, because the specification requires the module declaration to be at the very beginning of each source file (and presumably there can not be more than one in a single TU).

16 Jul 2018 11:54pm GMT

Petr Kovar: GUADEC 2018

Back from GUADEC, held in the beautiful Andalusian city of Almería, Spain, from 6th July through 11th July 2018, I wanted to share a few notes on documentation and localization activities at the conference and during the traditional post-conference hacking days.

Mike Hill and Rudolfs Mazurs already blogged about their reflections on GUADEC. I'd add that we had some i18n- and l10n-related talks in the schedule (machine translation, input methods), which was a nice improvement in representation over previous years.

The space available for the Birds of a Feather sessions this year was rather limited, so we could only secure an afternoon slot (thanks Kat!) for our docs+translators meetup. We were joined for a while by a group of local documentation writers creating Spanish manuals for the local market. After that, we focused on two main areas.

One was related to the recent migration of GNOME projects to GitLab and involved looking into usability of our wiki docs for contributors and, specifically, newcomers. We found quite a few outdated references to git.gnome.org, Bugzilla and the like, with the biggest issue, however, being the suboptimal overall structure of the contributor guides on the wiki. We also looked into how to improve submitting user feedback and switching languages for the users of help.gnome.org (and yelp).

The other area discussed was making our CI checks for GNOME documentation modules much more robust, with the idea of using the GitLab CI integration to its full potential with tests verifying translations and more.

You can find all notes from the meetup in our Etherpad.

There was also some continued discussion on reworking the GNOME developer center, but I couldn't take part in its final installment on Wednesday, as I was already flying out.

I'd like to thank the GNOME Foundation for their continued support in funding my travel.

16 Jul 2018 3:06pm GMT

Bastian Ilsø Hougaard: GUADEC18 Developer Center BoF Part 1: The Developer Experience

At this year's GUADEC lightning talks I spontaneously announced and arranged a Developer Center BoF (Birds of a Feather) session. Six attendees met on Wednesday the 11th of July. I think it is important that we communicate our work to the rest of the community, so I will write a few short blog posts based on our meeting notes and my own thoughts on the subject.

What is this all about?

Several people in our community are dissatisfied with the developer documentation experience, in particular those who use bindings and are unfamiliar with GNOME terminology and APIs. The GNOME Developer Center supposedly provides "all the information that you need to create fantastic software using GNOME technologies", but it is unmaintained and looks dated. There have been discussions on improving the experience before, but I feel it's time to take inspiration from our GitLab migration and organize a proper initiative now (wiki page coming soon).

At the GUADEC 2018 BoF we conducted a bottom-up analysis of our current developer experience, which can help lay a foundation for informed decision-making. It consisted of the following:

In this blog post, Part 1: The Developer Experience, we will define the current developer experience.

What constitutes our developer documentation experience?

What follows is an overview of the important sources of GNOME documentation we are aware of. Searching and finding information in these sources constitutes the current developer documentation experience. It would be easy to think that we should only concern ourselves with the developer center itself, but in practice developers search much more widely for help than that. This is the list of things we identified at the BoF, plus additional information that later came to my mind:

This paints a picture of a very scattered experience, but obviously, the intention here is not that the developer center should unify absolutely everything. However, by knowing what is out there we can in the future make more informed decisions on what the GNOME Developer Center should host itself and what external sources could be useful to link to (and which we can consciously leave out).

The next step in our analysis is then to understand the possible audiences which I will cover soon in Part 2: Audiences.

Your input?

Is there other important information that has been vital to your GNOME developer experience? Understanding what our current experience consists of helps us be more deliberate when designing the next-generation developer center. Leave a comment here or in this GitLab issue.

If you are interested in contributing to this initiative, join the call next week or join the hackfest in February. More information on both of these is coming soon.

16 Jul 2018 3:06pm GMT

09 Jul 2018

planet.freedesktop.org

Eric Anholt: 2018-07-09

Now that my V3D GPU hangs are cleared up, I made a lot of progress on conformance:

Fixes from last week:

Fail/(Pass+Fail) ratios after my weekend run:

I've debugged the first MSAA crash, but there are some MSAA resolve bugs after that which I need to sort out (some of which are probably the cause of those EGL failures).

On the VC4 front, Boris landed the writeback changes, and I submitted the DT for it in my ARM pull request.

09 Jul 2018 12:30am GMT

08 Jul 2018

planet.freedesktop.org

Peter Hutterer: meson fails with "ERROR: Native dependency 'foo' not found" - and how to fix it

A common error when building from source is something like the error below:


meson.build:50:0: ERROR: Native dependency 'foo' not found

or a similar error


meson.build:63:0: ERROR: Invalid version of dependency, need 'foo' ['>= 1.1.0'] found '1.0.0'.

Seeing that can be quite discouraging, but luckily, in many cases it's not too difficult to fix. As usual, there are many ways to get to a successful result; I'll describe what I consider the simplest.

What does it mean? Dependencies are simply libraries or tools that meson needs to build the project. Usually these are declared like this in meson.build:


dep_foo = dependency('foo', version: '>= 1.1.0')

In human words: "we need the development headers for library foo (or 'libfoo') of version 1.1.0 or later". meson uses the pkg-config tool in the background to resolve that request. If we require package foo, pkg-config searches for a file foo.pc in a set of standard directories - typically /usr/lib/pkgconfig (on some distributions /usr/lib64/pkgconfig or /usr/lib/<arch-triplet>/pkgconfig), /usr/local/lib/pkgconfig and /usr/share/pkgconfig, plus anything listed in the PKG_CONFIG_PATH environment variable.

The error message simply means pkg-config couldn't find the file and you need to install the matching package from your distribution or from source.

An important note here: in most cases we need the development headers of said library; installing just the library itself is not sufficient. After all, we're trying to build against it, not merely run against it.

What package provides the foo.pc file?

In many cases the package you need is the development variant of the library's package name. Try foo-devel (Fedora, RHEL, SuSE, ...) or foo-dev (Debian, Ubuntu, ...). yum and dnf provide a great shortcut to install any pkg-config dependency:


$> dnf install "pkgconfig(foo)"

$> yum install "pkgconfig(foo)"

will automatically search and install the right package, including its dependencies.
apt-get requires a bit more effort:


$> apt-get install apt-file
$> apt-file update
$> apt-file search --package-only foo.pc
foo-dev
$> apt-get install foo-dev

For those running Arch and pacman, the sequence is:


$> pacman -S pkgfile
$> pkgfile -u
$> pkgfile foo.pc
extra/foo
$> pacman -S extra/foo

Once that's done you can re-run meson and see if all dependencies have been met. If more packages are missing, follow the same process for the next file.

Any users of other distributions - let me know how to do this on yours and I'll update the post.

My version is wrong!

It's not uncommon to see the following error after installing the right package:


meson.build:63:0: ERROR: Invalid version of dependency, need 'foo' ['>= 1.1.0'] found '1.0.0'.

Now you're stuck and you have a problem. What this means is that the package version your distribution provides is not new enough to build your software. This is where the simple solutions end and it all gets a bit more complicated - with more potential for errors. Unless you are willing to go into the deep end, I recommend stopping here and accepting that you can't have the newest bits on an older distribution. Because now you have to build the dependency from source, which may in turn require building its dependencies from source, and before you know it you've built 30 packages. If you're willing, read on; otherwise - sorry, you won't be able to run your software today.

Manually installing dependencies

Now you're in the deep end, so be aware that you may see more complicated errors in the process. First of all you need to figure out where to get the source from. I'll use cairo as the example instead of foo so you see actual data. On rpm-based distributions like Fedora, run dnf or yum:


$> dnf info cairo-devel # or yum info cairo-devel
Loaded plugins: auto-update-debuginfo, langpacks
Installed Packages
Name : cairo-devel
Arch : x86_64
Version : 1.13.1
Release : 0.1.git337ab1f.fc20
Size : 2.4 M
Repo : installed
From repo : fedora
Summary : Development files for cairo
URL : http://cairographics.org
License : LGPLv2 or MPLv1.1
Description : Cairo is a 2D graphics library designed to provide high-quality
: display and print output.
:
: This package contains libraries, header files and developer
: documentation needed for developing software which uses the cairo
: graphics library.

The important field here is the URL line - go there and you'll find the source tarballs. That should be true for most projects, but you may need to google for the package name and hope. Search for the tarball with the right version number and download it. On Debian and related distributions, cairo is provided by the libcairo2-dev package. Run apt-cache show on that package:


$> apt-cache show libcairo2-dev
Package: libcairo2-dev
Source: cairo
Version: 1.12.2-3
Installed-Size: 2766
Maintainer: Dave Beckett
Architecture: amd64
Provides: libcairo-dev
Depends: libcairo2 (= 1.12.2-3), libcairo-gobject2 (= 1.12.2-3),[...]
Suggests: libcairo2-doc
Description-en: Development files for the Cairo 2D graphics library
Cairo is a multi-platform library providing anti-aliased
vector-based rendering for multiple target backends.
.
This package contains the development libraries, header files needed by
programs that want to compile with Cairo.
Homepage: http://cairographics.org/
Description-md5: 07fe86d11452aa2efc887db335b46f58
Tag: devel::library, role::devel-lib, uitoolkit::gtk
Section: libdevel
Priority: optional
Filename: pool/main/c/cairo/libcairo2-dev_1.12.2-3_amd64.deb
Size: 1160286
MD5sum: e29852ae8e8e5510b00b13dbc201ce66
SHA1: 2ed3534d02c01b8d10b13748c3a02820d10962cf
SHA256: a6099cfbcc6bd891e347dd9abc57b7f137e0fd619deaff39606fd58f0cc60d27

In this case it's the Homepage line that matters, but the process of downloading tarballs is the same as above. For Arch users, the interesting line is URL as well:


$> pacman -Si cairo | grep URL
Repository : extra
Name : cairo
Version : 1.12.16-1
Description : Cairo vector graphics library
Architecture : x86_64
URL : http://cairographics.org/
Licenses : LGPL MPL
....

Now to the complicated bit: in most cases you shouldn't install the new version over the system version, because you may break other things. You're better off installing the dependency into a custom directory ("prefix") and pointing pkg-config to it. So let's say you downloaded the cairo tarball; now you need to run:


$> mkdir $HOME/dependencies/
$> tar xf cairo-someversion.tar.xz
$> cd cairo-someversion
$> autoreconf -ivf
$> ./configure --prefix=$HOME/dependencies
$> make && make install
$> export PKG_CONFIG_PATH=$HOME/dependencies/lib/pkgconfig:$HOME/dependencies/share/pkgconfig
# now go back to original project and run meson again

So you create a directory called dependencies and install cairo there. This will install cairo.pc as $HOME/dependencies/lib/pkgconfig/cairo.pc. Now all you need to do is tell pkg-config to look there as well - so you set PKG_CONFIG_PATH. If you re-run meson in the original project, pkg-config will find the new version and meson should succeed. If you have multiple packages that all require a newer version, install them into the same prefix so you only need to set PKG_CONFIG_PATH once. Remember that PKG_CONFIG_PATH must be set in the same shell you run meson from.

In the case of dependencies that use meson, you replace autotools and make with meson and ninja:


$> mkdir $HOME/dependencies/
$> tar xf foo-someversion.tar.xz
$> cd foo-someversion
$> meson builddir -Dprefix=$HOME/dependencies
$> ninja -C builddir install
$> export PKG_CONFIG_PATH=$HOME/dependencies/lib/pkgconfig:$HOME/dependencies/share/pkgconfig
# now go back to original project and run meson again

If you keep seeing the version error, the most common problem is that PKG_CONFIG_PATH isn't set in your current shell or doesn't point to the new cairo.pc file. A simple way to check is:


$> pkg-config --modversion cairo
1.13.1

Is the version number the one you installed or the system one? If it is the system one, you have a typo in PKG_CONFIG_PATH; re-set it. If it still doesn't work, do this:


$> cat $HOME/dependencies/lib/pkgconfig/cairo.pc
prefix=/usr
exec_prefix=/usr
libdir=/usr/lib64
includedir=/usr/include

Name: cairo
Description: Multi-platform 2D graphics library
Version: 1.13.1

Requires.private: gobject-2.0 glib-2.0 >= 2.14 [...]
Libs: -L${libdir} -lcairo
Libs.private: -lz -lz -lGL
Cflags: -I${includedir}/cairo

If the Version field matches what pkg-config returns, then you're set. If not, keep adjusting PKG_CONFIG_PATH until it works. There is a rare case where the Version field in the installed library doesn't match what the tarball said. That's a defective tarball and you should report this to the project, but don't worry, this hardly ever happens. In almost all cases, the cause is simply PKG_CONFIG_PATH not being set correctly. Keep trying :)

Let's assume you've managed to build the dependencies and want to run the newly built project. The only problem is that because you built against a newer library than the one on your system, you need to point the runtime linker at the new libraries:


$> export LD_LIBRARY_PATH=$HOME/dependencies/lib

and now you can, in the same shell, run your project.

Good luck!

08 Jul 2018 11:46pm GMT

02 Jul 2018

planet.freedesktop.org

Eric Anholt: 2018-07-02

For V3D, last week I spent mostly building up .CLIF dumping support so that I could talk to the HW team about the GPU hangs I've been experiencing. This involved reformatting my dump output, naming some missing (0-filled) fields of the GPU packets I emit, and restructuring the dumper so that I could dump the contents of each BO all at once (whereas before my dumps didn't really care about BOs and would just dump each memory area as it was found in the worklist).

The final result was a CLIF file I sent to the HW team of a simple transform feedback test that was locking up the GPU. They reported that I was missing a wait packet that was required, beyond what the HW specs said. (I had actually earlier tried emitting the wait, but typoed the condition for enabling it!) With that, I think I've cleared up all of the GPU hangs in piglit runs of khr_gles3.

On the VC4 front, the DSI panel enable/disable sequencing patch is in, which should enable other people to successfully build DSI panel drivers for Raspberry Pi. Boris has resubmitted the VC4 writeback (TXP) series now that the core writeback support is in, and there were just a couple of cleanups and it should now be ready to land.

02 Jul 2018 12:30am GMT