26 May 2015

feedplanet.freedesktop.org

Christian Schaller: Fedora Workstation 22 is out!

So we just got the second Fedora Workstation release out the door, and I am quite happy with it. We had quite a few last-minute hardware issues pop up, but thanks to the hard work of the team we were able to get them fixed in time for today's release.

Every release we do is of course the result of work we do as part of the Fedora Workstation team, but we also rely on a lot of other people upstream. I would like to especially call out Laurent Pinchart, the upstream maintainer of the UVC driver, who fixed a bug we discovered with some built-in webcams on newer laptops. So thank you Laurent! For any users of the Toshiba Portege z20t laptop, your rear camera now works thanks to Laurent :)

Since this was a relatively short development cycle, the release doesn't contain a huge amount of major changes, but our team did manage to sneak in a few nice new features. As you can see from this blog entry from Allan Day, the notification area re-design that he and Florian worked on has landed. It is a huge improvement in my opinion and will let us continue polishing the notification behavior of applications going forward.

We also have a bunch of improvements to the Nautilus file manager thanks to the work of Carlos Soriano. I recommend reading through his blog, as there is quite a sizeable collection of smaller fixes and improvements he was able to push through.

Another thing we got properly resolved for Fedora Workstation 22 is installing it in Boxes, our easy-to-use virtual machine manager, which we are putting resources into to make a great developer companion. While this is a smaller fix for Boxes and Fedora, we have some great Boxes features lined up for the next Fedora release, so stay tuned for more on that in another blog post.

Wayland support is also marching forward with this release. The GDM session you get upon installing Fedora Workstation 22 will now default to Wayland, but fall back to X if there is an issue. It is a first step towards migrating the default session to Wayland. We still have some work to do to get the Wayland session perfect, but we are closing the gap rapidly. Jonas Ådahl and Owen Taylor are pushing that effort forward.

Related to Wayland, this release introduces libinput as the input backend for both X and Wayland. While we already shipped libinput in Fedora 21, we wrote it with Wayland as the primary target; at the same time we realized that we didn't want to maintain two separate input systems going forward, so in this release X.org uses libinput as well. This means we have one library to work on now that will improve input in both your Wayland session and your X sessions.

This is also the first release featuring the new Adwaita theme for Qt. This release supports Qt4, but we hope to support Qt5 in an upcoming Fedora release and also include a dark variant of the theme for Qt applications. Martin Briza has been leading that effort.

Another nice little feature addition in this release is notifications for long-running jobs in the terminal. It was a feature we wanted to do from early on in the Fedora Workstation process, but it took quite a while to figure out the fine details of how we wanted to do it. Basically it means you no longer need to check in with your open terminals to see if a job has completed; instead you get a notification. So you can, for instance, start a compile and then not have to think about it again until you get the notification. We are still tweaking the notifications a little bit for this one, to make sure we cut the amount of unhelpful notifications down to an absolute minimum, so if you have feedback on how we can improve this feature we'd be happy to hear it. For example, we are thinking about turning off the notification for UI applications launched from a terminal.

Anyway, we have a lot of features in the pipeline for Fedora Workstation 23, since quite a few of the items planned for Fedora Workstation 22 didn't get completed in time, so I am looking forward to writing a blog post about those soon.

You can also read about this release in Fedora Magazine.

26 May 2015 6:36pm GMT

Julien Danjou: OpenStack Summit Liberty from a Ceilometer & Gnocchi point of view

Last week I was in Vancouver, BC for the OpenStack Summit, discussing the new Liberty version that will be released in 6 months.

I attended the summit mainly to discuss and follow up on new developments in Ceilometer, Gnocchi and Oslo. It has been a pretty good week and we were able to discuss and plan a few interesting things.

Ops feedback

We had half a dozen Ceilometer sessions, and the first one was dedicated to getting feedback from operators using Ceilometer. We had a few operators present, as well as a few members of the Ceilometer team. We had a constructive discussion, and my feeling is that operators struggle with two things so far: scaling Ceilometer storage, and keeping Ceilometer from killing the rest of OpenStack.

We discussed the first point as being addressed by Gnocchi, and I presented Gnocchi itself a bit, as well as how and why it will fix the storage scalability issues operators have encountered so far.

Ceilometer taking down the OpenStack installation is a more interesting problem. Ceilometer pollsters request information from Nova, Glance… to gather statistics. Until Kilo, Ceilometer used to do that regularly and at fixed intervals, causing load spikes in OpenStack. With the introduction of jitter in Kilo, this should be less of a problem. However, Ceilometer hits various endpoints in OpenStack that are poorly designed, and hitting those endpoints of Nova or other components triggers a lot of load on the platform. Unfortunately, this makes operators blame Ceilometer rather than the components that are guilty of poor design. We'd like to push forward on improving these components, but it's probably going to take a long time.

Componentisation

When I started the Gnocchi project last year, I pretty soon realized that we would be able to split Ceilometer itself into different smaller components that could work independently, while still being able to leverage each other. For example, Gnocchi can run standalone and store your metrics even if you don't use Ceilometer - or even OpenStack itself.

My fellow developer Chris Dent had the same idea about splitting Ceilometer a few months ago and drafted a proposal. The idea is to have Ceilometer split into different parts that people could assemble together or run on their own.

Interestingly enough, we had three 40-minute sessions planned to talk and debate about this division of Ceilometer, though we all agreed within 5 minutes that it was the right thing to do. Five more minutes later, we agreed on which parts to split out. The rest of the time was allocated to discussing various details of that split, and I committed to starting the work with the Ceilometer alarming subsystem.

I wrote a specification on the plane to Vancouver, which should be approved pretty soon now. I have already started on the implementation work. So fingers crossed, Ceilometer should have a new component in Liberty handling alarming on its own.

This would allow users, for example, to deploy only Gnocchi and Ceilometer alarming. They would be able to feed data to Gnocchi using their own system, and build alarms using the Ceilometer alarm subsystem relying on Gnocchi's data.

Gnocchi

We didn't have a dedicated Gnocchi slot - mainly because I indicated that I didn't feel we needed one. We discussed a few points over coffee anyway, and I've been able to draw up a few new ideas and changes I'd like to see in Gnocchi: mainly changing the API contract to be more asynchronous so we can support InfluxDB more correctly, and improving the Carbonara-based drivers (Carbonara being the library we created to manipulate timeseries) to be faster.

All of those - plus a few Oslo tasks I'd like to tackle - should keep me busy for the next cycle!

26 May 2015 9:39am GMT

25 May 2015

feedplanet.freedesktop.org

Vincent Untz: SUSE Ruling the Stack in Vancouver

Rule the Stack

Last week during the OpenStack Summit in Vancouver, Intel organized a Rule the Stack contest. That's the third one, after Atlanta a year ago and Paris six months ago. In case you missed the earlier episodes, SUSE won the two previous contests, with Dirk being pretty fast in Atlanta and Adam completing the HA challenge so we could keep the crown. So of course, we had to try again!

For this contest, the rules came with a list of penalties and bonuses which made it easier for people to participate. And indeed, there were quite a number of participants, with the schedule for booking slots being nearly full. While deploying Kilo was a goal, you could go with older releases and take a 10-minute penalty per release (so +10 minutes for Juno, +20 minutes for Icehouse, and so on). In a similar way, the organizers wanted to see some upgrades and encouraged that with a bonus that could significantly impact the results (-40 minutes) - nobody tried that, though.

And guess what? SUSE kept the crown again. But we also went ahead with a new challenge: outperforming everyone else not just once, but twice, with two totally different methods.

For the super-fast approach, Dirk again built an appliance that has everything pre-installed and that configures the software on boot. This is actually not too difficult thanks to the amazing Kiwi tool and all the knowledge we have accumulated through the years at SUSE about building appliances, and also the small scripts we use for the CI of our OpenStack packages. Still, it required some work to adapt the setup to the contest and also to make sure that our Kilo packages (which were brand new and without much testing) were fully working. The clock result was 9 minutes and 6 seconds, resulting in a negative time of minus 10 minutes and 54 seconds (yes, the text in the picture is wrong) after the bonuses. Pretty impressive.

But we also wanted to show that our product would fare well, so Adam and I started looking at this. We knew it couldn't be faster than the approach Dirk picked, so from the start we targeted second position. There was not much to do for this approach, since it was similar to what he did in Paris, and our SUSE OpenStack Cloud Admin appliance had been updated recently. Our first attempt failed miserably due to a nasty bug (which was actually caused by a unicode character in the ID of the USB stick we were using to install the OS... we fixed that bug later in the night). The second attempt went smoother and was actually much faster than we had anticipated: SUSE OpenStack Cloud deployed everything in 23 minutes and 17 seconds, which resulted in a final time of 10 minutes and 17 seconds after bonuses/penalties. And this was with a 10-minute penalty due to the use of Juno (as well as a couple of minutes lost debugging a setup issue that was just mispreparation on our side). A key contributor to this result is our use of Crowbar, which we've kept improving over time, and which really makes it easy and fast to deploy OpenStack.

Wall-clock time for SUSE OpenStack Cloud

These two results wouldn't have been possible without the help of Tom and Ralf, but also without the whole SUSE OpenStack Cloud team that works on a daily basis on our product to improve it and to adapt it to the needs of our customers. We really have an awesome team (and btw, we're hiring)!

For reference, three other contestants succeeded in deploying OpenStack, with the fastest of them ending at 58 minutes after bonuses/penalties. And as I mentioned earlier, there were even more contestants (including some who are not vendors of an OpenStack distribution), which is really good to see. I hope we'll see even more in Tokyo!

Results of the Rule the Stack contest

Also thanks to Intel for organizing this; I'm sure every contestant had fun and there was quite a good mood in the area reserved for the contest.

25 May 2015 10:58pm GMT

Ben Widawsky: Aliasing PPGTT [part 2]

Overview

Pictures are the right way to start.


Conceptual view of aliasing PPGTT bind/unbind

There is exactly one thing to get from the above drawing; everything else is just there to make it as close to fact as possible.

  1. The aliasing PPGTT (aliases|shadows|mimics) the global GTT.

The wordy overview

Support for Per-process Graphics Translation Tables (PPGTT) debuted on Sandybridge (GEN6). The features provided by the hardware are a superset of Aliasing PPGTT, which is entirely a software construct. The most obvious unimplemented feature is that the hardware supports multiple PPGTTs, while Aliasing PPGTT is a single instance of a PPGTT. Although not entirely true, it's easiest to think of the Aliasing PPGTT as a set of page tables that is maintained to have mappings identical to the global GTT (the picture above). There is more on this in the Summary section.

Until recently, aliasing PPGTT was the only way to make use of the hardware feature (unless you accidentally stepped into one of my personal branches). Aliasing PPGTT is implemented as a performance feature (more on this later). It was an important enabling step for us, and it provided a good foundation for the lower levels of the real PPGTT code.

In the following, I will be using the HSW PRMs as a reference. I'll also assume you've read, or understand part 1.

Selecting GGTT or PPGTT

Choosing between the GGTT and the Aliasing PPGTT is very straightforward. The choice is provided in several GPU commands; if there is no explicit choice, then there is some implicit behavior which is usually sensible. The most obvious command to be provided with a choice is MI_BATCH_BUFFER_START. When a batchbuffer is submitted, the driver sets a single bit that determines whether the batch will execute out of the GGTT or an Aliasing PPGTT1. Several other commands, like PIPE_CONTROL, also have a bit to direct which address space to use for the reads or writes that the GPU command will perform.

Architecture

The names for all the page table data structures in hardware are the same as for the IA CPU. You can see the Intel® 64 and IA-32 Architectures Software Developer Manuals for more information. (At the time of this post: page 1988, Vol. 3, 4.2 HIERARCHICAL PAGING STRUCTURES: AN OVERVIEW). I don't want to rehash the HSW PRMs too much, and I am probably not allowed to copy the diagrams anyway. However, for the sake of having a consolidated post, I will rehash the most pertinent parts.

There is one conceptual Page Directory for a PPGTT - the docs call this a set of Page Directory Entries (PDEs), however since they are contiguous, calling it a Page Directory makes a lot of sense to me. In fact, going back to the Ironlake docs, that seems to be the case. So there is one page directory with up to 512 entries, each pointing to a page table. There are several good diagrams in the PRMs which I won't bother redrawing here2

[table "" not found /]

31:12, 11:04, 03:02, 01, 0
Physical Page Address 31:12, Physical Page Address 39:32, Rsvd, Page size (4K/32K), Valid
[/easytable]

[table "" not found /]

31:12, 11, 10:04, 03:01, 0
Physical Page Address 31:12, Cacheability Control[3], Physical Page Address 38:32, Cacheability Control[2:0], Valid
[/easytable]

There are some things we can get from this for those too lazy to click on the links to the docs.

  1. PPGTT page tables exist in physical memory.
  2. PPGTT PTEs have the exact same layout as GGTT PTEs.
  3. PDEs don't have cache attributes (more on this later).
  4. There exists support for big pages3

With the above definitions, we now can derive a lot of interesting attributes about our GPU. As already stated, the PPGTT is a two-level page table (I've not yet defined the size).

With all this information, I now present you a slightly more accurate picture.


An object with an aliased PPGTT mapping

Size

PP_DCLV - PPGTT Directory Cacheline Valid Register: As the spec tells us, "This register controls update of the on-chip PPGTT Directory Cache during a context restore." This statement is directly contradicted in the very next paragraph, but the important part is the bit about the on-chip cache. This register also determines the amount of virtual address space covered by the PPGTT. The documentation for this register is pretty terrible, so a table is actually useful in this case.

[table "" not found /]

63:32,31:0

MBZ, "PPGTT Directory Cache Restore [1..32] 16 entries"

[/easytable]

[table "" not found /]

31, 30, …, 1, 0

PDE[511:496] enable, PDE [495:480] enable,…,PDE[31:16] enable, PDE[15:0] enable

[/easytable]

The, "why" is not important. Each bit represents a cacheline of PDEs, which is how the register gets its name4. A PDE is 4 bytes, there are 64b in a cacheline, so 64/4 = 16 entries per bit. We now know how much address space we have.

512 PDEs * 1024 PTEs per PT * 4096 PAGE_SIZE = 2GB
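As a quick sanity check of the arithmetic (a throwaway snippet of mine, not driver code): each PP_DCLV bit covers 16 PDEs, each PDE covers 1024 pages, and 32 bits of the register cover the full 2GB.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
        const uint64_t pde_bytes = 4, cacheline_bytes = 64;
        const uint64_t ptes_per_pt = 1024, page_size = 4096;

        uint64_t pdes_per_bit  = cacheline_bytes / pde_bytes;            /* 16 */
        uint64_t bytes_per_bit = pdes_per_bit * ptes_per_pt * page_size; /* 64MB */
        uint64_t total_bytes   = 32 * bytes_per_bit;                     /* 2GB */

        printf("%llu PDEs/bit, %lluMB/bit, %lluGB total\n",
               (unsigned long long)pdes_per_bit,
               (unsigned long long)(bytes_per_bit >> 20),
               (unsigned long long)(total_bytes >> 30));
        return 0;
}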

Location

PP_DIR_BASE: Sadly, I cannot find the definition of this in the public HSW docs. However, I did manage to find a definition in the Ironlake docs - yay me. There are several mentions in more recent docs, and it works the same way as outlined on Ironlake. Quoting the docs again: "This register contains the offset into the GGTT where the (current context's) PPGTT page directory begins." We learn a very important caveat about the PPGTT here - the PPGTT PDEs reside within the GGTT.

Programming

With these two things, we now have the ability to program the location, and size (and get the thing to load into the on-chip cache). Here is current i915 code which switches the address space (with simple comments added). It's actually pretty ho-hum.

...
ret = intel_ring_begin(ring, 6);
if (ret)
        return ret;

intel_ring_emit(ring, MI_LOAD_REGISTER_IMM(2));
intel_ring_emit(ring, RING_PP_DIR_DCLV(ring));
intel_ring_emit(ring, PP_DIR_DCLV_2G);       // program size
intel_ring_emit(ring, RING_PP_DIR_BASE(ring));
intel_ring_emit(ring, get_pd_offset(ppgtt)); // program location
intel_ring_emit(ring, MI_NOOP);
intel_ring_advance(ring);
...

As you can see, we program the size to always be the full amount (in fact, I fixed this a long time ago, but never merged it). Historically, the offset was at the top of the GGTT, but with my PPGTT series merged, that is abstracted out, and the simple get_pd_offset() macro gets the offset within the GGTT. The intel_ring_emit() stuff is there because the docs recommend setting the registers via the GPU's LOAD_REGISTER_IMMEDIATE command, though empirically it seems to be fine if we simply write the registers via MMIO (for Aliasing PPGTT). See my previous blog post if you want more info about command execution in the GPU's ringbuffer. If it's easier, just pretend it's 2 MMIO writes.

Initialization

All of the resources are allocated and initialized upfront. There are 3 main steps. Note that the following comes from a relatively new kernel, and I have already submitted patches which change some of the cosmetics. However, the concepts haven't changed for pre-gen8.

1. Allocate space in the GGTT for the PPGTT PDEs

ret = drm_mm_insert_node_in_range_generic(&dev_priv->gtt.base.mm,
                                          &ppgtt->node, GEN6_PD_SIZE,
                                          GEN6_PD_ALIGN, 0,
                                          0, dev_priv->gtt.base.total,
                                          DRM_MM_TOPDOWN);

2. Allocate the page tables

for (i = 0; i < ppgtt->num_pd_entries; i++) {
        ppgtt->pt_pages[i] = alloc_page(GFP_KERNEL);
        if (!ppgtt->pt_pages[i]) {
                gen6_ppgtt_free(ppgtt);
                return -ENOMEM;
        }
}

3. [possibly] IOMMU map the pages

for (i = 0; i < ppgtt->num_pd_entries; i++) {
        dma_addr_t pt_addr;

        pt_addr = pci_map_page(dev->pdev, ppgtt->pt_pages[i], 0, 4096,
                               PCI_DMA_BIDIRECTIONAL);
        ...
}

As the system binds and unbinds objects in the aliasing PPGTT, it simply writes the PTEs for the given object (possibly spanning multiple page tables). The PDEs do not change. When not in use, the PDEs are mapped to a scratch page, as are the PTEs.
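To illustrate what that bind path boils down to, here is a rough sketch of my own (made-up names, not the actual i915 functions) that writes PTEs for an object which may span several page tables, using the PTE layout from the table earlier (low address bits 31:12 plus the valid bit; the 38:32 address bits and cache attributes are left out for brevity):

#include <stdint.h>

#define GEN6_PTES_PER_PT 1024

/* page_tables[i] is a CPU mapping of the i-th page table;
 * page_addrs[] holds the DMA addresses of the object's backing pages. */
static void aliasing_ppgtt_insert_entries(uint32_t **page_tables,
                                          const uint64_t *page_addrs,
                                          unsigned int first_entry,
                                          unsigned int num_entries)
{
        unsigned int pt  = first_entry / GEN6_PTES_PER_PT;
        unsigned int pte = first_entry % GEN6_PTES_PER_PT;
        unsigned int i;

        for (i = 0; i < num_entries; i++) {
                /* address bits 31:12 plus the valid bit */
                page_tables[pt][pte] = (uint32_t)(page_addrs[i] & 0xfffff000u) | 1;
                if (++pte == GEN6_PTES_PER_PT) {  /* spill into the next page table */
                        pte = 0;
                        pt++;
                }
        }
}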

IOMMU

As we saw in step 3 above, the page tables may be mapped by the IOMMU. This is one important caveat that I didn't fully understand early on, so I wanted to recap a bit. Recall that the GGTT is allocated out of system memory during the boot firmware's initialization. This means that as long as Linux treats that memory as special, everything will just work (just don't look for IOMMU-implicated bugs on our bugzilla). The page tables, however, are special because they get allocated after Linux is already running, and the IOMMU is potentially managing the memory. In other words, we don't want to write the physical address to the PDEs, we want to write the DMA address. I'll defer to wikipedia again for the description of an IOMMU - that's all there is to it. It tripped me up the first time I saw it because I hadn't dealt with this kind of thing before. Our PTEs have worked the same way for a very long time when mapping the BOs, but those have somewhat hidden details because they use the scatter-gather functions.

Feel free to ask questions in the comments if you need more clarity - I'd probably need another diagram to accommodate.

Cached page tables

Let me be clear, I favored writing a separate post for the Aliasing PPGTT because it gets a lot of the details out of the way for the post about Full PPGTT. However, the entire point of this feature is to get a [to date, unmeasured] performance win. Let me explain… Notice bits 4:3 of the ECOCHK register. Similarly in the i915 code:

ecochk = I915_READ(GAM_ECOCHK);
if (IS_HASWELL(dev)) {
        ecochk |= ECOCHK_PPGTT_WB_HSW;
} else {
        ecochk |= ECOCHK_PPGTT_LLC_IVB;
        ecochk &= ~ECOCHK_PPGTT_GFDT_IVB;
}
I915_WRITE(GAM_ECOCHK, ecochk);

What these bits do is tell the GPU whether (and how) to cache the PPGTT page tables. Following the Haswell case, the code is saying to map the PPGTT page table with write-back caching policy. Since the writes for Aliasing PPGTT are only done at initialization, the policy is really not that important.

Below is how I've chosen to distinguish the two. I have no evidence that this is actually what happens, but it seems about right.


Flow chart for GPU GGTT memory access. Red means slow.

Flow chart for GPU PPGTT memory access. Red means slow.

Red means slow. The point which was hopefully made clear above is that when you miss the TLB on a GGTT access, you need to fetch the entry from memory, which has a relatively high latency. When you miss the TLB on a PPGTT access, you have two caches (the special PDE cache for PPGTT, and LLC) which are backing the request. Note there is an intentional bug in the second diagram - you may miss the LLC on the PTE fetch also. I was trying to keep things simple, and show the hopeful case.

Because of this, all mappings which do not require GGTT mappings get mapped to the aliasing PPGTT.

Distinctions from the GGTT

At this point I hope you're asking why we need the global GTT at all. There are a few limited cases where the hardware is incapable of using a per-process address space (or where it is undesirable).

A brief description of why, with all the current callers of the global pin interface.

GEN8 disambiguation

Off the top of my head, the list of some of the changes on GEN8 which will get more detail in a later post. These changes are all upstream from the original Broadwell integration.

Summary

There's actually an interesting thing that you start to notice after reading Distinctions from the GGTT: just about everything mapped into the GGTT shouldn't be mapped into the PPGTT. We already stated that we try to map everything else into the PPGTT. In other words, the set of objects mapped into the GGTT and the set of objects mapped into the PPGTT are disjoint5. The patches to make this work are not yet merged. I'd put an image here to demonstrate, but I am feeling lazy and I really want to get this post out today.

Recapping:

Like last time, links to all the SVGs I've created. Use them as you like.
https://bwidawsk.net/blog/wp-content/uploads/2014/06/appgtt_concept.svg
https://bwidawsk.net/blog/wp-content/uploads/2014/06/real_ppgtt.svg
https://bwidawsk.net/blog/wp-content/uploads/2014/06/ggtt_flow.svg
https://bwidawsk.net/blog/wp-content/uploads/2014/06/ppgtt_flow.svg

Download PDF

  1. Actually it will use whatever the current PPGTT is, but for this post, that is always the Aliasing PPGTT

  2. Page walk, Two-Level Per-Process Virtual Memory

  3. Big pages have the same goal as they do on the CPU - to reduce TLB pressure. To date, there has been no implementation of big pages for GEN (though a while ago I started putting something together). There has been some anecdotal evidence that there isn't a big win to be had for many workloads we care about, and so this remains a low priority.

  4. This register thus allows us to limit, or make a sparse address space for the PPGTT. This mechanism is not used, even in the full PPGTT patches

  5. There actually is a case on GEN6 which requires both. Currently this need is implemented by drivers/gpu/drm/i915/i915_gem_execbuffer.c: i915_gem_execbuffer_relocate_entry(

25 May 2015 3:37pm GMT

Ben Widawsky: Future PPGTT [part 4] (Dynamic page table allocations, 64 bit address space, GPU “mirroring”, and yeah, something about relocs too)

Preface

GPU mirroring provides a mechanism to have the CPU and the GPU use the same virtual address for the same physical (or IOMMU) page. An immediate result of this is that relocations can be eliminated. There are a few derivative benefits from the removal of the relocation mechanism, but it really all boils down to that. Other people call it other things, but I chose this name before I had heard other names. SVM would probably have been a better name had I read the OCL spec sooner. This is not an exclusive feature restricted to OpenCL. Any GPU client will hopefully eventually have this capability provided to them.

If you're going to read any single PPGTT post of this series, I think it should not be this one. I was not sure I'd write this post when I started documenting the PPGTT (part 1, part2, part3). I had hoped that any of the following things would have solidified the decision by the time I completed part3.

  1. CODE: The code is not merged, not reviewed, and not tested (by anyone but me). There's no indication about its "upstreamability". What this means is that if you read my blog to understand how the i915 driver currently works, you'll be taking a crap-shoot on this one.
  2. DOCS: The Broadwell public Programmer Reference Manuals are not available. I can't refer to them directly, I can only refer to the code.
  3. PRODUCT: Broadwell has not yet shipped. My ulterior motive had always been to rally the masses to test the code. Without product, that isn't possible.

Concomitant with these facts, my memory of the code and interesting parts of the hardware it utilizes continues to degrade. Ultimately, I decided to write down what I can while it's still fresh (for some very warped definition of "fresh").

Goal

GPU mirroring is the goal. Dynamic page table allocations are very valuable by themselves. Using dynamic page table allocations can dramatically conserve system memory when running with multiple address spaces (part 3 if you forgot), which is something that should become pretty common shortly. Consider for a moment a Broadwell legacy 32b system (more details later). You would require about 8MB of page tables to map one page of system memory. With dynamic page table allocations, this is reduced to 8K. Dynamic page table allocations are also an indirect requirement for implementing a 64b virtual address space. Having a 64b virtual address space is a pretty unremarkable feature by itself - on current workloads [that I am aware of] it provides no real benefit. Supporting 64b did require cleaning up the infrastructure code quite a bit though, and should anything from the series get merged, I believe the result is a huge improvement in code readability.
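As a rough check of those numbers (my arithmetic, assuming 8-byte GEN8 PTEs and 4KiB pages, in the same spirit as the LaTeX later in this post): pre-allocating page tables for the whole 4GiB legacy address space costs about 8MiB of PTEs even if only a single page is mapped, whereas allocating on demand needs just one page directory page plus one page table page.

\frac{4\,\text{GiB}}{4\,\text{KiB}/\text{page}} \times 8\,\frac{\text{bytes}}{\text{PTE}} = 2^{20} \times 8\,\text{bytes} = 8\,\text{MiB} \\ \text{on demand: } 1\,\text{PD page} + 1\,\text{PT page} = 2 \times 4\,\text{KiB} = 8\,\text{KiB}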

Current Status

I briefly mentioned dogfooding these several months ago. At that time I only had the dynamic page table allocations working on GEN7. The fallout wasn't nearly as bad as I was expecting, but things were far from stable. There was a second posting which is much more stable and contains support for everything through Broadwell. To summarize:

Feature              | Status           | TODO
Dynamic page tables  | Implemented      | Test and fix bugs
64b Address space    | Implemented      | Test and fix bugs
GPU mirroring        | Proof of Concept | Decide on interface; Implement interface.1

Testing has been limited to just one machine, mine, when I don't have a million other things to do. With that caveat, on top of my last PPGTT stabilization patches things look pretty stable.

Present: Relocations

Throughout many of my previous blog posts I've gone out of my way to avoid explaining relocations. My reluctance was because explaining the mechanics is quite tedious, not because it is a difficult concept. It's impossible [and extremely unfortunate for my weekend] to make the case for why these new PPGTT features are cool without touching on relocations at least a little bit. The following picture exemplifies both the CPU and GPU mapping the same pages with the current relocation mechanism.

Current PPGTT support

To get to the above state, something like the following would happen.

  1. Create BOx
  2. Create BOy
  3. Request that BOx be uncached, via the DRM_IOCTL_I915_GEM_SET_CACHING ioctl.
  4. Do one of the aforementioned operations on BOx and BOy
  5. Perform execbuf2.

Accesses to the BO from the CPU require having a CPU virtual address that eventually points to the pages representing the BO2. The GPU has no notion of CPU virtual addresses (unless you have a bug in your code). Inevitably, all the GPU really cares about is physical pages; which ones. On the other hand, userspace needs to build up a set of GPU commands which sometimes need to be aware of the absolute graphics address.

Several commands do not need an absolute address. 3DSTATE_VS, for instance, does not need to know anything about where the Scratch Space Base Offset is actually located; it only needs to provide an offset relative to the General State Base Address. The General State Base Address, however, does need to be known by userspace, and it is programmed via STATE_BASE_ADDRESS.

Using the relocation mechanism gives userspace a way to inform the i915 driver about the BOs which need an absolute address. The handles plus some information about the GPU commands that need absolute graphics addresses are submitted at execbuf time. The kernel will make a GPU mapping for all the pages that constitute the BO, process the list of GPU commands needing update, and finally submit the work to the GPU.
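For the curious, here is a hedged sketch of what one of those relocation records looks like from userspace. The struct and the domain flag are the real i915 uapi (drm_i915_gem_relocation_entry in i915_drm.h); the helper and its arguments are mine, and the surrounding execbuffer setup is omitted:

#include <stdint.h>
#include <drm/i915_drm.h>

/* "Patch the dword at batch_offset inside the batch BO so it holds the final
 * GPU address of target_handle (plus delta). presumed_offset is userspace's
 * guess, which the kernel fixes up if the BO ended up somewhere else." */
static struct drm_i915_gem_relocation_entry
make_reloc(uint32_t target_handle, uint64_t batch_offset)
{
        struct drm_i915_gem_relocation_entry reloc = {
                .target_handle   = target_handle,
                .delta           = 0,
                .offset          = batch_offset,
                .presumed_offset = 0,
                .read_domains    = I915_GEM_DOMAIN_RENDER,
                .write_domain    = 0,
        };
        return reloc;
}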

Future: No relocations

GPU Mirroring

The diagram above demonstrates the goal: symmetric mappings to a BO on both the GPU and the CPU. There are benefits to ditching relocations. One of the nice side effects of getting rid of relocations is that it allows us to drop the use of the DRM memory manager and simply rely on malloc as the address space allocator. The DRM memory allocator does not get the same amount of attention with regard to performance as malloc does. Even if it performed as ideally as possible, it would still be a superfluous CPU workload. Other people can probably explain the CPU overhead in better detail. Oh, and OpenCL 2.0 requires it.

"OpenCL 2.0 adds support for shared virtual memory (a.k.a. SVM). SVM allows the host and 
kernels executing on devices to directly share complex, pointer-containing data structures such 
as trees and linked lists. It also eliminates the need to marshal data between the host and devices. 
As a result, SVM substantially simplifies OpenCL programming and may improve performance."

Makin' it Happen

64b

As I've already mentioned, the most obvious requirement is expanding the GPU address space to match the CPU.

Page Table Hierarchy

If you have taken any sort of Operating Systems class, or read up on Linux MM within the last 10 years or so, the above drawing should be incredibly unremarkable. If you have not, you're probably left with a big 'WTF' face. I probably can't help you if you're in the latter group, but I do sympathize. For the other camp: Broadwell brought 4 level page tables that work exactly how you'd expect them to. Instead of the x86 CPU's CR3, GEN GPUs have PML4. When operating in legacy 32b mode, there are 4 PDP registers that each point to a page directory and therefore map 4GB of address space3. The register is just a simple logical address pointing to a page directory. The actual changes in hardware interactions are trivial on top of all the existing PPGTT work.

The keen observer will notice that there are only 256 PML4 entries. This has to do with the way in which we've come about 64b addressing in x86. This wikipedia article explains it pretty well, and has links.
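To make the four-level split concrete, here is an illustrative helper of my own (mirroring the shift-and-mask style of the sample code later in this post) showing how a 48-bit GPU virtual address decomposes into the four 9-bit indices plus the 12-bit page offset:

#include <stdint.h>

/* [47:39] PML4 entry, [38:30] PDP entry, [29:21] PD entry,
 * [20:12] PT entry, [11:0] offset within the 4KB page. */
static inline void gen8_split_address(uint64_t address, int *pml4e, int *pdpe,
                                      int *pde, int *pte)
{
        *pte   = (int)((address >> 12) & 0x1ff);
        *pde   = (int)((address >> 21) & 0x1ff);
        *pdpe  = (int)((address >> 30) & 0x1ff);
        *pml4e = (int)((address >> 39) & 0x1ff);
}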

"This will take one week. I can just allocate everything up front." (Dynamic Page Table Allocation)

Funny story. I was asked to estimate how long it would take me to get this GPU mirror stuff in shape for a very rough proof of concept. "One week. I can just allocate everything up front." If what I have now counts as "done", then I was off by 10x.

Where I went wrong in my estimate was math. If you consider the above, you quickly see why allocating everything up front is a terrible idea and flat out impossible on some systems.

Page for the PML4
512 PDP pages per PML4 (512, ok we actually use 256)
512 PD pages per PDP (256 * 512 pages for PDs)
512 PT pages per PD (256 * 512 * 512 pages for PTs)
(256 * 512^2 + 256 * 512 + 256 + 1) * PAGE_SIZE = ~256GB = oops

Dissimilarities to x86

First and foremost, there are no GPU page faults to speak of. We cannot demand allocate anything in the traditional sense. I was naive though, and one of the first thoughts I had was: the Linux kernel [heck, just about everything that calls itself an OS] manages 4 level pages tables on multiple architectures. The page table format on Broadwell is remarkably similar to x86 page tables. If I can't use the code directly, surely I can copy. Wrong.

Here is some code from the Linux kernel which demonstrates how you can get a PTE for a given address in Linux.

typedef unsigned long   pteval_t;
typedef struct { pteval_t pte; } pte_t;

static inline pteval_t native_pte_val(pte_t pte)
{
        return pte.pte;
}

static inline pteval_t pte_flags(pte_t pte)
{
        return native_pte_val(pte) & PTE_FLAGS_MASK;
}

static inline int pte_present(pte_t a)
{
        return pte_flags(a) & (_PAGE_PRESENT | _PAGE_PROTNONE |
                               _PAGE_NUMA);
}
static inline pte_t *pte_offset_kernel(pmd_t *pmd, unsigned long address)
{
        return (pte_t *)pmd_page_vaddr(*pmd) + pte_index(address);
}
#define pte_offset_map(dir, address) pte_offset_kernel((dir), (address))

#define pgd_offset(mm, address) ( (mm)->pgd + pgd_index((address)))
static inline pud_t *pud_offset(pgd_t *pgd, unsigned long address)
{
        return (pud_t *)pgd_page_vaddr(*pgd) + pud_index(address);
}
static inline pmd_t *pmd_offset(pud_t *pud, unsigned long address)
{
        return (pmd_t *)pud_page_vaddr(*pud) + pmd_index(address);
}

/* My completely fabricated example of finding page presence */
pgd_t *pgd;
pud_t *pud;
pmd_t *pmd;
pte_t *ptep;
struct mm_struct *mm = current->mm;
unsigned long address = 0xdefeca7e;

pgd = pgd_offset(mm, address);
pud = pud_offset(pgd, address);
pmd = pmd_offset(pud, address);
ptep = pte_offset_map(pmd, address);
printk("Page is present: %s\n", pte_present(*ptep) ? "yes" : "no");

x86 page table code has two very distinct properties that do not exist here (warning, this is slightly hand-wavy).

  1. The kernel knows exactly where in physical memory the page tables reside4. On x86, it need only read CR3. We don't know where our page tables reside in physical memory because of the IOMMU. When VT-d is enabled, the i915 driver only knows the DMA address of the page tables.
  2. There is a strong correlation between a CPU process and an mm (set of page tables). Keeping mappings around of the page tables is easy to do if you don't want to take the hit to map them every time you need to look at a PTE.

If the Linux kernel needs to find out whether a page is present without taking a fault, it need only look to one of those two options. After about a week of making the IOMMU driver do things it shouldn't do, and trying to push the square block through the round hole, I gave up on reusing the x86 code.

Why Do We Actually Need Page Table Tracking?

The IOMMU interfaces were not designed to pull a physical address from a DMA address. Pre-allocation is right out. It's difficult to try to get the instantaneous state of the page tables…

Another thought I had very early on was that tracking could be avoided if we just never tore down page tables. I knew this wasn't a good solution, but at that time I just wanted to get the thing working and didn't really care if things blew up spectacularly after running for a few minutes. There is actually a really easy set of operations that show why this won't work. For the following, think of the four level page tables as arrays. ie.

  1. [mesa] Create a 2M sized BO. Write to it. Submit it via execbuffer
  2. [i915] See new BO in the execbuffer list. Allocate page tables for it…
    1. [DRM] Find that address 0 is free.
    2. [i915] Allocate PDP for PML4[0]
    3. [i915] Allocate PD for PDP[0][0]
    4. [i915] Allocate PT for PD[0][0][0]
    5. [i915] (condensed) Set pointers from PML4->PDP->PD->PT
    6. [i915] Set the 512 PTEs PT[0][0][0][0][511-0] to point to the BO's backing page.
  3. [i915] Dispatch work to the GPU on behalf of mesa.
  4. [i915] Observe the hardware has completed
  5. [mesa] Create a 4k sized BO. Write to it. Submit both BOs via execbuffer.
  6. [i915] See new BO in the execbuffer list. Allocate page tables for it…
    1. [DRM] Find that address 0x200000 is free.
    2. [i915] Allocate PDP[0][0], PD[0][0][0], PT[0][0][0][1].
    3. Set pointers… Wait. Is PDP[0][0] allocated already? Did we already set pointers? I have no freaking idea!
    4. Abort.

Page Tables Tracking with Bitmaps

Okay, I could have used a sentinel for empty entries: it is possible to achieve the same thing by using a sentinel value (point the page table entry at the scratch page). Implementing this involves reading back potentially large amounts of data from the page tables, which would be slow. It should work though. I didn't try it.

After I had determined I couldn't reuse the x86 code, and that I needed some way to track which page table elements were allocated, I was pretty set on using bitmaps for tracking usage. The idea of a hash table came and went - none of the upsides of a hash table are useful here, but all of the downsides are present (space). Bitmaps were sort of the default case. Unfortunately though, I did some math at this point - notice the LaTeX!
\frac{2^{47}\,\text{bytes}}{4096\,\text{bytes}/\text{page}} = 34359738368\ \text{pages} \\ 34359738368\ \text{pages} \times \frac{1\,\text{bit}}{1\,\text{page}} = 34359738368\ \text{bits} \\ 34359738368\ \text{bits} \times \frac{1\,\text{byte}}{8\,\text{bits}} = 4294967296\ \text{bytes}
That's 4GB simply to track every page. There's some more overhead because page [tables, directories, directory pointers] are also tracked.
256\ \text{entries} + (256 \times 512)\ \text{entries} + (256 \times 512^2)\ \text{entries} = 67240192\ \text{entries} \\ 67240192\ \text{entries} \times \frac{1\,\text{bit}}{1\,\text{entry}} = 67240192\ \text{bits} \\ 67240192\ \text{bits} \times \frac{1\,\text{byte}}{8\,\text{bits}} = 8405024\ \text{bytes} \\ 4294967296\,\text{bytes} + 8405024\,\text{bytes} = 4303372320\ \text{bytes} \\ 4303372320\,\text{bytes} \times \frac{1\,\text{GB}}{1073741824\,\text{bytes}} = 4.0078\,\text{GB}

I can't remember whether I had planned to statically pre-allocate the bitmaps, or whether I was just so caught up in the details that I couldn't see the big picture. I remember thinking, 4GB just for the bitmaps, that will never fly. I probably spent a week trying to figure out a better solution. When we invent time travel, I will go back and talk to my former self: 4GB of bitmap tracking if you're using 128TB of memory is inconsequential - that is about 0.003% of the memory used by the GPU. Hopefully you didn't fall into that trap, and I just wasted your time, but there it is anyway.

Sample code to walk the page tables

This code does not actually exist, but it is very similar to the real code. The following shows how one would "walk" to a specific address allocating the necessary page tables and setting the bitmaps along the way. Teardown is a bit harder, but it is similar.

static struct i915_pagedirpo *
alloc_one_pdp(struct i915_pml4 *pml4, int entry)
{
        ...
}

static struct i915_pagedir *
alloc_one_pd(struct i915_pagedirpo *pdp, int entry)
{
        ...
}

static struct i915_pagetab *
alloc_one_pt(struct i915_pagedir *pd, int entry)
{
        ...
}

/**
 * alloc_page_tables - Allocate all page tables for the given virtual address.
 *
 * This will allocate all the necessary page tables to map exactly one page at
 * @address. The page tables will not be connected, and the PTE will not point
 * to a page.
 *
 * @ppgtt:      The PPGTT structure encapsulating the virtual address space.
 * @address:    The virtual address for which we want page tables.
 *
 */
static void
alloc_page_tables(ppgtt, unsigned long address)
{
        struct i915_pagetab *pt;
        struct i915_pagedir *pd;
        struct i915_pagedirpo *pdp;
        struct i915_pml4 *pml4 = &ppgtt->pml4; /* Always there */

        int pml4e = (address >> GEN8_PML4E_SHIFT) & GEN8_PML4E_MASK;
        int pdpe = (address >> GEN8_PDPE_SHIFT) & GEN8_PDPE_MASK;
        int pde = (address >> GEN8_PDE_SHIFT) & I915_PDE_MASK;
        int pte = (address >> GEN8_PTE_SHIFT) & GEN8_PTE_MASK; /* bits 20:12 */

        if (!test_bit(pml4e, pml4->used_pml4es))
                goto alloc_pdp;

        pdp = pml4->pagedirpo[pml4e];
        if (!test_bit(pdpe, pdp->used_pdpes))
                goto alloc_pd;

        pd = pdp->pagedirs[pdpe];
        if (!test_bit(pde, pd->used_pdes))
                goto alloc_pt;

        pt = pd->page_tables[pde];
        if (test_bit(pte, pt->used_ptes))
                return;

alloc_pdp:
        pdp = alloc_one_pdp(pml4, pml4e);
        set_bit(pml4e, pml4->used_pml4es);
alloc_pd:
        pd = alloc_one_pd(pdp, pdpe);
        set_bit(pdpe, pdp->used_pdpes);
alloc_pt:
        pt = alloc_one_pt(pd, pde);
        set_bit(pde, pd->used_pdes);
}

Here is a picture which shows the bitmaps for the 2 allocation example above.

Bitmaps tracking page tables

The GPU mirroring interface

I really don't want to spend too much time here. In other words, no more pictures. As I've already mentioned, the interface was designed for a proof of concept which already had code using userptr. The shortest path was to simply reuse the interface.

In the patches I've submitted, 2 changes were made to the existing userptr interface (which wasn't then, but is now, merged upstream). I added a context ID, and the flag to specify you want mirroring.

struct drm_i915_gem_userptr {
        __u64 user_ptr;
        __u64 user_size;
        __u32 ctx_id;
        __u32 flags;
#define I915_USERPTR_READ_ONLY          (1<<0)
#define I915_USERPTR_GPU_MIRROR         (1<<1)
#define I915_USERPTR_UNSYNCHRONIZED     (1<<31)
        /**
         * Returned handle for the object.
         *
         * Object handles are nonzero.
         */
        __u32 handle;
        __u32 pad;
};

The context argument is to tell the i915 driver for which address space we'll be mirroring the BO. Recall from part 3 that a GPU process may have multiple contexts. The flag is simply to tell the kernel to use the value in user_ptr as the address to map the BO in the virtual address space of the GEN GPU. When using the normal userptr interface, the i915 driver will pick the GPU virtual address.
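As a usage sketch only (the struct above is a testing vehicle, not the merged uapi - DRM_IOCTL_I915_GEM_USERPTR exists upstream, but the ctx_id field and the mirror flag do not), calling it would look roughly like this, assuming the modified struct definition is in scope:

#include <stdint.h>
#include <xf86drm.h>

/* Hypothetical helper: mirror an existing CPU allocation into the GPU address
 * space of the given context, at the same virtual address. */
static int map_mirrored(int fd, void *buf, uint64_t size, uint32_t ctx_id)
{
        struct drm_i915_gem_userptr arg = {
                .user_ptr  = (uintptr_t)buf,          /* CPU pointer doubles as GPU address */
                .user_size = size,
                .ctx_id    = ctx_id,                  /* which address space to mirror into */
                .flags     = I915_USERPTR_GPU_MIRROR, /* proposed flag, not merged */
        };

        if (drmIoctl(fd, DRM_IOCTL_I915_GEM_USERPTR, &arg))
                return -1;
        return (int)arg.handle;                       /* GEM handle for the mirrored BO */
}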

What should be: soft pin

There hasn't been too much discussion here, so it's hard to say. I believe the trend of the discussion (and the author's personal preference) would be to add flags to the existing execbuf relocation mechanism. The flag would tell the kernel not to relocate the BO, and to use the presumed_offset field that already exists. This is sometimes called "soft pin". It is a bit of a chicken and egg problem, since the amount of work in userspace to make this useful is non-trivial, and the feature can't be merged until there is an open source userspace. Stay tuned. Perhaps I'll update the blog as the story unfolds.

Wrapping it up (all 4 parts)

As usual, please report bugs or ask questions.

So with the 4 parts you should understand how the GPU interacts with system memory. You should know what the Global GTT is, why it still exists, and how it works. You might recall what a PPGTT is, and the intricacies of multiple address spaces. Hopefully you remember what you just read about 64b and GPU mirroring. Expect a rebased patch series from me soon with all that was discussed (quite a bit has changed around me since my original posting of the patches).

This is the last post I will be writing on how GEN hardware interfaces with system memory, and how that relates to the i915 driver. Unlike the Rocky movie series, I will stop at the 4th. Like the Rocky movie series, I hope this is the best. Yes, I just went there.

Unlike the usual, "buy me a beer if you liked this", I would like to buy you a beer if you read it and considered giving me feedback. So if you know me, or meet me somewhere, feel free to reclaim the voucher.

Image links

The images I've created. Feel free to do with them as you please.
https://bwidawsk.net/blog/wp-content/uploads/2014/07/legacy.svg
https://bwidawsk.net/blog/wp-content/uploads/2014/07/mirrored.svg
https://bwidawsk.net/blog/wp-content/uploads/2014/07/table_hierarchy.svg
https://bwidawsk.net/blog/wp-content/uploads/2014/07/addr-bitmap.svg

Download PDF

  1. The patches I posted for enabling GPU mirroring piggyback off of the existing userptr interface. Before those patches were merged I added some info to the API (a flag + context) for the purpose of testing. I needed to get this working quickly and porting from the existing userptr code was the shortest path. Since then userptr has been merged without this extra info, which makes things difficult for people trying to test things. In any case an interface needs to be agreed upon. My preference would be to do this via the existing relocation flags. One could add a new flag called "SOFT_PIN"

  2. The GEM and BO terminology is a fancy sounding wrapper for the notion that we want an interface to coherently write data which the GPU can read (input), and have CPU observe data which the GPU has written (output)

  3. The PDP registers are not PDPEs because they do not have any of the associated flags of a PDPE. Also, note that in my patch series I submitted a patch which defines the number of these to be PDPE. This is incorrect.

  4. I am not sure how KVM manages page tables. At least conceptually I'd think it would have a similar problem to the i915 driver's page table management. I should have probably looked a bit closer, as I may have been able to leverage that; but I didn't have the idea until just now… looking at the KVM code, it does have a lot of similarities to the approach I took

  5. Let me be clear that I don't think userptr is a bad thing. It's a very hard thing to get right, and much of the trickery needed for it is *not* needed for GPU mirroring

25 May 2015 3:35pm GMT

22 May 2015

feedplanet.freedesktop.org

Bastien Nocera: iio-sensor-proxy 1.0 is out!

Modern (and some less modern) laptops and tablets have a lot of built-in sensors: an accelerometer for screen positioning, ambient light sensors to adjust the screen brightness, a compass for navigation, proximity sensors to turn off the screen when next to your ear, etc.

Enabling

We've supported accelerometers in GNOME/Linux for a number of years, following work on the WeTab. The accelerometer appeared as an input device, and sent kernel events when the orientation of the screen changed.

Recent devices, especially Windows 8 compatible devices, instead export a HID device, which, under Linux, is handled through the IIO subsystem. So the first version of iio-sensor-proxy took readings from the IIO sub-system and emulated the WeTab's accelerometer: a few too many levels of indirection.

The 1.0 version of the daemon implements a D-Bus interface, which means we can support more than accelerometers. The D-Bus API, this time, is modelled after the Android and iOS APIs.

Enjoying

Accelerometers will work in GNOME 3.18 as well as they used to, once a few bugs have been merged[1]. If you need support for older versions of GNOME, you can try using version 0.1 of the proxy.


Orientation lock in action



As we've added ambient light sensor support in the 1.0 release, it's time to put into practice the best practices mentioned in Owen's post about battery usage. We already had code like that in gnome-power-manager nearly 10 years ago, but it really didn't work very well.

The major problem at the time was that ambient light sensor readings weren't in any particular unit (values had different meanings for different vendors) and the user felt that they were fighting against the computer for control of the backlight.

Richard fixed that though, adapting work he did on the ColorHug ALS sensor, and the brightness is now completely in the user's control, and adapts to the user's tastes. This means that we can implement the simplest of UIs for its configuration.

Power saving in action


This will be available in the upcoming GNOME 3.17.2 development release.

Looking ahead

For future versions, we'll want to export the raw accelerometer readings, so that applications, including games, can make use of them, which might bring up security issues. SDL, Firefox and WebKit could all do with being adapted in the near future.

We're also looking at adding compass support (thanks Elad!), which Geoclue will then export to applications, so that location and heading data is collected through a single API.

Richard and Benjamin Tissoires, of input-device-fixing fame, are currently working on making the ColorHug-ALS compatible with Windows 8, meaning it would work out of the box with iio-sensor-proxy.

Links

We're currently using GitHub for bug and code tracking. Releases are mirrored on freedesktop.org, as GitHub is known to mangle filenames. API documentation is available on developer.gnome.org.

[1]: gnome-settings-daemon, gnome-shell, and systemd will need patches

22 May 2015 4:31pm GMT

20 May 2015

feedplanet.freedesktop.org

Iago Toral: Bringing ARB_shader_storage_buffer_object to Mesa and i965

In the last few weeks I have been working together with my colleague Samuel on bringing support for ARB_shader_storage_buffer_object, an OpenGL 4.3 feature, to Mesa and the Intel i965 driver, so I figured I would write a bit about what this brings to OpenGL/GLSL users. If you are interested, read on.

Introducing Shader Storage Buffer Objects

This extension introduces the concept of shader storage buffer objects (SSBOs), which is a new type of OpenGL buffer. SSBOs allow GL clients to create buffers that shaders can then map to variables (known as buffer variables) via interface blocks. If you are familiar with Uniform Buffer Objects (UBOs), SSBOs are pretty similar, but unlike UBOs they are writable from shader code, and they may end with an unsized array whose effective size is determined by the buffer the application binds at run time.

Since SSBOs are read/write, they create a bidirectional channel of communication between the GPU and CPU spaces: the GL application can set the value of shader variables by writing to a regular OpenGL buffer, but the shader can also update the values stored in that buffer by assigning values to them in the shader code, making the changes visible to the GL application. This is a major difference with UBOs.

In a parallel environment such as a GPU where we can have multiple shader instances running simultaneously (processing multiple vertices or fragments from a specific rendering call) we should be careful when we use SSBOs. Since all these instances will be simultaneously accessing the same buffer there are implications to consider relative to the order of reads and writes. The spec does not make many guarantees about the order in which these take place, other than ensuring that the order of reads and writes within a specific execution of a shader is preserved. Thus, it is up to the graphics developer to ensure, for example, that each execution of a fragment or vertex shader writes to a different offset into the underlying buffer, or that writes to the same offset always write the same value. Otherwise the results would be undefined, since they would depend on the order in which writes and reads from different instances happen in a particular execution.

The spec also allows adding sync points, using memoryBarrier() from shader code and glMemoryBarrier(GL_SHADER_STORAGE_BARRIER_BIT) from the GL application. These ensure that all memory accesses to buffer variables issued before the barrier are completely executed before moving on.

Another tool for developers to deal with concurrent accesses is atomic operations. The spec introduces a number of new atomic memory functions for use with buffer variables: atomicAdd, atomicMin, atomicMax, atomicAnd, atomicOr, atomicXor, atomicExchange (atomic assignment to a buffer variable), atomicCompSwap (atomic conditional assignment to a buffer variable).

The optional unsized array at the bottom of an SSBO definition can be used to push a dynamic number of entries to the underlying buffer storage, up to the total size of the buffer allocated by the GL application.

Using shader storage buffer objects (GLSL)

Okay, so how do we use SSBOs? We will introduce this through an example: we will use a buffer to record information about the fragments processed by the fragment shader. Specifically, we will group fragments according to their X coordinate (by computing an index from the coordinate using a modulo operation). We will then record how many fragments are assigned to a particular index, the first fragment to be assigned to a given index, the last fragment assigned to a given index, the total number of fragments processed and the complete list of fragments processed.

To store all this information we will use the SSBO definition below:

layout(std140, binding=0) buffer SSBOBlock {
   vec4 first[8];     // first fragment coordinates assigned to index
   vec4 last[8];      // last fragment coordinates assigned to index
   int counter[8];    // number of fragments assigned to index
   int total;         // number of fragments processed
   vec4 fragments[];  // coordinates of all fragments processed
};

Notice the use of the keyword buffer to tell the compiler that this is a shader storage buffer object. Also notice that we have included an unsized array called fragments[], there can only be one of these in an SSBO definition, and in case there is one, it has to be the last field defined.

In this case we are using std140 layout mode, which imposes certain alignment rules for the buffer variables within the SSBO, like in the case of UBOs. These alignment rules may help the driver implement read/write operations more efficiently since the underlying GPU hardware can usually read and write faster from and to aligned addresses. The downside of std140 is that because of these alignment rules we also waste some memory and we need to know the alignment rules on the GL side if we want to read/write from/to the buffer. Specifically for SSBOs, the specification introduces a new layout mode: std430, which removes these alignment restrictions, allowing for a more efficient memory usage implementation, but possibly at the expense of some performance impact.

The binding keyword, just like in the case of UBOs, is used to select the buffer that we will be reading from and writing to when accessing these variables from the shader code. It is the application's responsibility to bind the right buffer to the binding point we specify in the shader code.

So with that done, the shader can read from and write to these variables as we see fit, but we should be aware of the fact that multiple instances of the shader could be reading from and writing to them simultaneously. Let's look at the fragment shader that stores the information we want into the SSBO:

void main() {
   int index = int(mod(gl_FragCoord.x, 8));

   int i = atomicAdd(counter[index], 1);
   if (i == 0)
      first[index] = gl_FragCoord;
   else
      last[index] = gl_FragCoord;

   i = atomicAdd(total, 1);
   fragments[i] = gl_FragCoord;
}

The first line computes an index into our integer array buffer variable from gl_FragCoord. Notice that different fragments could get the same index. Next we increment counter[index] by one. Since we know that different fragments can do this at the same time, we use an atomic operation to make sure that we don't lose any increments.

Notice that if two fragments can write to the same index, reading the value of counter[index] after the atomicAdd can lead to different results. For example, if two fragments have already executed the atomicAdd, and assuming that counter[index] is initialized to 0, then both would read counter[index] == 2, however, if only one of the fragments has executed the atomic operation by the time it reads counter[index] it would read a value of 1, while the other fragment would read a value of 2 when it reaches that point in the shader execution. Since our shader intends to record the coordinates of the first fragment that writes to counter[index], that won't work for us. Instead, we use the return value of the atomic operation (which returns the value that the buffer variable had right before changing it) and we write to first[index] only when that value was 0. Because we use the atomic operation to read the previous value of counter[index], only one fragment will read a value of 0, and that will be the fragment that first executed the atomic operation.

If this is not the first fragment assigned to that index, we write to last[index] instead. Again, multiple fragments assigned to the same index could do this simultaneously, but that is okay here, because we only care about the last write. Also notice that it is possible that different executions of the same rendering command produce different values of first[] and last[].

The remaining instructions unconditionally push the fragment coordinates to the unsized array. We keep the last index into the unsized array fragments[] we have written to in the buffer variable total. Each fragment will atomically increase total before writing to the unsized array. Notice that, once again, we have to be careful when reading the value of total to make sure that each fragment reads a different value and we never have two fragments write to the same entry.

Using shader storage buffer objects (GL)

On the side of the GL application, we need to create the buffer, bind it to the appropriate binding point and initialize it. We do this as usual, only that we use the new GL_SHADER_STORAGE_BUFFER target:

typedef struct {
   float first[8*4];      // vec4[8]
   float last[8*4];       // vec4[8]
   int counter[8*4];      // int[8] padded as per std140
   int total;             // int
   int pad[3];            // padding: as per std140 rules
   char fragments[1024];  // up to 1024 bytes of unsized array
} SSBO;

SSBO data;

(...)

memset(&data, 0, sizeof(SSBO));

GLuint buf;
glGenBuffers(1, &buf);
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, buf);
glBufferData(GL_SHADER_STORAGE_BUFFER, sizeof(SSBO), &data, GL_DYNAMIC_DRAW);

The code creates a buffer, binds it to binding point 0 of GL_SHADER_STORAGE_BUFFER (the same binding point we specified in the shader) and initializes the buffer data to 0. Notice that because we are using std140 we have to be aware of the alignment rules at work. We could have used std430 instead to avoid this.
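
If we would rather not compute those offsets by hand, another option (assuming a GL 4.3+ context and that prog is the linked program containing the SSBO declaration above) is to query them at runtime through the program interface query API. A minimal sketch for the total variable:

GLuint index = glGetProgramResourceIndex(prog, GL_BUFFER_VARIABLE, "total");
const GLenum prop = GL_OFFSET;
GLint offset = -1;
glGetProgramResourceiv(prog, GL_BUFFER_VARIABLE, index, 1, &prop,
                       1, NULL, &offset);
/* "offset" now holds the byte offset of "total" inside the buffer */

Querying properties such as GL_ARRAY_STRIDE in the same way should also give the strides of array members like counter[0], which avoids hard-coding any layout rules in the application.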

Since we have 1024 bytes for the fragments[] unsized array and we are pushing a vec4 (16 bytes) worth of data to it with every fragment we process, we have enough room for 64 fragments. It is the developer's responsibility to ensure that this limit is not surpassed; otherwise we would write beyond the allocated space for our buffer and the results would be undefined.
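
If we want to be a bit defensive on the read side as well, we can derive that capacity from the struct definition and clamp the number of entries we walk. A small sketch (ptr is assumed to be the mapped pointer obtained in the code below):

/* Capacity of the unsized array in vec4-sized entries (64 for 1024 bytes) */
const int max_fragments = (int) (sizeof(((SSBO *) 0)->fragments) / (4 * sizeof(float)));
int recorded = ptr->total;
if (recorded > max_fragments)   /* never walk past the space we allocated */
   recorded = max_fragments;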

The next step is to do some rendering so we get our shaders to work. That would trigger the execution of our fragment shader for each fragment produced, which will generate writes into our buffer for each buffer variable the shader code writes to. After rendering, we can map the buffer and read its contents from the GL application as usual:

SSBO *ptr = (SSBO *) glMapNamedBuffer(buf, GL_READ_ONLY);

/* List of fragments recorded in the unsized array */
printf("%d fragments recorded:\n", ptr->total);
float *coords = (float *) ptr->fragments;
for (int i = 0; i < ptr->total; i++, coords +=4) {
   printf("Fragment %d: (%.1f, %.1f, %.1f, %.1f)\n",
          i, coords[0], coords[1], coords[2], coords[3]);
}

/* First fragment for each index used */
for (int i = 0; i < 8; i++) {
   if (ptr->counter[i*4] > 0)
      printf("First fragment for index %d: (%.1f, %.1f, %.1f, %.1f)\n",
             i, ptr->first[i*4], ptr->first[i*4+1],
             ptr->first[i*4+2], ptr->first[i*4+3]);
}

/* Last fragment for each index used */
for (int i = 0; i < 8; i++) {
   if (ptr->counter[i*4] > 1)
      printf("Last fragment for index %d: (%.1f, %.1f, %.1f, %.1f)\n",
             i, ptr->last[i*4], ptr->last[i*4+1],
             ptr->last[i*4+2], ptr->last[i*4+3]);
   else if (ptr->counter[i*4] == 1)
      printf("Last fragment for index %d: (%.1f, %.1f, %.1f, %.1f)\n",
             i, ptr->first[i*4], ptr->first[i*4+1],
             ptr->first[i*4+2], ptr->first[i*4+3]);
}

/* Fragment counts for each index */
for (int i = 0; i < 8; i++) {
   if (ptr->counter[i*4] > 0)
      printf("Fragment count at index %d: %d\n", i, ptr->counter[i*4]);
}
glUnmapNamedBuffer(buf);

I get this result for an execution where I am drawing a handful of points:

4 fragments recorded:
Fragment 0: (199.5, 150.5, 0.5, 1.0)
Fragment 1: (39.5, 150.5, 0.5, 1.0)
Fragment 2: (79.5, 150.5, 0.5, 1.0)
Fragment 3: (139.5, 150.5, 0.5, 1.0)

First fragment for index 3: (139.5, 150.5, 0.5, 1.0)
First fragment for index 7: (39.5, 150.5, 0.5, 1.0)

Last fragment for index 3: (139.5, 150.5, 0.5, 1.0)
Last fragment for index 7: (79.5, 150.5, 0.5, 1.0)

Fragment count at index 3: 1
Fragment count at index 7: 3

It recorded 4 fragments that the shader mapped to indices 3 and 7. Multiple fragments were assigned to index 7, but we could handle that gracefully by using the corresponding atomic functions. Different executions of the same program will produce the same 4 fragments and map them to the same indices, but the first and last fragments recorded for index 7 can change between executions.

Also notice that the first fragment recorded in the unsized array (fragments[0]) is not the first fragment recorded for index 7 (that is fragments[1]). That means the invocation that produced fragments[0] reached the code that appends to the unsized array first, but the invocation that produced fragments[1] beat it in the race to execute the code that handles the assignment to the first/last arrays. This makes it clear that we cannot make any assumptions about the execution order of reads and writes coming from different instances of the same shader.

So that's it. The patches are now on the mesa-dev mailing list undergoing review and will hopefully land soon, so look forward to it! Also, if you have any interesting uses for this new feature, let me know in the comments.

20 May 2015 8:57am GMT

12 May 2015

feedplanet.freedesktop.org

Vincent Untz: Deploying Docker for OpenStack with Crowbar

A couple of months ago, I met with colleagues of mine working on Docker and discussed how much effort it would be to add support for it to SUSE OpenStack Cloud. It's something that had been requested for a long time by quite a number of people and we never really had time to look into it. To find out how difficult it would be, I started looking at it in the evening; the README confirmed it shouldn't be too hard. But of course, we use Crowbar as our deployment framework, and the manual way of setting it up is not really something we'd want to recommend. Now would it be "not too hard" or just "easy"? There was only one way to know that... And guess what happened next?

It took a couple of hours (and two patches) to get this working, including the time for packaging the missing dependencies and for testing. That's one of the benefits we get from using Crowbar: adding new features like this is relatively straightforward, so we can enable people to deploy a full cloud with all of these nice small features, without requiring them to learn about all the technologies and how to deploy them. Of course this was just a first pass (using the Juno code, btw).

Fast-forward a bit, and we decided to integrate this work. Since it was not a simple proof of concept anymore, we went ahead with some more serious testing. This resulted in us backporting patches for the Juno branch, but also making Nova behave a bit better since it wasn't aware of Docker as a hypervisor. This last point is a major problem if people want to use Docker as well as KVM, Xen, VMware or Hyper-V - the multi-hypervisor support is something that really matters to us, and this issue was actually the first one that got reported to us ;-) To validate all our work, we of course asked tempest to help us and the results are pretty good (we still have some failures, but they're related to missing features like volume support).

All in all, the integration went really smoothly :-)

Oh, I forgot to mention: there's also a docker plugin for heat. It's now available with our heat packages in the Build Service as openstack-heat-plugin-heat_docker (Kilo, Juno); I haven't played with it yet, but this post should be a good start for anyone who's curious about this plugin.

12 May 2015 8:41am GMT

11 May 2015

feedplanet.freedesktop.org

Julien Danjou: My interview about software tests and Python

I've recently been contacted by Johannes Hubertz, who is writing a new book about Python in German called "Softwaretests mit Python" which will be published by Open Source Press, Munich this summer. His book will feature some interviews, and he was kind enough to let me write a bit about software testing. This is the interview that I gave for his book. Johannes translated it to German for inclusion in the book, and I decided to publish it on my blog today. Following is the original version.

How did you come to Python?

I don't recall exactly, but around ten years ago, I saw more and more people using it and decided to take a look. Back then, I was more used to Perl. I didn't really like Perl and was not getting a good grip on its object system.

As soon as I found an idea to work on - if I remember correctly that was rebuildd - I started to code in Python, learning the language at the same time.

I liked how Python worked, and how fast I was able to develop and learn it, so I decided to keep using it for my next projects. I ended up diving into Python core for various reasons, even doing things like briefly hacking on projects like Cython at some point, and finally ended up working on OpenStack.

OpenStack is a cloud computing platform entirely written in Python. So I've been writing Python every day since working on it.

That's what pushed me to write The Hacker's Guide to Python in 2013 and then self-publish it a year later in 2014, a book where I talk about doing smart and efficient Python.

It has been a great success, and has even been translated into Chinese and Korean, so I'm currently working on a second edition of the book. It has been an amazing adventure!

Zen of Python: Which line is the most important for you and why?

I like the "There should be one - and preferably only one - obvious way to do it". The opposite is probably something that scared me in languages like Perl. But having one obvious way to do it is and something I tend to like in functional languages like Lisp, which are in my humble opinion, even better at that.

For a python newbie, what are the most difficult subjects in Python?

I haven't been a newbie for a while, so it's hard for me to say. I don't think the language is hard to learn. There are some subtleties in the language itself when you dive deep into the internals, but for beginners most of the concepts are pretty straightforward. If I had to pick, in the language basics, the most difficult thing would be around generator objects (yield).

Nowadays I think the most difficult subject for newcomers is which version of Python to use, which libraries to rely on, and how to package and distribute projects. Things are getting better, fortunately.

When did you start using Test Driven Development and why?

I learned unit testing and TDD at school where teachers forced me to learn Java, and I hated it. The frameworks looked complicated, and I had the impression I was wasting my time. Which I actually was, since I was writing disposable programs - that's the only thing you do at school.

Years later, when I started to write real and bigger programs (e.g. rebuildd), I quickly ended up fixing bugs… that I had already fixed. That reminded me of unit tests, and that it might be a good idea to start using them to stop fixing the same things over and over again.

For a few years, I wrote less Python and more C code and Lua (for the awesome window manager), and I didn't use any testing. I probably lost hundreds of hours testing manually and fixing regressions - that was a good lesson. Though I had good excuses at that time - it is/was way harder to do testing in C/Lua than in Python.

Since that period, I have never stopped writing "tests". When I started to hack on OpenStack, the project was adopting a "no test? no merge!" policy due to the high number of regressions it had during the first releases.

I honestly don't think I could work on any project that does not have - at least a minimal - test coverage. It's impossible to hack efficiently on a code base that you're not able to test with just a simple command. It's also a real problem for newcomers in the open source world. When there are no tests, you can hack something and send a patch, and get a "you broke this" in response. Nowadays, this kind of response sounds unacceptable to me: if there is no test, then I didn't break anything!

In the end, it's just too frustrating to work on untested projects, as I demonstrated in my study of the whisper source code.

What do you think to be the most often seen pitfalls of TDD and how to avoid them best?

The biggest problems are deciding when to write tests, and at what rate.

On one hand, some people start to write overly precise tests way too soon. Doing that slows you down, especially when you are prototyping some idea or concept you just had. That does not mean that you should not write tests at all, but you should probably start with light coverage, until you are pretty sure that you're not going to rip everything out and start over. On the other hand, some people postpone writing tests forever, and end up with no tests at all or too thin a layer of tests, which leaves the project with pretty low coverage.

Basically, your test coverage should reflect the state of your project. If it's just starting, you should build a thin layer of tests so you can hack on it easily and remodel it if needed. The more your project grows, the more you should make it solid and add more tests.

Having overly detailed tests makes it painful to evolve the project at the start. Not having enough tests in a big project makes it painful to maintain.

Do you think, TDD fits and scales well for the big projects like OpenStack?

Not only do I think it fits and scales well, but I also think it's just impossible not to use TDD in such big projects.

When unit and functional test coverage was weak in OpenStack - at its beginning - it was just impossible to fix a bug or write a new feature without breaking a lot of things, often without even noticing. We would release version N, and a ton of old bugs present in N-2 - but fixed in N-1 - would be reopened.

For big projects, with a lot of different use cases, configuration options, etc., you need belt and braces. You cannot throw code into a repository and just assume it's going to work, and you can't afford to test everything manually at each commit. That's just insane.

11 May 2015 1:39pm GMT

08 May 2015

feedplanet.freedesktop.org

Ben Widawsky: A bit on Intel GPU frequency

I've had this post sitting in my drafts for the last 7 months. The code is stale, but the concepts are correct. I had intended to add some pictures before posting, but it's clear that won't happen now. Words are better than nothing, I suppose…

Recently I pushed an intel-gpu-tool tool for modifying GPU frequencies. It's meant to simplify using the sysfs interfaces, similar to many other such helper tools in Linux (cpupower has become one of my recent favorites). I wrote this fancy getopt example^w^w^w program to address a specific need I had, but saw its uses for the community as well. Some time after upstreaming the tool, I accidentally put the name of this tool into my favorite search engine (yes, after all these years X clipboard still confuses me). Surprisingly I was greeted by a discussion about the tool. None of it was terribly misinformed, but I figured I might set the record straight anyway.

Introduction

Dynamically changing frequencies is a difficult thing to accomplish. Typically there are multiple clock domains in a chip, and coordinating all the necessary ones in order to modify one probably requires a bunch of state which I'd not understand, much less be able to explain. To facilitate this (on Gen6+) there is firmware which does whatever it is that firmwares do to change frequencies. When we talk about changing frequencies from a Linux kernel driver perspective, it means we're asking the firmware for a frequency. It can, and does, overrule, balk at, and ignore our frequency requests.

A brief explanation on frequencies in i915

The term used within the kernel driver and docs is "RPS", which is [I believe] short for Render P-States. They are analogous to CPU P-states in that lower numbers are faster and higher numbers are slower. Conveniently, we only have two numbers, 0 and 1, on GEN. Past that, I don't know how CPU P-states work, so I'm not sure how much else is similar.

There are roughly 4 generations of RPS:

  1. IPS (not to be confused with Intermediate Pixel Storage). The implementation of this predates my knowledge of this part of the hardware, so I can't speak much about it. It stopped being a thing after Ironlake (Gen5). I don't care to look, but you can if you want: drivers/platform/x86/intel_ips.c
  2. RPS (Sandybridge, Ivybridge)
    There are 4 numbers of interest: RP0, RP1, RPn, "hw max". The first 3 are directly read from a register
    rp_state_cap = I915_READ(GEN6_RP_STATE_CAP);
    dev_priv->rps.rp0_freq          = (rp_state_cap >>  0) & 0xff;
    dev_priv->rps.rp1_freq          = (rp_state_cap >>  8) & 0xff;
    dev_priv->rps.min_freq          = (rp_state_cap >> 16) & 0xff;
    

    RP0 is the maximum value the driver can request from the firmware. It's the highest non-overclocked frequency supported.
    RPn is the minimum value the driver can request from the firmware. It's the lowest frequency supported.
    RP1 is the most efficient frequency.
    hw_max is RP0 if the system does not support overclocking.
    Otherwise, it is read through a special set of commands where we're told by the firmware the real max. The overclocking max typically cannot be sustained for long durations due to thermal considerations, but that is transparent to software.

  3. RPS (HSW+)
    Similar to the previous RPS, except there is an RPe (for efficient). I don't know how it differs from RP1. I just learned about this after posting the tool - but just be aware it's different. Baytrail and Cherryview also have a concept of RPe.
  4. The Atom based stuff (Baytrail, Cherryview)
    I don't pretend to know how this works [or doesn't]. They use similar terms.
    They have the same set of names as the HSW+ RPS.

How the driver uses it

The driver can make requests to the firmware for a desired frequency:

if (IS_HASWELL(dev) || IS_BROADWELL(dev))
        I915_WRITE(GEN6_RPNSWREQ, HSW_FREQUENCY(val));
else
        I915_WRITE(GEN6_RPNSWREQ, GEN6_FREQUENCY(val) |
                                  GEN6_OFFSET(0) |
                                  GEN6_AGGRESSIVE_TURBO);

This register interface doesn't provide a way to determine whether the request was granted other than reading back the current frequency, which is error prone, as explained below.

By default, the driver will request frequencies between the efficient frequency (RP1, or RPe), and the max frequency (RP0 or hwmax) based on the system's busyness. The busyness can be calculated either by software, or by the hardware. For the former, the driver can periodically read a register to get busyness and make decisions based on that:

render_count = I915_READ(VLV_RENDER_C0_COUNT_REG);

In the latter case, the firmware will itself measure busyness and give the driver an interrupt when it determines that the GPU is sufficiently overworked or underworked. At each interrupt the driver would raise or lower the frequency by the smallest step size (typically 50MHz), and continue on its way. The most complex thing we did (which we still managed to screw up) was disable interrupts telling us to go up when we were already at max, and the equivalent for down.
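
To make that concrete, the stepping logic described above boils down to something along these lines. This is only a simplified sketch in the spirit of the driver, not the actual i915 code; the names and the 50MHz step are illustrative:

#define FREQ_STEP_MHZ 50   /* typical smallest step mentioned above */

static int next_frequency(int cur, int min, int max, int gpu_busy)
{
        /* step up when the firmware reports the GPU as overworked,
         * step down otherwise, and clamp to the RP limits */
        int next = gpu_busy ? cur + FREQ_STEP_MHZ : cur - FREQ_STEP_MHZ;

        if (next > max)
                next = max;
        if (next < min)
                next = min;
        return next;
}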

It seems obvious that there are usual trends: if you're incrementing the frequency, you're more likely to increment again in the near future, and so on. Since leaving for sabbatical and returning to work on mesa, there has been a lot of complexity added here by Chris Wilson, things which mimic concepts like the ondemand CPU frequency governor. I never looked much into those, so I can't talk knowledgeably about it - just realize it's not as naive as it once was, and should do a better job as a result.

The flow seems a bit ping-pongish:

The benefit, though, is that the firmware can do all the dirty work, and the CPU can sleep. Particularly when there's nothing going on in the system, that should provide significant power savings.

Why you shouldn't set -max or -min

First, what does it do? -max and -min lock the GPU frequency to max and min respectively. What this actually means is that in the driver, even if we get interrupts to throttle up or throttle down, we ignore them (hopefully the driver will disable such useless interrupts, but I am too scared to check). I also didn't check what this does when we're using software to determine busyness, but it should be very similar.

I should mention now that this is what I've inferred through careful observation^w^w random guessing. In the world of thermals and overheating, things can go from good, to bad, to broke faster than the CPU can take an interrupt and adjust things. As a result, the firmware can and will lower frequencies even if it has previously acknowledged it can give a specific frequency.

As an example if you do:

intel_gpu_frequency --set X
assert (intel_gpu_frequency --get == X)

There is a period in the middle, and after the assert where the firmware may have done who knows what. Furthermore, if we try to lock to -max, the GPU is more likely to hit these overheating conditions and throttle you down. So -max doesn't even necessarily get you max. It's sort of an interesting implication there since one would think these same conditions (probably the CPU is heavily used as well) would end up clocking up all the way anyway, and we'd get into the same spot, but I'm not really sure. Perhaps the firmware won't tell us to throttle up so aggressively when it's near its thermal limits. Using -max can actually result in non-optimal performance, and I have some good theories why, but I'll keep them to myself since I really have no proof.

-min on the other hand is equally stupid for a different and more obvious reason. As I said above, it's guaranteed to provide the worst possible performance and not guaranteed to provide the most optimal power usage.

What you should do with it

The usages are primarily benchmarking and performance debug. Assuming you can sustain decent cooling, locking the GPU frequency to anything will give you the most consistent results from run to run. Presumably max, or near max will be the most optimal.

min is useful to get a measure of what the worst possible performance is to see how it might impact various optimizations. It can help you change the ratio of GPU to CPU frequency drastically (and easily). I don't actually expect anyone to use this very much.

How not to get shot in the foot

If you are a developer trying to get performance optimizations or measurements which you think can be negatively impacted by GPU throttling, you can set this value. Again, because of the thermal considerations, as you go from run to run making tweaks, I'd recommend setting something maybe 75% or so of max - that is a total ballpark figure. When you're ready to get some absolute numbers, you can try setting -max, and the few frequencies near max, to see if you get any unexpected results. Take the highest value when done.

The sysfs entries require root for a reason. Any time

Footnotes

On the surface it would seem that the minimum frequency should always use the least amount of power. At idle, I'd assert that is always true. The corollary to that is that at the minimum frequency the GPU also finishes the workload the slowest. Intel GPUs don't idle for long; they go into RC6. The efficient frequency is a blend of maintaining a low frequency and winning the race to idle. AFAIK, it's a number selected after a whole lot of test runs - we could ignore it.

Download PDF

08 May 2015 7:33pm GMT

Daniel Vetter: GFX Kernel Upstreaming Requirements

Upstreaming requirements for the DRM subsystem are a bit special since Dave Airlie requires a full-blown open-source implementation as a demonstration vehicle for any new interfaces added. I figured it's better to clear this up once instead of dealing with the fallout from surprises, so I made a few slides for a training session. Dave reviewed and acked them, hence these should be the up-to-date rules - the old mails and blogs from back when some ARM SoC vendors tried to push drm drivers for blob userspace upstream are a bit outdated.

Anyway, here are the slides for my gfx kernel upstreaming requirements training.

08 May 2015 1:21pm GMT

07 May 2015

feedplanet.freedesktop.org

Christian Schaller: Summary of Fedora Workstation feedback

So I composed an email today to the Fedora-desktop mailing list to summarize the feedback we got here on the blog post on
my request for reasons people were not switching to Fedora. Thought I should share it here too for easier access for the
wider community and for the commentators to see that I did take the time to go through and try to summarize the posts.
Of course feel free to comment on this blog if you think I missed something important or if there are other major items we
should be thinking about. We will be discussing this both on the mailing list, in our Workstation working group meetings and at Flock in August. Anyway, I will let you go on to read the email (you can find the thread here if interested).

************************
A couple of weeks ago I blogged about how Fedora Workstation is an integrated system, while also asking for
feedback for why people are not migrating to Fedora Workstation, especially asking about why people would be using
GNOME 3 on another distro. So I got about 140 comments on that post so I thought I should write up a summary and
post here. There was of course a lot of things mentioned, but I will try to keep this summary to what I picked up
as the recurring topics.

So while this of course is a poll consisting of self selected commentators I still think the sample is big enough that we
should take the feedback into serious consideration for our plans going forward. Some of them I even think are already
handled by underway efforts.

Release cadence
Quite a few people mentioned this, ranging from those who wanted to switch us to a rolling release, a tick/tock
release style, to just long release cycles. Probably more people said they thought the current 6 month cycle
was just too harrowing than people who wanted rolling releases or tick/tock releases.

3rd Party Software
This was the single most brought up item, with people saying that they stayed on other distros due to the pain of
getting 3rd party software on Fedora. This ranged from drivers (NVidia, Wi-Fi), to media codecs, to end user
applications. The breadth of software available in general was also brought up quite a few times. If anyone is in any doubt
that our current policy here is costing us users, I think these comments clearly demonstrate that it is.

Optimus support
Quite a few people did bring up that our Optimus support wasn't great. Luckily I know Bastien Nocera is working on
something there based on work by Dave Airlie, so hopefully this is one we can check off soon.

Upgrades
Many people also pointed out that we had no UI for upgrading Fedora.

HiDPI issues
A few comments on various challenges people have with HiDPI screens, especially when dealing with non-GTK3 apps.

Multimonitor support
A few comments that our multimonitor support could be better

SELinux is a pain
A few comments about SELinux still getting in the way at times

Better Android integration
A few people asked for more/better Android device integration features

Built in backup solution
A few people requested we create some kind of integrated backup solution

Also a few concrete requests in terms of applications for Fedora:
http://www.mixxx.org
http://www.vocalproject.net
https://gnumdk.github.io/lollypop/
http://peterlevi.com/variety/
http://foldercolor.tuxfamily.org
choqok for GNOME (microblogging client)

07 May 2015 6:43pm GMT

Corbin Simpson: Communication on Dictionaries

This post is a reaction to http://blog.rongarret.info/2015/05/why-lisp.html and I could not think of a better title.

Before I get into my thoughts, I'd like to quickly recap two arguments presented by two different studiers of communication, as they are essential to my premise.

The first argument is known as "the medium is the message," from Marshall McLuhan. It is, in essence, the concept that the medium in which a message is conveyed is part of the message. "Medium" here is the singular of "media."

The key realization from McLuhan is the idea that, since the message is inalienably embedded in its medium, the process of interpreting the medium is intertwined with interpreting the message. A message restated in a different medium is necessarily a different message from the original. It also means that the ability of a consumer to understand the message is connected to their ability to understand the medium.

The second argument has no nifty name that I know of. It is Hofstadter's assertion, from GEB, that messages are decomposable into (at least) three pieces, which he calls the frame, outer message, and inner message. The frame indicates that a message is a message. The outer message explains how the inner message is structured. The inner message is freeform or formless.

What I'm taking from Hofstadter's model is the idea that, although the frame can identify a message, neither the frame nor the outer message can actually explain how the inner message is to be understood. The GEB examples are all linguistic and describe how no part of a message can explain which language was used to craft it without using the language in question. The outer message is the recognition that the inner message exists and is in a certain language.

I can unify these two ideas. I bet that you can, too. McLuhan's medium is Hofstadter's frame and outer message. A message is not just data, but also the metadata which surrounds it.

Can we talk about Lisp? Not quite. I have one more thing. Consider two people speaking English to each other. We have a medium of speech, a frame of rhythmic audio, and an outer message of English speech. They can understand each other, right? Well, not quite. They might not speak exactly the same dialect of English. They have different life experiences, which means that their internal dictionaries, idioms, figures of speech, and so forth might not line up precisely. As an example, our two speakers might disagree on whether "raspberry" refers to fruit or flatulence. Worse, they might disagree on whether "raspberry" refers to berries! These differences constantly occur as people exchange messages.

Our speakers can certainly negotiate, word by word, any contention, as long as they have some basic words in common. However, this might not be sufficient, in the case of some extremely different English dialects like American English and Scottish English. It also might not be necessary; consider the trope of two foreigners without a common language coming together to barter or trade, or the still-not-understood process of linguistic bootstrapping which an infant performs. Even if the inner message isn't decodable, the outer message and frame can still carry some of the content of the message (since the medium is the message!) and communication can still happen in some limited fashion.

I'll now point out that the idea that "normal" communication is not "limited" is illusory; this sort of impedance mismatch occurs during transmission and receipt of every message, since the precise concepts that a person carries with them are ever-changing. A person might not even be able to communicate effectively with themselves; many people have the surreal, uncanny experience of reading a note which they have written to themselves only a day or two ago, and not understanding what the note was trying to convey.

I would like to now postulate an idea. Since the medium is the message and the deciphering and grokking of messages is imperfect, communication is not just about deciphering messages, but also about discovering the meaning of the message via knowledge of the medium and messenger. To put it shortly and informally, communication is about discovering how others talk, not just about talking.

Okay! Now we're ready to tackle Lisp. What does Lisp source look like? Well, it's lots of lists, which contain other lists, and some atoms, and some numbers. Or, at least, that's what a Lisper sees. Most non-Lispers see parentheses. Lots of parentheses. Lispers like to talk about how the parentheses eventually fade away, leaving pure syntax, metasyntax, metapatterns, and so forth. The language has a simple core, and is readily extended by macros, which are also Lisp code.

I'll get to macros in a second. First, I want to deal with something from the original article. "Think about it: to represent hierarchical data you need two syntactic elements: a token separator and a block delimiter. In S expressions, whitespace is the token separator and parens are the block delimiters. That's it. You can't get more minimal than that." I am obligated to disagree. English, the language of the article, has conventions for semantic blocks, but they are often ignored, omitted, etc. I speak Lojban, which at its most formal has no delimiters nor token separators which are readable to the human eye. Placing spaces in Lojban text is a courtesy to human readers but not needed by computers. Another programming language, Forth, has no block delimiters. It also lacks blocks. Also the token separator is, again, a courtesy; Forth's lexer often reinterprets or ignores whitespace when asked.

In fact, Lisp's syntax is, well, syntax. While relatively simple when compared to many languages, Lisp is still structured and retains syntax that would be completely superfluous were one to speak in the language of digital logic which computers normally use. To use the language from earlier in this post, Lisp's syntax is part of its framing; we identify Lisp code precisely by the preponderance of parentheses. The opening parenthesis signals to both humans and computers alike that a Lisp list is starting.

The article talks about the power to interleave data and code. Code operates on data. Code which is also data can be operated on by metacode, which is also code. A strange loop forms. This is not unlike Hofstadter's suggestion that an inner message could itself contain another outer and inner message. English provides us with ample opportunities to form and observe such loops. Consider this quote: "Within these quotation marks, a new message is possible, which can extend beyond its limits and be interpreted differently from the rest of the message: 'Within these quotation marks, a new message is possible.'" I've decided to not put any nasty logic puzzles into this post, but if I had done so, this would be the spot. Mixing metalevels can be hard.

I won't talk about quoting any longer, other than to note that it's not especially interesting if a system supports quoting. There is a proof that quines are possible in any sufficiently complex (Turing complete, for example) language.

Macros are a kind of code-as-data system which involves reflection and reification, transforming code into data and operating on it before turning the data back into code. In particular, this permits code to generate code. This isn't a good or bad thing. In fact, similar systems are seen across the wider programming ecosystem. C has macros, C++ has templates, Haskell has Template Haskell, object-based systems like Python, Ruby, and JS have metaclass or metaprototype faculties.

Lispers loudly proclaim that macros benefit the expressive power of their code. By adding macros to Lisp code, the code becomes more expressive as it takes on deeper metalevels. This is not unlike the expressive power that code gains as it is factored and changed to be more generic. However, it comes at a cost; macros create dialects.

This "dialect" label could be borrowed from the first part of this post, where I used it to talk about spoken languages. Lispers use it to talk about different Lisps which have different macros, different builtin special forms, etc. Dialects create specialization, both for the code and for those reading and writing the code. This specialization is a direct result of the macros that are incorporated into the dialect.

I should stress that I am not saying that macros are bad. This metapower is neither Good nor Bad, in terms of Quality; it is merely different. Lisp is an environment where macros are accepted. Forth is another such environment; the creator of Forth anticipated that most Forth code would not be ANS FORTH but instead be customized heavily for "the task at hand." A Forth machine's dictionary should be full of words which were created by the programmer specifically for the programmer's needs and desires.

Dialects are evident in many other environments. Besides Forth and Lisp, dialects can be seen in large C++ and Java codebases, where tooling and support libraries make up non-trivial portions of applications. Haskellers are often heard complaining about lenses and Template Haskell magic. Pythonistas tell horror stories of Django, web2py, and Twisted. It's not enough to know a language to be effective in these codebases; it's often necessary to know the precise dialect being used. Without the dialect, a programmer has to infer more meaning from the message; they have to put more effort into deciphering and grokking.

"Surely macros and functions are alike in this regard," you might say, if you were my inner voice. And you would be somewhat right, in that macros and functions are both code. The difference is that a macro is metacode; it is code which operates on code-as-data. This necessarily means that usage of a macro makes every reader of the code change how they interpret the code; a reader must either internalize the macro, adding it to their dictionary, or else reinterpret every usage of the macro in terms of concepts that they already know. (I am expecting a sage nod from you, reader, as you reflect upon instances in your past when you first learned a new word or phrase!) Or, to put it in the terms of yore, the medium is the message, and macros are part of the medium of the inner message. The level of understanding of the reader is improved when they know the macros already!

How can we apply this to improve readability of code? For starters, consider writing code in such a way that quotations and macro applications are explicit, and that it is obvious which macros are being applied to quotations. To quote a great programmer, "Namespaces are one honking great idea." Anything that helps isolate and clarify metacode is good for readability.

Since I'm obligated to mention Monte, I'm going to point out that Monte has a very clever and simple way to write metacode: Monte requires metacode to be called with special quotation marks, and also to be annotated with the name of the dialect to use. This applies to regular expressions, parser generators, XML and JSON, etc.; if a new message is to be embedded inside Monte code, and it should be recognized as such, then Monte provides a metacode system for doing precisely that action. In Monte, this system is called the quasiliteral system, and it is very much like quasiliteral quoting within Lisp, with the two differences I just mentioned: Special quotation marks and a dialect annotation.

I think that that's about the end of this particular ramble. Thanks.

07 May 2015 2:46pm GMT

04 May 2015

feedplanet.freedesktop.org

Julien Danjou: The Hacker's Guide to Python, 2nd edition!

A year has passed since the first release of The Hacker's Guide to Python in March 2014. A few hundred copies have been distributed so far, and the feedback is wonderful!

I already wrote extensively about the making of that book last year, and I cannot emphasize enough how this adventure has been amazing so far. That's why I decided a few months ago to update the guide and add some new content.

So let's talk about what's new in this second edition of the book!

First, I obviously fixed a few things. I had some reports about small mistakes and typos which I applied as I received them. Not a lot fortunately, but it's still better to have fewer errors in a book, right?

Then, I updated some of the content. Things changed since I wrote the first chapters of that guide 18 months ago. Therefore I had to rewrite some of the sections and take into account new software or libraries that were released.

Lastly, I decided to enhance the book with one more interview. I asked my fellow OpenStack developer Joshua Harlow, who is leading a few interesting Python projects, to join the long list of interviewees in the book. I hope you'll enjoy it!

If you didn't get the book yet, go check it out and use the coupon THGTP2LAUNCH to get 20% off during the next 48 hours!

04 May 2015 3:00pm GMT

27 Apr 2015

feedplanet.freedesktop.org

Iago Toral: Free access to Valve-produced games on Steam for Mesa contributors

Just like they did for Debian developers before, it is Valve's way of saying thanks and giving something back to the community. This is great news for all Mesa contributors, now we can play some great Valve games for free and we can also have an easier time looking into bug reports for them, which also works great for Valve, closing a perfect circle :)

27 Apr 2015 11:34am GMT

23 Apr 2015

feedplanet.freedesktop.org

Christian Schaller: Thank you for your feedback!

I just wanted to say thank you to everyone who commented and provided feedback on my last blog entry where I asked what would make you switch to Fedora Workstation. We will take all that feedback and incorporate it into our Fedora Workstation 23 planning. So a big thanks to all of you!

23 Apr 2015 7:30pm GMT