26 Jun 2017


Rob Clark: long overdue update

Since it has been a while since the last update, I guess it is a good time to summarize some of the progress that has been happening with freedreno and upstream support for snapdragon boards.

freedreno / mesa

While the 17.1 release included enabling reorder support by default, many other interesting features have landed since the 17.1 branch point (so they will be included in the future 17.2 release). Many, but not all, are related to a5xx. (Something that I just realized I forgot to blog about, but have demoed here and there.)

GL/GLES Compute Shaders:

So far this is only a5xx (although a4xx seems to work similarly, and would probably not be too hard to get working if someone had the right hardware and a bit of time). SSBOs and atomics are supported, but image support (an important part of compute shaders) is still TODO (and some r/e is required, although images seem to have a lot in common with SSBOs). Adreno 3xx support for compute shaders appears to be more work (ie. less in common with a4xx/a5xx, and probably part of the reason that qualcomm never bothered adding support in the android blob driver). Patches welcome, but for now a3xx compute support is far enough down my TODO list that it might not otherwise happen.

I know there is a lot of interest in open source OpenCL support for freedreno, and hopefully that is something that will come in the future. But there is the big challenge of how to get OpenCL shaders (kernels) into a form that can be consumed by freedreno's ir3 shader compiler backend. While there is some potential to re-use spirv_to_nir at some point, there are some complicated details: for compute kernels (ie. OpenCL), SPIR-V lifts some restrictions that spirv_to_nir relies on. (Little details like the lack of a requirement for structured flow control.)

A5xx HW Binning Support:

Traditionally hw binning support, while a pretty big perf boost, has been kinda difficult (translation: a lot of things can be done wrong, leading to difficult-to-debug GPU lockups), but this time around it wasn't so hard. I guess experience on a3xx/a4xx has helped. And everyone loves a ~30% fps boost in their favorite game!

This has brought performance roughly up to the level of the ifc6540/a420. Which sounds bad, but remember we are comparing apples and oranges. On ifc6540 (snapdragon 805), we don't yet have upstream kernel support, so that was using a 3.10 android kernel (with bus-scaling and all the downstream tricks to optimize memory bandwidth and overall SoC performance). But on a530 (dragonboard820c), I never had a working downstream kernel (nor did I have to bother backporting the upstream drm/msm driver to some ancient android kernel.. hurray!). The upshot is that any perf #'s for a5xx don't include bus-scaling, cpufreq, etc. I expect a pretty big performance boost on a530 once we have a way to clock up memory/interconnects. (Ie. on micro-benchmarks a530 is >2x faster than a420 on alu-limited workloads, but still a bit slower than a420 on bandwidth-limited workloads, despite having higher theoretical bandwidth.)

Side note: linaro is working on an upstream solution for bus-scaling. This is a very important improvement needed upstream for ARM SoCs, especially ones that optimize so strongly for battery life. (Keep in mind that interconnects, which span the SoC, and memory are big power consumers in a modern SoC.. so a lot of qualcomm's good performance + battery life in phones comes down to these system-wide optimizations.) It is equivalent to the slow memory clockings on some generations of nouveau, except in this case it is outside the gpu driver (ie. we aren't talking about vram on a discrete gpu), and the reason is to enable a high end phone SoC to last a couple of days on battery, rather than to keep your video card from melting.

A5xx gles3.0/gl3.1 support:

Probably it would have made sense to spend time on this before compute shaders (since those are otherwise only exposed with $MESA_GL_VERSION_OVERRIDE tricks.. but hey, I was curious how compute shaders worked). After an assortment of small things to r/e and implement, we were just a few (~50) texture/vbo/fb formats away from gl3.1. Nothing really exciting. Mostly just a few weekends probing unknown format #'s and seeing which piglit format tests started passing. The sort of thing that would have taken approximately 10 minutes with docs.. but hey, it needed to be done.

Switching to NIR by default:

This is one thing that benefits a3xx and a4xx as well as a5xx. While freedreno has had NIR support for a while, it hasn't been enabled by default until more recently. The issue was handling of complex dereferences (multi-dimensional arrays, arrays of structs, etc). The problem was that freedreno's ir3 backend preferred to keep things in SSA form (since that gives the instruction scheduler more flexibility, which is pretty important in the a3xx+ instruction set architecture (ir3)). Adding support to lower arrays to regs allowed moving the deref offset calculation to NIR, so that we wouldn't regress by turning NIR on by default. This is useful since it cuts shader compilation time, but also because tgsi_to_nir doesn't support SSBOs, atomics, and other new shiny glsl features. (Now we only rely on tgsi_to_nir for various legacy paths and built-in blit shaders which don't need new shiny glsl features.)
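
To make that concrete, here is a rough sketch in plain C of what lowering a complex deref to an offset calculation means. This is purely illustrative (the array shape and function names are made up, and the real work happens on NIR instructions, not C), but it shows why the result is friendlier to an SSA-based scheduler: the address math becomes ordinary ALU code.

    /* Illustrative sketch only -- not the actual NIR/ir3 lowering code.
     * A shader access like "x = arr[i][j].y" (arr being a vec4[4][3])
     * gets turned into a single flattened offset plus an indirect load. */
    #include <stdio.h>

    #define DIM0  4
    #define DIM1  3
    #define COMPS 4   /* components per vec4 */

    static float arr[DIM0][DIM1][COMPS];

    /* Before lowering: a nested dereference chain. */
    static float load_direct(int i, int j)
    {
        return arr[i][j][1];   /* .y */
    }

    /* After lowering: plain ALU math computing one scalar offset, which the
     * scheduler can reorder freely and constant-fold when i/j are known. */
    static float load_lowered(const float *base, int i, int j)
    {
        int offset = (i * DIM1 + j) * COMPS + 1;
        return base[offset];
    }

    int main(void)
    {
        arr[2][1][1] = 42.0f;
        printf("%f %f\n", load_direct(2, 1), load_lowered(&arr[0][0][0], 2, 1));
        return 0;
    }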

A5xx HW Query Support:

Adreno 5xx changed how hw queries (ie. occlusion query, time-elapsed query, etc) work. For the better, since now we can accumulate per-tile results on the GPU. But it required some new support in freedreno for a different sort of query, and some r/e to figure out how this actually works. And while we had previously lied about occlusion query support (mostly to expose more than gl1.4), that isn't a very good long term solution. In addition, time-elapsed query is useful for performance/profiling work, so it is helpful for some of the following projects.
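
For reference, this is the standard GL side of a time-elapsed query (nothing freedreno-specific; it assumes a current GL context and a loader such as epoxy), which is what the hw query support makes usable for profiling:

    /* Standard OpenGL time-elapsed query usage (not driver code).
     * Assumes a current GL 3.3+ context and a loader such as epoxy. */
    #include <stdio.h>
    #include <epoxy/gl.h>

    void profile_gpu(void (*issue_draws)(void))
    {
        GLuint query;
        GLuint64 gpu_ns = 0;

        glGenQueries(1, &query);
        glBeginQuery(GL_TIME_ELAPSED, query);
        issue_draws();                      /* the work being measured */
        glEndQuery(GL_TIME_ELAPSED);

        /* Blocks until the GPU has finished the bracketed commands; poll
         * GL_QUERY_RESULT_AVAILABLE instead if you don't want to stall. */
        glGetQueryObjectui64v(query, GL_QUERY_RESULT, &gpu_ns);
        glDeleteQueries(1, &query);

        printf("GPU time: %.3f ms\n", gpu_ns / 1000000.0);
    }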

A5xx LRZ Support:

Adreno 5xx adds another cute optimization called "LRZ" (presumably "low resolution Z (depth buffer)"). I've spent some time r/e'ing this feature and implementing support for it in freedreno. It is a neat new hw trick that a5xx has, which serves two purposes. The basic idea is to keep a per-quad depth value so that in the binning pass primitives can be rejected (per tile) based on depth (ie. rejected earlier), and then to recycle the LRZ buffer in the draw phase to function as a for-free depth pre-pass (ie. reject earlier primitives based on the z value of later primitives).
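
A rough pseudocode sketch of the idea follows. This is not the real hardware layout or the freedreno implementation, just the concept, assuming a GL_LESS style depth test:

    /* Conceptual LRZ sketch only -- not the actual hardware format or driver
     * code. One conservative "farthest still-visible" depth per low-res block,
     * assuming a GL_LESS depth test; initialize every entry to 1.0 (far). */

    #define LRZ_W 64
    #define LRZ_H 64

    static float lrz[LRZ_H][LRZ_W];

    /* Binning pass: skip binning a primitive into a tile block if even its
     * nearest point is behind everything already known to cover that block;
     * otherwise bin it and (if it fully covers the block) tighten the bound. */
    static int lrz_bin_test_and_update(int bx, int by, float prim_zmin,
                                       float prim_zmax, int covers_block)
    {
        if (prim_zmin >= lrz[by][bx])
            return 0;                        /* definitely occluded */
        if (covers_block && prim_zmax < lrz[by][bx])
            lrz[by][bx] = prim_zmax;         /* anything behind this is hidden */
        return 1;
    }

    /* Draw pass: the same buffer now acts like a free depth pre-pass, killing
     * fragments that are behind geometry which was only binned later. */
    static int lrz_draw_test(int bx, int by, float frag_z)
    {
        return frag_z < lrz[by][bx];
    }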

The benefit depends on how well optimized the game is. Ie. games that are well optimized for traditional GPU architectures (ie. sorting geometry, already doing depth pre-passes, etc) won't benefit as much.. but this helps a lot for badly written games that rely on per-pixel deferred rendering.

Overall, for things like stk/xonotic, it seems like a ~5-10% win.

edit: I forgot to mention, this isn't enabled by default as it causes some issues (which seem like a sort of z-fighting) with 0ad. Other than that, I haven't found anything that it doesn't work with. To enable: FD_MESA_DEBUG=lrz. It would be nice if there were some way to have driver specific flags in driconf to control things like this.

The main remaining performance trick for a5xx is UBWC (ie. bandwidth compression) + tiled textures. I've mostly worked out how UBWC works (in particular texture layout, at least for 2d textures + mipmaps, but I think we can infer how 2d arrays, 3d, etc, work from that). Most of the infrastructure for the upload/download blits (to convert to/from linear) should be easier thanks to the reorder support. We'll see if I actually find time to implement it before the mesa 17.2 branch point.

Standardized Embedded Nonsense Hacks

Anyone who has dealt with arm (non-server) devices should be familiar with the silly-embedded-nonsense-hacks world, in particular the non-standard boot-chain which makes it difficult for distros to support the plethora of arm boards (let alone phones/tablets/etc) out there without per-board support. That was fine in the early days, but with N boards times M distros, it really doesn't scale.

Thanks to work by Mateusz Kulikowski, we now have u-boot support for the dragonboard 410c. It's been on my TODO list to play with for a while. But more recently I realized that u-boot, thanks to the work of many others, can provide enough of an EFI runtime-services interface for grub to work. This gives a path forward for standardized distros on aarch64 (like fedora and opensuse), which expect UEFI, to boot on boards which don't otherwise have UEFI firmware.

So I decided to spend a bit of time pretending to be a crack smoking firmware engineer. (Not literally, of course.. that would be stupid!)

After fixing some linker script bugs with u-boot's db410c support vs the efi_runtime section, debugging some issues with grub finding the boot disk (with the help of Peter Jones, the resident grub/EFI expert who conveniently sits near me), and a couple of other misc u-boot fixes, I had a fedora 26 alpha image booting on the db410c.

The next step was figuring out display, so we could have a grub boot menu on screen, like you would expect on a grown-up platform. As it turns out, on most devices lk (little kernel, ie. what normally loads the kernel+initrd on snapdragon android devices) already supports lighting up the display, since most/all android devices put up the initial splash-screen before the kernel is loaded. Unfortunately this was not the case with the db410c's lk. But Archit (a qcom engineer who has contributed a whole lot of drm/msm and other drm patches) pointed me at a different lk branch (among the 100's) which had msm8916 display + adv7533 dsi->hdmi bridge support (like what db410c uses). After digging through a convoluted git history, I was able to track down the relevant gpio/i2c/adv7533 patches to port to the lk branch used on db410c.

After that, I added support for lk to populate a framebuffer node, using the simple-framebuffer bindings, to pass the pre-configured scanout buffer (+dimensions) to u-boot. This, plus a new simplefb video driver for u-boot, enables u-boot to expose display support to grub via the EFI GOP protocol. (Along the way I had to add 32bpp rgb support to lk, since u-boot and grub don't understand packed 24bpp rgb.)

All this got to the point of:



This is a fedora image, booting off of a usb disk (ie. not just the rootfs on the usb disk, but also grub/kernel/initrd/dtb), with a graphical grub menu to select which kernel to boot, just like you would expect on a PC. The grubaa64.efi here is the vanilla distro boot-loader, and from the point of view of the distro image, lk/u-boot is just the platform's firmware which somehow provides the UEFI interface the distro media expects. It is worth pointing out some advantages over a traditional lk->kernel boot chain:
  • booting from USB, network, etc (which lk cannot do)
  • doesn't require kernel packed in custom boot.img partition which is board specific
  • booting installer image (ie. from sd-card or network)
When the kernel starts, in early boot, it is using efifb, just like it would on a PC. (Ie. you can see what is going on on-screen before the hw-specific drm driver kernel module is loaded.)

There are still a few rough edges. The drm/msm driver and msm clk drivers are a bit surprised when some clks are already enabled and the display is already lit up when the kernel starts.. now we have a good reason to fix some of those issues. And right now we don't have a good way to load a newer device tree binary (dtb) after a distro kernel update (ie. without updating u-boot, aka "the firmware"). (For simple SoCs maybe a pre-baked dtb for the life of the board is sufficient... I have my doubts about that for SoCs as complex as the various snapdragons, if for no other reason than that we haven't even figured out how to model all the features of the existing SoCs in devicetree.) One idea is for u-boot to pass grub the name of the board dtb file to load via EFI variables. I've sent a very early RFC to add EFI variable support in u-boot. We'll see how this goes; in the mean time there might be more "firmware" upgrades needed than you'd normally expect on a mature platform like x86.

For now, my lk + u-boot work is here:
and prebuilt "firmware" is here. For now you will need to edit the distro grub.cfg to add 'devicetree' commands to load the appropriate dtb, since what is included with u-boot.img is a very minimal fdt (ie. just enough for the drivers in u-boot).




26 Jun 2017 3:35pm GMT

Eric Anholt: 2017-06-26

This week I picked up my old vc4-xml branch. This rework was inspired by the Intel driver, where they wrote an XML description of the hardware packets and use that to code-generate the packet packing and debug dumping code. Given that vc4's debug dumping has always been somewhat of a mess (and its code is duplicated between mesa and vc4-gpu-tools), it would be great to do the same thing to vc4. More importantly, the XML-generated pack code easily lets you do things like precompute part of your packed state packet at gallium CSO generation time, and then just memcpy (or OR together two copies) at draw time.
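
The draw-time trick is roughly the following (a hand-written illustration with a made-up packet layout, not the XML-generated vc4 code): everything known at state-object creation is packed once, and the per-draw path just copies the bytes and ORs in the few draw-time bits.

    /* Hand-written illustration of "pack at CSO time, memcpy/OR at draw time".
     * The 4-byte packet layout here is made up, not a real vc4 packet. */
    #include <stdint.h>
    #include <string.h>

    struct my_raster_cso {
        uint8_t packed[4];                 /* pre-packed at CSO creation */
    };

    static void my_cso_create(struct my_raster_cso *cso,
                              int enable_ztest, int cull_mode)
    {
        memset(cso->packed, 0, sizeof(cso->packed));
        cso->packed[0] = 0x42;                       /* packet opcode */
        cso->packed[1] = (enable_ztest ? 1 : 0) |    /* bit 0 */
                         ((cull_mode & 0x3) << 1);   /* bits 2:1 */
        /* bit 7 ("draw is points") is only known at draw time, left clear */
    }

    static void my_draw_emit(uint8_t *cl, const struct my_raster_cso *cso,
                             int is_points)
    {
        uint8_t pkt[4];
        memcpy(pkt, cso->packed, sizeof(pkt));       /* bulk copy */
        pkt[1] |= is_points ? 0x80 : 0;              /* OR in draw-time bit */
        memcpy(cl, pkt, sizeof(pkt));                /* append to command list */
    }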

My problem with the branch had been that it bloated the size of the vc4_emit.c code (the draw-time path), which probably meant that it reduced performance compared to my old hand-written packing. I had spent a couple of weeks writing fast paths for things like moving a float into the unaligned CL, or packing a couple of flag bits into the bottom of a 32-bit address, but that only took the bloat from like 20% to 10%. Last week, I decided to stop using the size as a proxy for performance and just test performance, and it turns out that the difference was negligible or slightly positive! Now I need to get an Android build done, and merge.

In the process of doing this draw overhead testing, I turned on a new gallium flag that cut the CPU overhead of draw calls by 5%, which was more than any of my vc4-xml overhead ever was!

I also spent more time on the 7" panel, trying a rework of the load order in response to review feedback. It turns out that the DSI portion of DRM isn't built to support drivers the way that previous feedback requested, and nobody has a concrete plan for how it would work. I've tried one avenue of fixing it, but that ran into another mess in the DSI subsystem.

Switching Raspbian over to vc4 fkms is currently stalled on Simon rebuilding the packages with the current Mesa patchset. I've fixed a minor issue in the fkms overlay that requested aligned CMA areas despite my having removed that requirement a while back, so hopefully they'll finally enable vc4 on Pi0/1 as well once they get around to updating.

I did another round of review on the piglit series for ANDROID_native_fence and they're ready to land now.

I sent out a rework to make some VC4 NIR lowering code shareable with other drivers. Freedreno and Intel have both wanted it at some point, so hopefully I can get some review on it.

In the kernel, I polished up my BO-labeling code that gives you detailed graphics memory usage information in /debug/dri/0/bo_stats. The cleanup was to effectively eliminate the CPU overhead, unless you choose to do labeling from userspace. Adding this userspace interface required adding intel-gpu-tools testcases, so I wrote those. The Mesa side isn't merged yet since we need kernel review first, and will probably only be enabled in debug driver builds. To make it really fancy, I should also hook up glObjectLabel() all the way to the kernel, so that /debug/dri/0/bo_stats can have things like "X11 ARGB glyph cache" instead of "resource 1024x1024@4" for that mystery 4MB buffer you've got.
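
For context, the application-facing half of that labeling already exists via KHR_debug / GL 4.3; the work described here is carrying the string down into the kernel's accounting. A minimal example of the existing GL call (the buffer name is made up):

    /* Labeling a buffer object via KHR_debug / GL 4.3 (existing GL API; the
     * idea above is to plumb this label down to /debug/dri/0/bo_stats).
     * Assumes a loader such as epoxy; glyph_cache_bo is a made-up example. */
    #include <epoxy/gl.h>

    void label_glyph_cache(GLuint glyph_cache_bo)
    {
        /* length -1 means the label is a NUL-terminated string */
        glObjectLabel(GL_BUFFER, glyph_cache_bo, -1, "X11 ARGB glyph cache");
    }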

Finally, I did some cleanup of the VC4 modesetting code, prompted by Boris's recent cleanups. We're much closer to matching the common DRM helpers now, with just our async pageflip code still being special. Once Gustavo's async cursor bits land, we may be able to remove our async pageflip special case as well!

26 Jun 2017 12:30am GMT

25 Jun 2017


Nicolai Hähnle: ARB_gl_spirv, NIR linking, and a NIR backend for radeonsi

SPIR-V is the binary shader code representation used by Vulkan, and GL_ARB_gl_spirv is a recent extension that allows it to be used for OpenGL as well. Over the last weeks, I've been exploring how to add support for it in radeonsi.

As a bit of background, here's an overview of the various relevant shader representations that Mesa knows about. There are some others for really old legacy OpenGL features, but we don't care about those. On the left, you see the SPIR-V to LLVM IR path used by radv for Vulkan. On the right is the path from GLSL to LLVM IR, plus a mention of the conversion from GLSL IR to NIR that some other drivers are using (i965, freedreno, and vc4).

For GL_ARB_gl_spirv, we ultimately need to translate SPIR-V to LLVM IR. A path for this exists, but it's in the context of radv, not radeonsi. Still, the idea is to reuse this path.

Most of the differences between radv and radeonsi are in the ABI used by the shaders: the conventions by which the shaders on the GPU know where to load constants and image descriptors from, for example. The existing NIR-to-LLVM code needs to be adjusted to be compatible with radeonsi's ABI. I have mostly completed this work for simple VS-PS shader pipelines, which has the interesting side effect of allowing the GLSL-to-NIR conversion in radeonsi as well. We don't plan to use it soon, but it's nice to be able to compare.

Then there's adding SPIR-V support to the driver-independent mesa/main code. This is non-trivial, because while GL_ARB_gl_spirv has been designed to remove a lot of the cruft of the old GLSL paths, we still need more supporting code than a Vulkan driver does. This still needs to be explored a bit; the main issue is that GL_ARB_gl_spirv allows using default-block uniforms, so the whole machinery around glUniform*() calls has to work, which requires setting up all the same internal data structures that are set up for GLSL programs. Oh, and it looks like assigning locations is required, too.
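
For context, here is roughly what the application side looks like with GL_ARB_gl_spirv (the SPIR-V blob and the location=3 uniform are assumptions of this sketch). The point is that ordinary glUniform*() calls against a SPIR-V program still have to work, which is why the GLSL-style uniform bookkeeping can't simply be skipped:

    /* Sketch of the application side of GL_ARB_gl_spirv; the SPIR-V words and
     * the explicit location=3 uniform are assumptions of this example.
     * Assumes a loader such as epoxy exposing the ARB entry points. */
    #include <epoxy/gl.h>

    GLuint make_spirv_shader(GLenum stage, const void *spirv, GLsizei size)
    {
        GLuint s = glCreateShader(stage);

        /* Upload a SPIR-V binary instead of GLSL source ... */
        glShaderBinary(1, &s, GL_SHADER_BINARY_FORMAT_SPIR_V_ARB, spirv, size);
        /* ... then pick the entry point (no specialization constants here). */
        glSpecializeShaderARB(s, "main", 0, NULL, NULL);
        return s;
    }

    void set_default_block_uniform(GLuint prog)
    {
        glUseProgram(prog);
        /* Uniforms carry explicit locations in the SPIR-V module (names may
         * be stripped), so the location is used directly instead of
         * glGetUniformLocation(). Location 3 is made up for this sketch. */
        glUniform1f(3, 0.5f);
    }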

My current plan is to achieve all this by re-using the GLSL linker, giving a final picture that looks like this:

So the canonical path in radeonsi for GLSL remains GLSL -> AST -> IR -> TGSI -> LLVM (with an optional deviation along the IR -> NIR -> LLVM path for testing), while the path for GL_ARB_gl_spirv is SPIR-V -> NIR -> LLVM, with NIR-based linking in between. In radv, the path remains as it is today.

Now, you may rightfully say that the GLSL linker is a huge chunk of subtle code, and quite thoroughly invested in GLSL IR. How could it possibly be used with NIR?

The answer is that huge parts of the linker don't really care that much about the code in the shaders that are being linked. They only really care about the variables: uniforms and shader inputs and outputs. True, there are a bunch of linking steps that touch code, but most of them aren't actually needed for SPIR-V. Most notably, GL_ARB_gl_spirv doesn't require intrastage linking, and it explicitly disallows the use of features that only exist in compatibility profiles.

So most of the linker functionality can be preserved simply by converting the relevant variables (shader inputs/outputs, uniforms) from NIR to IR, then performing the linking on those, and finally extracting the linker results and writing them back into NIR. This isn't too much work. Luckily, NIR reuses the GLSL IR type system.

There are still parts that might need to look at the actual shader code, but my hope is that they are few enough that they don't matter.

And by the way, some people might want to move the IR -> NIR translation to before linking, so this work would set a foundation for that as well.

Anyway, I got a ridiculously simple toy VS-PS pipeline working correctly this weekend. The real challenge now is to find actual test cases...

25 Jun 2017 9:10pm GMT