09 Nov 2011

planet.freedesktop.org

Alan Coopersmith: S11 X11: ye olde window system in today's new operating system

Today's the big release for Oracle Solaris 11, after 7 years of development. For me, the Solaris 11 release comes a little more than 11 years after I joined the X11 engineering team at what was then Sun, and finishes off some projects that were started all the way back then.

For instance, when I joined the X team, Sun was finishing off the removal of the old OpenWindows desktop, and we kept getting questions asking about the rest of the stuff being shipped in /usr/openwin, the directory that held both the OpenLook applications and the X Window System software. I wrote up an ARC case at the time to move the X software to /usr/X11, but there were various issues and higher priority work, so we didn't end up starting that move until near the end of the Solaris 10 development cycle several years later. Solaris 10 thus had a mix of the recently added Xorg server and related code delivered in /usr/X11, while most of the existing bits from Sun's proprietary fork of X11R6 were still in /usr/openwin.

During Solaris 11 development, we finished that move, and then jumped again, moving the programs directly into /usr/bin. This follows the general Solaris 11 strategy of using /usr/bin for most of the programs shipped with the OS, and using other directories, such as /usr/gnu/bin, /usr/xpg4/bin, /usr/sunos/bin, and /usr/ucb, for conflicting alternate implementations of the programs shipped in /usr/bin. Directories are no longer used to segregate out various subsystems, a practice that once helped the OS fit onto the 105 MB hard disks that shipped with Sun workstations back when /usr/openwin was created. However, if for some reason you want to build your own set of X binaries, you can still put them in /usr/X11R7 (as I do for testing builds of the upstream git master repos) and put that first in your $PATH, so nothing is really lost here.

The other major project that was started during Solaris 10 development and finished for Solaris 11 was replacing that old proprietary fork of X11R6, including the Xsun server, with the modernized, modularized, open source X11R7.* code base from the new X.Org, including the Xorg server. The final result, included in this Solaris 11 release, is based mostly on the X11R7.6 release, including recent additions such as the XCB API I blogged about last year, though we did include newer versions of modules that had upstream releases since the X11R7.6 katamari, such as Xorg server version 1.10.3.

That said, we do still apply some local patches, configuration options, and other changes, for things ranging from fitting into the Solaris man page style to adding support for Trusted Extensions labeled desktops. You can see all of those changes in our source repository, which is searchable and browsable via OpenGrok on src.opensolaris.org (or via hgweb on community mirrors such as openindiana.org) and available for anonymous hg cloning as well. That xnv-clone tree is now frozen, a permanent snapshot of the Solaris 11 sources, while we've created a new x-s11-update-clone tree for the Solaris 11 update releases now being developed to follow on from here.

Naturally, when your OS has 7 years between major release cycles, the hardware environment you run on changes greatly in the meantime, and as the layer that handles the graphics hardware, X11 has changed accordingly. Most of the SPARC graphics devices that were supported in Solaris 10 no longer are, because the platforms they ran in are no longer supported. We still ship a couple of supported SPARC drivers: the efb driver for the Sun XVR-50, XVR-100, and XVR-300 cards based on ATI Radeon chipsets, and the astfb driver for the AST2100 remote Keyboard/Video/Mouse/Storage (rKVMS) chipset in the server ILOM devices. On the x86 side, the EOL of 32-bit platforms let us clear out many of the older x86 video drivers for chipsets and cards you wouldn't find in x64 systems. Of course, many drivers are still supported there, due to the wider variety of graphics hardware found in the x64 world, and there have even been some recent updates, such as the addition of Kernel Mode Setting (KMS) support for Intel graphics up through the Sandy Bridge generation.

For those who followed the development as it happened, either via watching our open source code releases or using one of the many development builds and interim releases such as the various Solaris Express trains, much of this is old news to you. For those who didn't, or who want a refresher on the details, you can see last year's summary in my X11 changes in the 2010.11 release blog post. Once again, the detailed change logs for the X11 packages are available, though unfortunately, all the links in them to the bug reports are now broken, so browsing the hg history log is probably more informative.

Since that update, which covered up to the build 151 released as 2010.11, we've continued development and polishing to get this Solaris 11 release finished up. We added a couple more components, including the previously mentioned xcb libraries, the FreeGLUT library, and the Xdmx Distributed Multihead X server. We cleaned up documentation, including the addition of some docs for the Xserver DTrace provider in /usr/share/doc/Xserver/. The packaging was improved, clearing up errors and optimizing the builds to reduce unnecessary updates. A few old and rarely used components were dropped, including the rstart program for starting up X clients remotely (ssh X forwarding replaces this in a more secure fashion) and the xrx plugin for embedding X applications in a web browser page (which hasn't been kept up to date with the rapidly evolving browser environment). Because Solaris 11 only supports 64-bit systems, and most of the upstream X code was already 64-bit clean, the X servers and most of the X applications are now shipped as 64-bit builds, though the libraries of course are delivered in both 32-bit and 64-bit versions for binary compatibility with applications of each flavor. The Solaris auditing system can now record each attempt by a client to connect to the Xorg server and whether or not it succeeded, for sites which need that level of detail.

In total, we recorded 1512 change request IDs during Solaris 11 development, from the time we forked the "Nevada" gate from the Solaris 10 release until the final code freeze for today's release. Some were one-line bug fixes, some were man page updates, some were minor RFEs, and some were major projects. In the end, the result is both very different from (and hopefully much better than) what we started with, and yet it still contains the core X11 code base, with 24 years of backwards compatibility in the core protocols and APIs.

09 Nov 2011 10:10pm GMT

08 Nov 2011

Eugeni Dodonov: And now, on your favorite Intel Linux Graphics news station…

After some period of silence, this blog returns to bring you the latest and greatest news from the Intel Linux Graphics world.

So if you were sad, depressed and crying in despair without having a chance of reading about what was going on with the Intel Graphics for the past days, rejoice! :)

Those past few days were quite busy on all the projects, and with thousands of emails to keep track of, it is hard to select the most relevant news - all of it is! But I'll try to summarize the most interesting things that happened over the past few days.

Starting with the kernel: as you all already know, we are living in the post-3.1 era now, with the release of Linux 3.2-rc1. It brings lots and lots of fixes and improvements all around, and much more is yet to come.

On the Intel Graphics side, the following items caught my attention over the past days:

One particular issue worth highlighting: a long-standing bug affecting GL-based applications (Unigine Tropics and Sanctuary are probably the most notable examples, among many others) was finally fixed, thanks to amazing work by Eric Anholt, Kenneth Graunke and Keith Packard. The issue can be described as 'small moving ants on top of image edges' or 'flickering pixels'. So if you have had this issue, make sure to check out the patches!

Going to Mesa, out of hundreds of emails and commits, it is hard to choose the most interesting ones. Work on GL 3.0 support proceeds quickly, and a new Mesa stable release, 7.11.1, is almost out the door. Our QA team did full testing of this bugfix release and didn't find many issues. So prepare yourself: in a few weeks we'll have Mesa 7.11.1 out there. Stay tuned for Ian's announcement in the near future.

But as for the Mesa master branch, the following patches caught my attention the most:

On the Wayland side of the force, lots of patches landed over those days. Among them, there was an interesting proposal for a screen locking protocol by Pekka Paalanen, and some bugfix patches from Juan Zhao.

Going to the other components, we had a release of xorg-server 1.11.2 RC2, with several crash and correctness fixes; and the new stable pixman 0.24.0, which brings many performance improvements and uses architecture-specific instructions to speed up a number of different operations (such as bilinear scaling, for example).

And finally, for intel-gpu-tools, I have been working on a new intel_gpu_analyze application, which I use to trace and analyze CPU and GPU performance data during workloads, and also to check the corresponding power consumption. The code is still very experimental, and it lives in my freedesktop.org git for now. But still, I can already do some nice performance analysis like this one.

08 Nov 2011 10:08pm GMT

07 Nov 2011

Lennart Poettering: Kernel Hackers Panel

At LinuxCon Europe/ELCE I had the chance to moderate the kernel hackers panel with Linus Torvalds, Alan Cox, Paul McKenney and Thomas Gleixner on stage. I like to believe it went quite well, but check it out for yourself, as a video recording is now available online:

For me personally, I think the most notable topic covered was Control Groups, and the clarification that they are something that is needed, even though their implementation right now is in many ways less than perfect. In the end there is no reasonable way around them; much like SMP, they are technology that complicates things substantially but is ultimately unavoidable.

Other videos from ELCE are online now, too.

07 Nov 2011 3:53pm GMT

Bastien Nocera: ObexFTP in GNOME, (non-)update

If you've tried to use ObexFTP browsing (browsing files on mobile phones over Bluetooth) in GNOME in recent times, and didn't get a good experience from it (crashes, or very unreliable browsing), those problems are known, and due to the architecture used to implement the functionality.

If you want to help make ObexFTP browsing good again, please try to convince one of your coder friends to help port the existing code to use the "gobex" library that the obexd D-Bus service uses.

Unless somebody steps up in the GNOME 3.4 timeframe, I will disable the access to the functionality in gnome-bluetooth. The brokenness makes us look very bad, and the files are still available through other (cabled) means in most cases.

07 Nov 2011 3:14pm GMT

04 Nov 2011

Olivier Crête: GUPnP 0.18 (and GSSDP 0.12) harmful to VoIP calls

Due to unintentional behavior breakage in the newest versions of GUPnP and GSSDP, the UPnP NAT traversal in all VoIP applications that use Farsight2 is currently broken. This includes Empathy, Pidgin, aMSN, etc. I advise distributors to just stay with the older GUPnP 0.16 (and GSSDP 0.10) releases until this is sorted out. For those who care, the details are on bugzilla.

04 Nov 2011 11:58pm GMT

02 Nov 2011

Mateu Batle: Illusions in the Web: a real-time video editor built in HTML5

I am excited to blog about the current project I am working on and deeply enjoying. At Collabora, we have developed a tech demo of a simple video editor application running in a WebKit-based web browser using HTML5 video technology. Well, to be honest, not just plain HTML5 video, but we will get to that point later on. Keep in mind it is not a fully functional application, just a demo with some basic functionality like:

The main advantage is that all processing is done locally and all the data is kept locally too, so there is no need to upload all the source material to a server or download the final video from one, which can take quite some time, especially when working with HD resolutions. Another nice feature is that you can actually preview the final video in real time, instead of waiting for a server to process it.

Some leftover Halloween (eye) candy

All this stuff was shown by Collabora last week in Prague, during the LinuxCon Europe and GStreamer conference. Thanks Collabora for sponsoring my visit to the conferences by the way.

Let me show a screenshot and a screencast of how the demo looks.

You can even try this out if you wish. Install the Ubuntu package named witivi located in zdra/prague-demo, kindly packaged by Xavier Claessens. Additionally, you will need to get some videos to enjoy it. Just contact me if you need instructions on how to make it run.

Current trends in the web

As the title of this blog post might suggest, all this is not yet possible with the current state of the HTML5 video spec, so we had to cheat a little bit by adding some extensions to WebKit in order to make it real.
However, there is a lot of effort around multimedia on the web nowadays, so evolution could certainly make it possible in a hopefully near future. Some of the current hot spots in WHATWG and W3C seem to be WebRTC, the WebAudio API and even Augmented Reality. Things are really moving fast to make web apps more and more powerful, especially regarding multimedia features.

Just in case you haven't heard of them before, let me briefly introduce these technologies. WebRTC aims to provide real-time communication built into HTML5 without additional plugins, so imagine audio and video calls possible from any HTML application with a simple JavaScript API. WebAudio allows sample processing and synthesis from JavaScript.

An interesting proposal I discovered recently is the MediaStream API, promoted by Robert O'Callahan from Mozilla. It looks like it could lead to video editing functionality in HTML5, and it tries to integrate nicely with the other existing specs like WebRTC and HTML5 video.

General architecture

There are two main parts in this demo: the extension to WebKit for video editing, and the video editor webapp (the user interface). The WebKit extension was fairly easy to implement using the GStreamer Editing Services (GES) module, so adding video editing features was a breeze. Video playback of the media timeline is accomplished through a special WebKit GStreamer sink using a fake URL, although we are checking how to do it with blob URLs, similarly to how it is done in WebRTC. Even though the current implementation is not really very intrusive into the WebKit codebase, it would be better to keep the changes separate from WebKit for now, since there is no standard for this yet. We still have to investigate the best way to achieve this, but it depends on how the whole thing evolves.

The video editor web application has been implemented using jQuery, jQuery UI and jQuery layout libraries. Implementing the video demo webapp has not been exactly a bed of roses, but good documentation and lots of examples from the aforementioned libs helped a lot indeed.

Show me the code

Here you have the git repos in case you want to sneak on how we did it:

* GES WebKit. To compile GES WebKit, make sure you specify the --enable-mediatimeline option to configure. GES WebKit depends on GES and gnonlin.

* Web video editor demo, also known as Witivi in reference to our beloved PiTiVi. You can find here all the HTML, JavaScript and CSS magic used to build the user interface.

Future extensions

Here is a list of things that potentially could be done for the following versions:

* Transitions and effects on video clips (in progress).
* Adapt to MediaStream API.
* Move to BLOB URLs.
* Create text and apply it to videos.
* Render to a local file.
* Push video to server.
* Media timeline support for multiple layers.
* Integrate with webrtc to do some collaborative video editing.
* And many many more …

Related work [Update]

Apart from this demo, which Gustavo Boiko and I have been working on (with contributions from Alvaro Soliverez and Abner Silva), there are some very cool related demos developed by Collabora, all of which were shown at LinuxCon in Prague. We hope you enjoy them as much as we do:

* IM client running in the WebKit browser. An IM client with chat and video calls running in the browser, done mostly in JavaScript and HTML using the Telepathy framework.

* Video call plugin for Media Explorer. A Telepathy plugin for Media Explorer that lets you call your contacts, integrated nicely into the Media Explorer UI.

Stay tuned.


02 Nov 2011 9:36pm GMT

Alberto Ruiz: Trebuchet diff

It's been a few hectic weeks for me. I managed to get a flat in a really nice area of Las Palmas de Gran Canaria before I left for the Canonical Design sprint and UDS in Orlando.

I wanted to blog about my last project for Codethink, which I find quite exciting. Trebuchet Diff, or tbdiff, is a pretty neat tool that Ben Brewer started, I continued, and Richard Maw is now maintaining.

Tbdiff is targeted at creating update images between two systems. It basically takes two directory trees and creates a binary diff covering both content and metadata. So if you add a file, change its contents, or delete it, you can record that change and apply it.

We did quite a lot of searching and, surprisingly, we couldn't find a similar tool; the closest thing we found was dt, which just reports the differences in a fancy output format. I reckon this is a really useful tool.

Trebuchet is a subproject of the Baserock initiative inside Codethink, and it aims to provide a generic framework for fault-tolerant atomic/over-the-air updates, leveraging the Btrfs snapshot and rollback mechanism.

02 Nov 2011 9:05pm GMT

01 Nov 2011

Peter Hutterer: The bugzilla attention span

Some bugs are less important than others, and there's always a certain background noise of bugs that aren't complete deal-breakers. And there are always more of those to fix than I have time for.

Every once in a while, I sweep through a list of open bugs and try to address as many as possible. Given the time constraints, I have a limited attention span for each bug. Anything that wastes time reduces the chance of a bug getting fixed. The bugs that tend to get fixed (or at least considered) are the ones with attachments: an xorg.conf, Xorg.log and whichever xorg.conf.d snippets were manually added. Always helpful is an evtest log of the device that's a problem.

If it takes me 10 comments to get useful log from a bug reporter because they insist on philosophical discussions, ego-stroking, blame games, etc., chances are I've used up the allocated time for that bug before doing anything productive. And I've had it happen often enough that reporters eventually attached useful information but I just never went back to that bug. Such is life.

So, in the interest of getting a bug fixed: attach the logs and stay on-topic. If something is interesting or important enough to have a philosophical discussion about it, then we can always start one later.

01 Nov 2011 11:27pm GMT

Eugeni Dodonov: Continuing with the semi-periodic updates…

…lots of things happened in the past days, on all fronts.

It took me longer than I originally expected to write this next post in the series, due to some personal problems which kept me out of the virtual world for quite some time, but better late than never.

So, starting with the kernel: as usual, we had lots of updates.

For Mesa, there was lots of activity on all fronts too. I won't be able to cover all of it, but the main highlights of the past days were:

In Wayland land, we also had quite a few interesting changes:

Moving to VAAPI, as I already wrote here, Gwenole has released new versions of both libva and vaapi-driver-intel. Also, speaking of new releases, Eric Anholt has released libdrm 2.4.27, we had the release of xorg-server 1.11.2 RC2 and a new pixman 0.23.8 release (which happens to be a release candidate for the stable 0.24 release), and Chris Wilson has put out xf86-video-intel 2.17 RC1, with several fixes and an amazing list of 200+ SNA-related patches.

Also on SNA, I've put out two patches which allow activating SNA by means of a config file option, without recompiling. Those patches are floating around the intel-gfx mailing list, and they make the task of SNA testing much easier (at least for me :) ).

So - I guess I'll stop here for now.

See you in the next iteration of The Tales from the Crypt Intel Linux Graphics land :) .

01 Nov 2011 8:41pm GMT

Lennart Poettering: libabc

At the Kernel Summit in Prague last week, Kay Sievers and I led a session on developing shared userspace libraries, aimed at kernel hackers. More and more userspace interfaces of the kernel (for example, many which deal with storage, audio, resource management, security, file systems or a number of other subsystems) nowadays rely on a dedicated userspace component. As people who work primarily in the plumbing layer of the Linux OS, we noticed over and over again that the authors of these libraries, who are usually at home on the kernel side of things, make the same mistakes repeatedly, thus making life for the users of the libraries unnecessarily difficult. In our session we tried to point out a number of these mistakes, in particular places where the usual kernel hacking style translates badly into userspace shared library hacking. Our hope is that a few kernel developers will have a look at our list of recommendations and consider the points we are raising.

To make things easy we have put together an example skeleton library we dubbed libabc, whose README file includes all our points in terse form. It's available on kernel.org:

The git repository and the README.

This list of recommendations draws inspiration from David Zeuthen's and Ulrich Drepper's well known papers on the topic of writing shared libraries. In the README linked above we try to distill this wealth of information into a terse list of recommendations, with a couple of additions and with a strict focus on a kernel hacker background.

Please have a look, and even if you are not a kernel hacker there might be something useful to know in it, especially if you work on the lower layers of our stack.

If you have any questions or additions, just ping us, or comment below!

01 Nov 2011 12:46am GMT

31 Oct 2011

Jesse Barnes: Writing standalone programs with EGL and KMS (updated)

Dear lazyweb,

Both on dri-devel and at the most recent Kernel Summit, the idea of a KMS based console program came up yet again. So in the interest of facilitating the lazyweb to write one, I thought I'd provide a review of what it takes to write a simple KMS program that ties in with GL, just in case anyone wants to port the VTE widget and give me my VTs on a cube. :) The ideal situation would be one where a distro could set CONFIG_VT=n and have userspace provide all console services, including support for the various types of input and output that exist in the wild today. The only thing that ought to be left to the kernel is spitting out panic information somehow. That means having a very simple text drawing layer remain, and a way for the kernel to flip to the latest kernel console buffer (today these functions are provided by fbcon and the DRM KMS layer, but if a userspace console service existed, we could likely clean them up quite a bit, and remove the need for the full VT and fb layers to be compiled in).

Initialization

One of the first things any DRM based program must do is open the DRI device:

int fd;
fd = open("/dev/dri/card0", O_RDWR);
if (fd < 0)
    err(1, "open /dev/dri/card0");   /* needs <err.h> */

You'll need to save that fd for use with any later DRM call, including the ones we'll need to get display resources. The path to the device may vary by distribution or configuration (e.g. with a multi-card setup), so take care to report sensible error strings if things go wrong.

The next step is to initialize the GBM library. GBM (for Generic Buffer Management) is a simple library to abstract buffers that will be used with Mesa. We'll use the handle it creates to initialize EGL and to create render target buffers.

struct gbm_device *gbm;
gbm = gbm_create_device(fd);

Then we get EGL into shape:

EGLDisplay dpy;
dpy = eglGetDisplay(gbm);

We'll need to hang on to both the GBM device handle and the EGL display handle as well, since most of the calls we make will need them. What follows is more EGL init.

EGLint major, minor;
const char *ver, *extensions;
eglInitialize(dpy, &major, &minor);
ver = eglQueryString(dpy, EGL_VERSION);
extensions = eglQueryString(dpy, EGL_EXTENSIONS);

Now that we have the extensions string, we need to check and make sure the EGL_KHR_surfaceless_opengl extension is available, since it's what we'll use to avoid the need for a full windowing system, which would normally provide us with pixmaps and windows to turn into EGL surfaces.

if (!strstr(extensions, "EGL_KHR_surfaceless_opengl")) {
    fprintf(stderr, "no surfaceless support, cannot initialize\n");
    exit(-1);
}

And now we're finally ready to see what outputs are available for us to use. There are several good references for KMS output handling: the DDX drivers for radeon, nouveau, and intel, and testdisplay.c in the intel-gpu-tools package. (The below is taken from eglkms.c in the mesa-demos package.)

    drmModeRes *resources;
    drmModeConnector *connector;
    drmModeEncoder *encoder;
    drmModeModeInfo mode;
    int i;

    /* Find the first available connector with modes */

    resources = drmModeGetResources(fd);
    if (!resources) {
        fprintf(stderr, "drmModeGetResources failed\n");
        return;
    }

    for (i = 0; i < resources->count_connectors; i++) {
        connector = drmModeGetConnector(fd, resources->connectors[i]);
        if (connector == NULL)
            continue;

        if (connector->connection == DRM_MODE_CONNECTED &&
            connector->count_modes > 0)
            break;

        drmModeFreeConnector(connector);
    }

    if (i == resources->count_connectors) {
        fprintf(stderr, "No currently active connector found.\n");
        return;
    }

    /* Use the connector's first (preferred) mode */
    mode = connector->modes[0];

    for (i = 0; i < resources->count_encoders; i++) {
        encoder = drmModeGetEncoder(fd, resources->encoders[i]);

        if (encoder == NULL)
            continue;

        if (encoder->encoder_id == connector->encoder_id)
            break;

        drmModeFreeEncoder(encoder);
    }

At this point we have the connector, encoder, and mode structures we'll use to put something on the display later.

Drawing & displaying

We've initialized EGL, but we need to bind an API and create a context. In this case, we'll use OpenGL.

    EGLContext ctx;

    eglBindAPI(EGL_OPENGL_API);
    ctx = eglCreateContext(dpy, NULL, EGL_NO_CONTEXT, NULL);

    eglMakeCurrent(dpy, EGL_NO_SURFACE, EGL_NO_SURFACE, ctx);

Now we have a fully functional KMS and EGL stack and it's time for a little bit of glue magic. We need to create some buffers using GBM and bind them into EGL images that we can use as renderbuffers.

    EGLImageKHR image;
    struct gbm_bo *bo;
    GLuint fb, color_rb, depth_rb;
    uint32_t handle, stride;

    /* Create a full screen buffer for use as a render target */
    bo = gbm_bo_create(gbm, mode.hdisplay, mode.vdisplay, GBM_BO_FORMAT_XRGB8888,
                       GBM_BO_USE_SCANOUT | GBM_BO_USE_RENDERING);

    glGenFramebuffers(1, &fb);
    glBindFramebuffer(GL_FRAMEBUFFER_EXT, fb);

    handle = gbm_bo_get_handle(bo).u32;
    stride = gbm_bo_get_pitch(bo);

    image = eglCreateImageKHR(dpy, NULL, EGL_NATIVE_PIXMAP_KHR, bo, NULL);

    /* Set up render buffer... */
    glGenRenderbuffers(1, &color_rb);
    glBindRenderbuffer(GL_RENDERBUFFER_EXT, color_rb);
    glEGLImageTargetRenderbufferStorageOES(GL_RENDERBUFFER, image);
    glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_RENDERBUFFER_EXT, color_rb);

    /* and depth buffer */
    glGenRenderbuffers(1, &depth_rb);
    glBindRenderbuffer(GL_RENDERBUFFER_EXT, depth_rb);
    glRenderbufferStorage(GL_RENDERBUFFER_EXT, GL_DEPTH_COMPONENT, mode.hdisplay, mode.vdisplay);
    glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT, GL_RENDERBUFFER_EXT, depth_rb);

    if (glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT) != GL_FRAMEBUFFER_COMPLETE) {
        fprintf(stderr, "something bad happened\n");
        exit(-1);
    }

At this point, you're all ready to start rendering using standard GL calls. Note that since we don't have a window system or a window bound, we won't be using eglSwapBuffers() to display the final renderbuffer. But we still need to make sure the rendering is finished before we display it, so we can use glFinish() or some out of band synchronization mechanism (TBD) to ensure our rendering is complete before we display the new buffer.

Assuming we have something presentable, we can now try to display it:

    uint32_t fb_id;

    /* Create a KMS framebuffer handle to set a mode with */
    drmModeAddFB(fd, mode.hdisplay, mode.vdisplay, 24, 32, stride, handle, &fb_id);

    drmModeSetCrtc(fd, encoder->crtc_id, fb_id, 0, 0, &connector->connector_id, 1, &mode);

In this example, we're assuming a color depth of 24 bits at 32 bits per pixel. That can of course vary, but what's supported will depend on the chipset (24/32 is fairly common, however). The drmModeSetCrtc call will also take care to simply flip to the fb in question if the mode timings are already programmed and are the same as the mode timings passed into the call. This allows for a simple page flipping scheme: create two fbs using AddFB (pointing at different buffers) and flip between them with SetCrtc calls in a loop. Alternately, one can draw directly to the buffer that's being displayed, if the visibility of partial rendering isn't a problem.

DRM master privs and authentication

I forgot to mention that any program like this must drop DRM master privileges if another app, like X, wants to start up and use the DRM layer. The calls for this are drmSetMaster(int fd) and drmDropMaster(int fd). If no other DRM master is present, the SetMaster call should succeed. And before switching away due to a dbus call or whatever, any console program using the DRM layer should call drmDropMaster to allow the next client to get access to the DRM. Alternately, the console program could provide a display management API and make X a slave of that interface. That would also mean providing DRM authentication for other apps wanting to do direct rendering (typically master privs are required for display management, and simple auth privs are required to do command submission). Wayland is a good example of a display manager that provides this kind of functionality.

And that's all there is to it, and should be enough to get someone started on kmscon… :) If you have questions or comments, feel free to bring them up at dri-devel@lists.freedesktop.org, in the comments section here, or via email to me.

31 Oct 2011 10:05pm GMT

29 Oct 2011

Christian Schaller: Transmageddon 0.20 released

So after a very long one-year development cycle, I finally managed to push a new release of Transmageddon. The main reason it took this long was that I decided to port to the brand new discoverer and encodebin elements, to greatly simplify the code and improve functionality. The thing was that Edward Hervey, when he wrote those elements, took the conscious decision to make them assume that all needed GStreamer plugins behave as they should, like perfect GStreamer citizens. As you might imagine, since many of the plugins had never been tested for all such behaviour, a lot of things did not work after the port. But over the last months I have filed bug reports, and most of them are now fixed. And with today's new python-gstreamer release (0.10.22), the encodebin binding bug is fixed, so I decided it was time to put out a new release.

I am quite happy with my new feature list.

The new user interface should solve the problems that people with small screens, like many netbook users, used to have with Transmageddon. Transmageddon now also automatically deinterlaces interlaced video clips. I want to improve this feature in a later release, by making deinterlacing optional and by using Robert Swain's new elements, which can help detect interlaced files that have been encoded as progressive files (as shown in his presentation in Prague).

Another major feature of this release is that audio-only files are now officially supported, and you can also use Transmageddon to easily output just the audio from an audio+video clip.

I also added an HTML5 WebM profile, so output to that should be easier than ever. In fact I used that profile when I transcoded 100GB of video that we had recorded at an internal session at Collabora.

My next goal is to port to GStreamer 1.0 and GTK3. That said, if there turn out to be some brown-paper-bag issues in this release I will try to fix them and make a new release. My guess, though, is that most bugs people encounter will be due to issues that are only fixed in GStreamer git so far, so until those releases are out not everything will work 100%, and there might be some small regressions from the previous release.

Anyway, I hope you head over to the Transmageddon website and grab the latest release for a test run. I will try to follow up on bug reports, but I might be a bit slow the next week as I am flying down to Lahore to celebrate my sister-in-law's wedding and also see my beautiful wife again after almost a month apart.

29 Oct 2011 8:56pm GMT

Christian Schaller: Back from GStreamer Conference in Prague

I am back home for about a day today after spending a week in Prague for the GStreamer Conference and LinuxCon Europe.

Had an absolute blast, and I am really happy the GStreamer Conference again turned out to be a big success. A big thanks to the platinum sponsor Collabora, and to the two silver sponsors Fluendo and Google, who made it all possible. Also a big thanks to Ubicast, who were onsite recording all the talks; they aim to have them all online within a month.

While I had to run back and forth a bit to make sure things were running smoothly, I did get to some very interesting talks, like Monty Montgomery from Xiph.org talking about the new Opus audio codec they are working on with the IETF and the strategies they are using to fend off bogus patent claims.

On a related note, I saw that Apple released their lossless audio codec ALAC as free software under the Apache license. It is always nice to see such things, even if ALAC has for the most part failed to make any headway against the already-free FLAC codec. If Apple would now join the effort around WebM, things would really start looking great in the codec space.

We ran a Collabora booth during the LinuxCon and Embedded Linux days that followed the GStreamer Conference. Our demos, showcasing an HTML5 video-editing UI using GStreamer and the GStreamer Editing Services, and video conferencing using Telepathy through HTML5, were a great success, and our big-screen TV running the Media Explorer media center combined with Telepathy-based video conferencing provided us with a steady stream of people to our booth. For those who missed the conference, all the tech demos can be grabbed from this Prague-demo Ubuntu PPA.

So as you might imagine I was quite tired by the time Friday was almost done, but thanks to Tim Bird and Sony I got a really nice end to the week, as I won a Sony Tablet S through the Elinux wiki editing competition. The tablet is really nice, and it was the first tablet I ever wanted, so winning one was great. The feature set is impressive, with built-in DLNA support; it can function as a TV remote, and it supports certain PlayStation 1 titles. The 'folded magazine' shape makes it really comfortable to hold, and I am going to try to use it as an e-book reader as I fly off to Lahore tomorrow morning for my sister-in-law's wedding.

29 Oct 2011 1:43pm GMT

28 Oct 2011

feedplanet.freedesktop.org

Eugeni Dodonov: VAAPI users around the world, rejoice!

Hello media users around the world with Intel cards aboard!

I am glad to bring you the good news - Gwenole has just released libva and vaapi-driver-intel version 1.0.15 into the wild! Go ahead and grab them here and here!

And now for the nice details of what you should expect from those releases.

Vaapi-driver-intel

This release improves the VC-1 and MPEG-2 decoding, and fixes some memory leaks.

Libva

This release moves the i965 driver into its own subdirectory, improves packaging support and build issues, and comes with some miscellaneous fixes.

As you can see, this is a mostly-bugfix release. The next step is 1.0.16, which will bring some nice features your way in the near future.

As always, stay tuned :) .

28 Oct 2011 2:01pm GMT

Eugeni Dodonov: And now, time for some numbers

Michael Larabel from Phoronix was kind enough to carry out two different rounds of testing of the recently released kernel 3.1 on our Intel Linux graphics hardware today (thanks again, Michael!). He split his evaluation into two articles - the first focuses on performance and power usage for the stock 3.1 kernel, and the second specifically enables RC6 on this kernel to see its effects in action.

His testing confirms what I was pointing out in my latest series of posts (though it always feels great to have independent confirmation of the results). For performance numbers, kernel 3.1 brings around +20% to +40% improvements over the previous release on the Sandy Bridge architecture. Yeah, this is nice. So if you haven't updated your kernel to the latest and greatest one (in other words, kernel 3.1), you should really consider doing so.

As for power numbers, we have some good news too. The default kernel power usage on 3.1 is mostly similar to 3.0, with some small variations here and there. However, if one enables RC6 support, the immediate result is that power usage drops by up to 40% at idle. It is really nice to see single-digit watt numbers on the current generation of desktop hardware (while this has been pretty common on Atom CPUs for some time already, the desktop and mobile versions of Intel graphics were not used to such numbers - yet).

One additional bonus Michael found is even more interesting. Apparently, by enabling RC6, graphics-intensive applications get an additional performance boost of up to 10%. This was a bit unexpected for me, but it is better to have good surprises than bad ones.

From what we have discussed with Jesse Barnes, such improvements could come from several different paths. The first explanation is that, with RC6 enabled, the graphics card actually consumes much less power (down to 0V), leaving more room for the CPU to use the unclaimed power budget to do more processing of its own. The second is that, thanks to the additional thermal headroom we get from the RC6-provided power savings, the GPU frequency has more room for scaling. So effectively, in both of those cases, RC6 buys you better performance, both CPU- and GPU-wise - at the cost of the somewhat higher power usage that comes with that performance.

So, in a few words, I would define this situation as battery when you want, performance when you need. When you are on battery, under a mostly idle workload (for example, browsing, or reading or writing some blog posts), your battery lasts much longer. And when you start a processing-intensive application (say, openarena or angry birds), it gets more horsepower to get the job done, and some extra FPS that could mean life or death - which is especially true in the case of said workloads :) . Of course, those additional horses are hungry, so more power is used while that additional performance is being delivered - but that is to be expected in any case.

While writing this description, I came to realize that the feature known as GPU turbo, which is present on Intel graphics cards, could be used to define and control such behavior at a much finer granularity. Surprisingly, from what I've searched, nobody seems to have covered its functionality yet. I guess I'll try to cover how it works, and how one can use and control it from userspace, in one of my next posts, so stay tuned! Meanwhile, enjoy all those nice power- and performance-related bonuses which the Intel Linux Graphics team brought you just in time for Halloween :) .

28 Oct 2011 12:51am GMT

26 Oct 2011

feedplanet.freedesktop.org

Ian Romanick: No build breaks!

Git is a wonderful tool. It enables some really powerful, flexible working models. One model is commonly used when developing a major feature (or large refactor): do the work as a long series of small commits on a topic branch, clean up and reorder the commits into a tidy patch series, and then push the entire group at once.

There is a danger in this process. While reordering patches it is very easy to create a tree that won't build at intermediate commits. Since the entire group of patches gets pushed at once, this may not seem like a problem. However, having intermediate commits not build makes it impossible to bisect when failures are found later.

Having every commit along a long patch series build is an absolute requirement.

However, it's kind of a pain in the ass to manually build the tree at every step. The emphasis in the previous sentence is on manually. In some sense the ideal tool is something like a scripted bisect. This doesn't work, of course, because both ends of the commit sequence, in bisect terminology, are good.

To get around this in Mesa, I wrote a script that builds origin/master and every commit in some range of SHA1s. If any build fails, the script halts. At the end, it logs the change in the compiler warnings from origin/master to each commit (as I write this, it occurs to me that the warning diffs should be between sequential commits). This is useful to prevent new warnings from creeping in.

My script is available. This particular script is very specific to Mesa, but it should be easy to adapt to other projects. The script is run as:

check_all_commits.sh /path/to/source /path/to/build firstSHA..lastSHA
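The core loop of such a checker can be sketched in a few lines of portable shell. This is a hypothetical `check_range` helper, not Ian's actual script: it assumes a plain in-tree `make` build, and it leaves out the Mesa-specific setup and the warning-diff logging.

```shell
# Build every commit in RANGE (oldest first), halting on the first
# failure - the property that keeps later bisects usable.
# Usage: check_range /path/to/source firstSHA..lastSHA
check_range() {
    src=$1
    range=$2
    (
        cd "$src" || exit 1
        # --reverse walks the range oldest-first, visiting every
        # commit instead of bisecting between the two endpoints.
        for sha in $(git rev-list --reverse "$range"); do
            echo "building $sha"
            git checkout --quiet "$sha" || exit 1
            make -s || { echo "build broke at $sha"; exit 1; }
        done
    )
}
```

Run against a range, the helper returns non-zero at the first commit that fails to build, so a broken intermediate commit is caught before the series is ever pushed.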

26 Oct 2011 10:35pm GMT