23 Apr 2015

Christian Schaller: Thank you for your feedback!

I just wanted to say thank you to everyone who commented and provided feedback on my last blog entry, where I asked what would make you switch to Fedora Workstation. We will take all that feedback and incorporate it into our Fedora Workstation 23 planning. So a big thanks to all of you!

23 Apr 2015 7:30pm GMT

Luc Verhaegen: The value of writing code and instigating change.

You probably saw this Phoronix article which references the log of the #dri-devel channel on Freenode. This was an attempt to trash my work on Lima and Tamil, using my inability to get much code done, and my unwillingness to throw hackish Tamil code over the wall, against me. Let me take some time to explain some of that from my point of view.

Yes, I have lost traction.

Yes, I am not too motivated to work on cleaning up code at this time. I haven't been too motivated for much of anything since FOSDEM 2014. I still have my sunxi KMS code to fix up and clean up, and the same is true for the Lima driver for Mesa. It is also pretty telling that I started to write this blog entry more than two months ago and only now managed to post it.

Under the circumstances, though, anyone else would've given up two or three years ago.

Due to my usual combination of stubbornness and actually sitting down and doing the work, I personally kickstarted the whole open ARM GPU movement. This is the third enormous paradigm shift in Linux graphics that I have caused in the past almost 12 years (after modesetting, and pushing ATI to a point of no return with an open source driver). All open ARM GPU projects were at least inspired by this; some actually use some Lima code, and others would simply not have happened if I hadn't done Lima or had kept my mouth shut at important junctures. This was not without sacrifices. Quite the opposite.

From March 2011 on, I spent an insane amount of time on this. Codethink paid for, all in all, six weeks of my time when I was between customer projects. Some of the very early work was done on Nokia time as, thanks to our good friend Stephen Elop, operations were winding down severely in late 2011, to the point where mostly just constant availability was needed. However, I could've used that extra free time in many other useful ways. When I got a job offer from SuSE in November 2011 (basically getting my old job back, due to Matthias Hopf taking up a professorship), I turned it down so I could work on the highly important Lima work instead.

When Codethink ran into a cashflow issue in October 2012 (which apparently was resolved quite successfully, as Codethink has grown a lot since then), I was shown the door. This wasn't too unreasonable a decision given my increasing disappointment with how Lima was being treated, the customer projects I was being sent on, and the assumption that a high-profile developer like me would have an easy time finding another employer. During the phone call announcing my dismissal, I was however told that ARM had already been informed of it, so that business relations between ARM and Codethink could be resumed. I rather doubt that that is standard procedure for dismissing people.

The time after this has been pretty meagre. If you expected at least one of Linaro, Canonical, Mozilla or Jolla to support Lima, you'd be pretty wrong. Similarly, I have not seen any credible attempt to support Lima driver development on a consulting basis. Then there is the fact that applying to companies which use Mali but have no interest in an open source driver for it is completely out, as that would mean working on the ARM binary driver, which in turn would mean that I would no longer be allowed to work on Lima. All other companies have not gotten with the times with respect to hiring policies; this ranges from demanding that people relocate, to interviewing solely about OpenGL spec corners for supposed driver development positions. There was one case in this whole time which had a proper first-level interview, and where everything seemed to be working out, only to go dead without a proper explanation after a while. I have since learned from two independent sources that my hiring was stopped due to politics involving ARM. Lima has done wonders for making me totally unhirable.

In May 2013 I wrote another proposal to ARM for an open source strategy for Mali (the first was done in Q2 2011 as part of Codethink), hoping that in the meantime sentiments had shifted enough and that the absence of an in-between company would make the difference this time round. It got the unofficial backing of some people inside ARM, but they just ended up getting their heads chewed off, and I hope that they didn't suffer too much for trying to do the right thing. The speed with which this proposal was rejected by Jem Davies (ARM MPD VP of Technology), and the undertone of the email reply he sent me, suggest that his mind was made up beforehand and that he wasn't too happy about my actions this time either. This is quite amazing, as one would expect anyone at the VP level of a major company to at least keep his options open. Between the way in which this proposal was rejected and the political pressure ARM seems to apply against Lima, Mr Davies's claims that ARM simply has no customer demand and no budget, and nothing more, are very hard to believe.

So after the polite applause at the end of my FOSDEM 2014 talks, it hit me. That applause was the only support I was ever going to get for my work, and I was just going to continue to hit walls everywhere, both with Lima and in life. And it would take another year until the next applause, if that. To be absolutely honest, given the way the modesetting and ATI stories worked out, I should be used to that by now, however hard it still is. But this time round I had been (and of course still am) living with my fantastic and amazingly understanding girlfriend for several years, and she ended up supporting me through Q4 2013 and most of 2014 when my unemployment benefit ran out. This time round, my trailblazing wasn't backfiring on me alone; I was dragging someone else down with me, someone who deserves much better.

I ended up being barely able to write code throughout most of 2014, and focused entirely on linux-sunxi. The best way to block things out was rewriting the linux-sunxi wiki, and that wiki is now light-years ahead of everything else out there, and an example for everyone else (like linux-exynos and linux-rockchip) for years to come. The few code changes I did make were always tiny and trivial. In August, Siarhei Siamashka convinced me that a small display driver for sunxi for U-Boot would be a good quick project, and I hoped that it would whet my appetite for proper code again. Sadly though, the fact that sunxi doesn't hide all chip details the way the Raspberry Pi does (the Pi was the target device for simplefb) meant that simplefb had to be (generically) extended, and this proved once and for all that simplefb is a total misnomer and that people had been kidding themselves all along (I seem to always run into things like this). The resulting showdown had about 1.5 times as many emails as simplefb has lines of C code. Needless to say, this did not help my mood either.

In June 2014, a very good friend of mine, who has been a Qt consultant for ages, convinced me to try consulting again. It took until October before something suitable came along. That project was pretty hopeless from the start, but it restored my faith in my problem-solving skills and in how valuable those really could be, both for a project and, for a change, for myself. While I wasn't able to afford traveling to Bordeaux for XDC 2014, I was able to afford FOSDEM 2015, and I am paying for my own upkeep for the first time in a few years. Finally replacing my incredibly worn and battered ProBook is still out of the question though.

When I was putting together the FOSDEM Graphics DevRoom schedule around Christmas, I noticed that there was an inexcusably low number of talks, and my fingers became itchy again. The consulting job had boosted morale sufficiently to do some code again, and showing off some initial renders on the Mali T series seemed doable within the five or so weeks left. It gave the FOSDEM graphics devroom another nice scoop, and filled in one of the many, many gaps in the schedule. This time round I didn't kid myself that it would change the world or boost my profile or anything. This time round, I was just going to prove to myself that I have learned a lot since starting Lima, and that I can still do this sort of thing with ease, while having fun doing so and not exerting myself too much. And that's exactly what I did, nothing more and nothing less.

The vultures are circling.

When Egbert Eich and I were brainstorming about freeing ATI back in April and May of 2007, I stated that I feared that certain "community" members would not accept me doing something this big, and I named three individuals. While I was initially alone with that view, we did decide that, in light of the shit-throwing contest around XGL and Compiz, doing everything right for RadeonHD was absolutely paramount. This is why we did everything in the open, at least from the moment AMD allowed us to do so: open docs, an IRC channel, a public mailing list, and as short a turnaround on patches as possible given our quality standards and the fact that ATI was not helping at all. I did have a really tough uphill battle convincing everyone that native C code was the only way as, even with all the technical arguments for native code, I knew that my own reputation was at stake. I knew that if I had accepted AtomBIOS for basic modesetting, after only just having convinced the world that BIOS-free modesetting was the only way forward, I would've personally been nailed to the cross by those same people.

From the start we knew that ATI was not going to go down without a fight on this one, and I feared that those "community" members would do many things to try to damage this project, but we did not anticipate AMD losing political control over ATI internally, nor did we anticipate the lack of scruples of those individuals. AMD needed this open source driver to be good, as the world hated fglrx to the extent that ATI's reputation was dragging AMD's chip and server business down, and AMD was initially very happy with our work on RadeonHD. Yet to our amazement, some ATI employees sided with those same "community" members, and together they worked on a fork of RadeonHD, but this time doing everything the ATI way, like fglrx.

The amount of misinformation, and sometimes even downright slander, in this whole story was amazing. Those individuals were actively remarketing RadeonHD as an evil Microsoft/Novell conspiracy, and while the internet previously had been shouting "death to fglrx", it ended up shouting "death to radeonhd". Everyone only seemed to want to join in the shouting, and nobody took a step back to analyze what was really going on. This was ATI & Red Hat vs AMD & SuSE, and it had absolutely nothing to do with creating the best possible open source driver for ATI hardware.

During all of this I was being reminded internally that SuSE is the good guy and that it doesn't stoop to the level of the shit-throwers. Being who I am, I didn't always manage to restrain myself, but even today I feel that if I had been more vocal, I could've warded off some of the most inane nonsense that was spewed. When Novell clipped SuSE's wings even further after FOSDEM 2009, I was up for the chop too, and I was relieved, as I could finally get some distance between myself and this clusterfuck (that once was a free software dream). I also knew that my reputation was in tatters, and that if I had been more at liberty to debunk things, the damage there could've been limited.

The vultures had won, and no one had called them out for their insidious and shameless behaviour. In fact, they felt empowered, free to use whatever nasty and underhanded tactics against anything that doesn't suit them for political or egotistical reasons. This is why the hacking of the RadeonHD repository occurred, and why the reaction of the X.org community was aimed against the messenger and victim, and not against the evildoers (I was pretty certain, before I sent that email, that it would be one or the other; I hadn't foreseen that it would be both, though).

So when the idea of Lima formed in March 2011, I no longer just feared that these people would react. I knew for a fact that they would be there, ready to pounce on the first misstep. This fed my frustration with how Lima was being treated by Codethink, as I knew that, in the end, I would be the one paying the price again. Similarly, if my code was not good enough, these individuals would think nothing of stepping in, rewriting history, and then trashing me for not having done corner X or Y, as that was part of their modus operandi when they destroyed RadeonHD.

So these last few years, I have been caught in a catch-22. ARM GPUs are becoming free, and I have clearly achieved what I set out to achieve. The fact that I so badly misjudged both ARM and the level of support I would get should not take away from the fundamental changes that came from my labour. Logically and pragmatically, it should be pretty hard to fault me in this whole story, but that is not how this game works. No matter what fundamental difference I could've made with Lima, no matter how large and clear my contributions and sacrifices were, these people would still try to distort history and do their utmost to defame me. And now that moment has come.

But there are a few fundamental differences this time round: I am able to respond as I see fit, as there is no corporate structure to shut me up, and I truly have nothing left to lose.

What started this?

In short:

12:59 #dri-devel: < Jasper> the alternative is fund an open-source replacement, but touching
                  lima is not an option

Jasper St. Pierre is very vocal on #dri-devel, constantly coming up with questions and asking for help or guidance. But he and his employer Endless Mobile seem to have their minds set on using the ARM Mali DDK, ARM's binary driver. He is working on integrating the Mali DDK with things like Wayland, and he is very active in getting help with doing so. He has been at it for several months now.

The above IRC quote is not what I initially responded to, though; the following is:

13:01 #dri-devel: < Jasper> lima is tainted
13:01 #dri-devel: < Jasper> there's still code somewhere on a certain person's hard disk he
                  won't release because he's mad about things and the other project contributors
                  all quit
13:01 #dri-devel: < Jasper> you know that
13:02 #dri-devel: < Jasper> i'm not touching lima with a ten foot pole.

This is pure unadulterated slander.

Lima is not in any way tainted. It is freshly written MIT-licensed code, and I was never exposed to original ARM code. This statement is an outrageous lie, and it is amazing that anyone in the open source world would lie like that and expect to get away with it.

Secondly, I am not "mad about things". I am severely demotivated, which is quite different. But even then, apart from me not producing much code, there had been no real indication to Jasper St. Pierre up to that point that this was the case. Also, "all" the contributors to Lima did not quit. I am still there, albeit not able to produce much in the way of code at the moment. Connor is in college now and just spent the summer interning for Intel, writing a new IR for Mesa (and was involved with Khronos SPIR-V). Ben Brewer only worked on Lima on Codethink's time, and that stopped halfway through 2012. Those were the only major contributors to Lima. So the second line Jasper wrote there was another baseless lie.

Here is another highly questionable statement from Mr. St Pierre:

13:05 #dri-devel: < Jasper> robclark, i think it's too late tbh -- any money we throw at lima
                  would be tainted i think

What Jasper probably meant is that HE is the one who is tainted, by having worked with the ARM Mali DDK directly. But his quick-fire statements at 13:01 definitely could've fooled anyone into reading something entirely different. Anyone who reads that sees "Lima is tainted because libv is a total asshole". This is how things worked during RadeonHD as well: throw some half-truths out together, and let the internet noise machine do the rest.

Here is an example of what happens then:

13:11 #dri-devel: < EdB> Jasper: what about the open source driver that try to support the next
                  generation (I don't remember the name). Is it also tainted ?
13:19 #dri-devel: < EdB> I never heard of that lima tainted things before and I was surprised

Notice that Jasper made absolutely no effort to properly clarify his statements afterwards. This is where I came in, to at least try to defuse this baseless shit-throwing.

When Jasper was confronted, his statements became even more, let's call it, varied. Suddenly, the reason for not using Lima was stated:

13:53 #dri-devel: < Jasper> Lima wasn't good enough in a performance evaluation and since it
                  showed no activity for a few years, we decided to pay for a DDK in order to 
                  ship a product instead of working on Lima instead.

To me, that sounds like a pretty complete reason why Endless Mobile chose not to use Lima at the time. It does not, however, exclude trying to help further Lima at the same time, or later on, like now. But then the following statements came out of Jasper...

13:56 #dri-devel: < Jasper> libv, if we want to contribute to Lima, how can we do so?
13:57 #dri-devel: < Jasper> We'd obviously love an active open-source driver for Mali.
...
13:58 #dri-devel: < Jasper> I didn't see any way to contribute money through http://limadriver.org/
13:58 #dri-devel: < Jasper> Or have consulting done

Good to know... But then...

13:58 #dri-devel: < libv> Jasper: you or your company could've asked.
13:59 #dri-devel: < Jasper> My understanding was that we did.
13:59 #dri-devel: < Jasper> But that happened before I arrived.
13:59 #dri-devel: < libv> definitely not true.
13:59 #dri-devel: < Jasper> I was told we tried to reach out to the Lima project with no response.
                  I don't know how or what happened.
14:00 #dri-devel: < libv> i would remember such an extremely rare event

The supposed reaching out from Endless Mobile to Lima seems pretty weird when taking the earlier statements into account, like the ones at 13:53 and especially at 13:01. Why would anyone make those statements and then turn around and offer to help? What is Jasper hiding?

After that I couldn't keep my mouth shut, and I voiced my own frustration with how demotivated I am about doing the cleanup and release, and whined about some possible reasons why that is so. And then, suddenly...

14:52 #dri-devel: < Jasper> libv, so the reason we didn't contribute to Lima was because we didn't
                  imagine anything would come of it.
14:53 #dri-devel: < Jasper> It seems we were correct.

It seems that now Jasper's memory was refreshed, again, and that Endless Mobile never did reach out, for another reason altogether...

A bit further down, Jasper lets the following slip:

14:57 #dri-devel: < Jasper> libv, I know about the radeonhd story. jrb has told me quite a lot.
14:58 #dri-devel: < Jasper> Jonathan Blandford. My boss at Endless, and my former boss at Red Hat.

And suddenly the whole thing becomes clear. In the private conversation that ensued, Jasper explained that he is good friends with exactly the three persons named before, and that those three persons told him all he wanted to know about the RadeonHD project. It also seems that Jonathan Blandford was, in some way, involved in Red Hat's decision to sign a pact with the ATI devil to produce a fork of a proper open source driver for AMD. These people simply will never go near anything I do, and would rather spend their time slandering yours truly than actually contribute or provide support, and this is exactly what has been happening here too.

This brings us straight back to what was stated before anything else I quoted so far:

12:57 #dri-devel: < EdB> Jasper: I guess lima is to be avoided
...
12:58 #dri-devel: < Jasper> EdB, yeah, but that's not for legal reasons

Not for legal reasons indeed.

But the lies didn't end there. Next to coming up with more reasons not to contribute to Lima, in that same private conversation Jasper "kindly" suggested that he'd continue Lima if only I threw my code over the wall. Didn't he state previously that Lima was tainted, by which I think he meant that he was tainted and couldn't work on Lima?

Why?

At the time, I knew I was being played and bullshitted, but I couldn't pin it on a single statement alone in the maelstrom that is IRC. It is only after rereading the IRC log and putting Jasper's statements next to each other that the real story becomes clear. You might feel that I am reading too much into this, and that I have deliberately taken Jasper's statements out of context to make him look bad. But I implore you to verify the above against the IRC log, and you will see that I have left very little necessary context out. The above is Jasper's statements, with my commentary, put closely together, removing the noise (which is a really bad description of the rest of the IRC discussion that went on at that time) and boosting the signal. Jasper's statements are highly erratic; they make no sense when considered collectively as truths, and some are plain lies when considered individually.

Jasper has been spending a lot of time asking for help with supporting a binary driver. Nobody in the business does that. I guess that (former) Red Hat people are special. I still have not understood why it is that Red Hat is able to get away with so much while companies like Canonical and SuSE get trashed so easily. But that is beside the point here; no one questioned why Jasper is wasting so much open source developer time on a binary driver. Jasper was indirectly asked why he was not using Lima, nothing more.

If Jasper had just stated that Endless Mobile decided not to use Lima because Endless Mobile thought Lima was not far enough along yet, and that Endless Mobile did not have the resources to do or fund all the work still necessary, then I think that would've been an acceptable answer for EdB. It's not something I like hearing, but I am quite used to it by now. Crucially though, I would not have been in a position to complain.

Jasper instead went straight to trashing me and my work, with statements that can only be described as slander. When questioned, he then gave a series of mutually exclusive statements, which can only be explained as "making things up as he goes along", working very hard to mask the original reasons for not supporting Lima at all.

This is about more than just "Lima is not ready"; otherwise Jasper would have been comfortable sticking to that story. Jasper, his boss and Endless Mobile did not decide against Lima on any technical or logistical basis. This whole story never was about code at all. These people simply would do anything but support something that involves me, no matter how groundbreaking, necessary or worthy my work is. They would much rather trash and reinvent, and then trash some more...

It of course did not take long for the other vultures to join the frenzy... When the discussion on IRC was long over, Daniel Stone made one attempt to revive it, to no avail. Then, a day later, someone anonymously informed Phoronix, and Dave Airlie could then clearly be seen trying to stoke the fire, again, with limited success. I guess the fact that most Phoronix users still only care about desktop hardware played to my advantage here (for once). It does very clearly show how these people work, though, and I know that they will continue to try to play this card, trying hard to trigger another misstep from me or to cause a proper shitstorm on the web.

So what now?

Now I have a choice. I can wait until I am in the mood to clean up this code and produce something useful for public consumption, and in the meantime see my name and my work slandered. Or... if I throw my (nasty) code over the wall, someone will rename or rewrite it, mess up a lot of things, and trash me for not having done corner X or Y; essentially slander me and my work and probably even erase my achievements from history, using more of my work against me... And in the latter case I give people like Jasper and companies like Endless Mobile what they want, in spite of their contemptible actions and statements. Logically, there is only one conclusion possible in that situation.

At the end of the day, I am still the author of this code; it was my time that I burned on it, my life that I wasted on it. I have no obligations to anyone, and am free to do with my code what I want. I cannot be forced into anything. I will therefore release my work, under my own terms, when I want to and when I feel OK doing so.

This whole incident did not exactly motivate me to spend time on cleaning up code and releasing it. At best, it confirmed my views on open source software and certain parts of the X.org community.

This game never was about code or about doing The Right Thing, and never will be.

23 Apr 2015 2:30pm GMT

22 Apr 2015

Tollef Fog Heen: Temperature monitoring using a Beaglebone Black and 1-wire

I've had a half-broken temperature monitoring setup at home for quite some time. It started out with an Atom-based NAS, a USB-serial adapter and a passive 1-wire adapter. It sometimes worked, then stopped working, then started again when poked with a stick. Later, the NAS was moved under the stairs and I put a Beaglebone Black in its old place. The temperature monitoring thereafter never really worked, but I didn't have the time to fix it. Over the last few days, I've managed to get it working again, of course by replacing nearly all the existing components.

I'm using DS18B20 sensors. They're about USD 1 apiece on eBay (when buying small quantities) and seem to work quite OK.

My first task was to address the reliability problems: dropouts and really poor performance. I suspected the passive adapter was problematic, in particular with the wire lengths I'm using, and I therefore wanted to replace it with something else. The BBB has GPIO support, and various blog posts suggested using that. However, I'm running Debian on my BBB, which doesn't have support for DTB overrides, so I needed to patch the kernel DTB. (Apparently, DTB overrides are landing upstream, but obviously not in time for Jessie.)

I've never even looked at Device Tree before, but the structure was reasonably simple, and with a sample override from bonebrews it was easy enough to come up with my patch. This uses pin 11 (yes, 11, not 13; read the bonebrews article for an explanation of the numbering) on the P8 block. This needs to be compiled into a .dtb. I found the easiest way was just to drop the patched .dts into an unpacked kernel tree and then run make dtbs.

Once this works, you need to compile the w1-gpio kernel module, since Debian hasn't yet enabled it. Run make menuconfig, find it under "Device drivers", "1-wire", "1-wire bus master", and build it as a module. I then had to build a full kernel to get the symbol versions right, then build the modules. I think there is, or should be, an easier way to do that, but as I cross-built it on a fast AMD64 machine, I didn't investigate too much.

Insmod-ing w1-gpio then works, but for me, it failed to detect any sensors. Reading the data sheet, it looked like a pull-up resistor on the data line was needed. I had enabled the internal pull-up, but apparently that wasn't enough, so I added a 4.7 kOhm resistor between pin 3 (VDD_3V3) on P9 and pin 11 (GPIO_45) on P8. With that in place, my sensors showed up in /sys/bus/w1/devices and you can read the values using cat.
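
If you'd rather poll the sensors from a script than with cat, here is a minimal Python sketch of the same sysfs reading. The 28- device prefix (the DS18B20 family code) and the two-line w1_slave format, with a CRC status and a t= value in millidegrees, are standard for this sensor, but treat the code as illustrative rather than part of my actual setup:

import glob

def read_ds18b20(device_dir):
    """Read one DS18B20 via sysfs; returns temperature in degrees Celsius."""
    with open(device_dir + "/w1_slave") as f:
        crc_line, data_line = f.read().splitlines()
    if not crc_line.endswith("YES"):  # CRC check failed, reading is unreliable
        raise IOError("bad CRC for " + device_dir)
    # The second line ends in "t=<millidegrees>", e.g. "t=21312"
    return int(data_line.split("t=")[1]) / 1000.0

# DS18B20 devices show up with the family code prefix "28-"
for device in glob.glob("/sys/bus/w1/devices/28-*"):
    print(device.split("/")[-1], read_ds18b20(device))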

In my case, I wanted the data to go into collectd and then on to Graphite. I first tried using an Exec plugin, but never got it to work properly. Using a Python plugin worked much better, and my Graphite installation is now showing me temperatures.
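
My actual plugin isn't reproduced here, but a minimal collectd Python plugin along these lines could do the job. This is a sketch that assumes the sensors are read via sysfs as in the previous snippet; register_read and Values.dispatch are collectd's standard Python plugin API:

import glob

import collectd  # provided by collectd's Python plugin, not installable via pip

def read_temperatures():
    """Read callback: dispatch one temperature value per DS18B20 sensor."""
    for device in glob.glob("/sys/bus/w1/devices/28-*"):
        sensor_id = device.split("/")[-1]
        with open(device + "/w1_slave") as f:
            crc_line, data_line = f.read().splitlines()
        if not crc_line.endswith("YES"):
            continue  # skip readings that fail the CRC check
        temp = int(data_line.split("t=")[1]) / 1000.0
        val = collectd.Values(plugin="onewire",
                              type="temperature",
                              type_instance=sensor_id)
        val.dispatch(values=[temp])

collectd.register_read(read_temperatures)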

Now I just need to add more probes around the house.

The most useful references were the articles linked above. In addition, various searches for DS18B20 pinout and similar, of course.

22 Apr 2015 8:32am GMT

21 Apr 2015

Donnie Berkholz: How to give a great talk, the lazy way

Got a talk coming up? Want it to go well? Here are some starting points.

I give a lot of talks. Often I'm paid to give them, and I regularly get very high ratings or even awards. But every time I listen to people speaking in public for the first time, or maybe the first few times, I think of some very easy ways for them to vastly improve their talks.

Here, I wanted to share my top tips to make your life (and, selfishly, my life watching your talks) much better:

  1. Presenter mode is the greatest invention ever. Use it. If you ignore or forget everything else in this post, remember the rainbows and unicorns of presenter mode. This magical invention keeps the current slide showing on the projector while your laptop shows something different - the current slide, a small image of the next slide, and your slide notes. The last bit is the key. What I put on my notes is the main points of the current slide, followed by my transition to the next slide. Presentations look a lot more natural when you say the transition before you move to the next slide rather than after. More than anything else, presenter mode dramatically cut down on my prep time, because suddenly I no longer had to rehearse. I had seamless, invisible crib notes while I was up on stage.
  2. Plan your intro. Starting strong goes a long way, as it turns out that making a good first impression actually matters. It's time very well spent to literally script your first few sentences. It helps you get the flow going and get comfortable, so you can really focus on what you're saying instead of how nervous you are. Avoid jokes unless most of your friends think you're funny almost all the time. (Hint: they don't, and you aren't.)
  3. No bullet points. Ever. (Unless you're an expert, and you probably aren't.) We've been trained by too many years of boring, sleep-inducing PowerPoint presentations that bullet points equal naptime. Remember presenter mode? Put the bullet points in the slide notes that only you see. If for some reason you think you're the sole exception to this, at a minimum use visual advances/transitions. (And the only good transition is an instant appear. None of that fading crap.) That makes each point appear on-demand rather than all of them showing up at once.
  4. Avoid text-filled slides. When you put a bunch of text in slides, people inevitably read it. And they read it at a different pace than you're reading it. Because you probably are reading it, which is incredibly boring to listen to. The two different paces mean they can't really focus on either the words on the slide or the words coming out of your mouth, and your attendees consequently leave having learned less than either of those options alone would've left them with.
  5. Use lots of really large images. Each slide should be a single concept with very little text, and images are a good way to force yourself to do so. Unless there's a very good reason, your images should be full-bleed. That means they go past the edges of the slide on all sides. My favorite place to find images is a Flickr advanced search for Creative Commons licenses. Google also has this capability within Search Tools. Sometimes images are code samples, and that's fine as long as you remember to illustrate only one concept - highlight the important part.
  6. Look natural. Get out from behind the podium, so you don't look like a statue or give the classic podium death-grip (one hand on each side). You'll want to pick up a wireless slide advancer and make sure you have a wireless lavalier mic, so you can wander around the stage. Remember to work your way back regularly to check on your slide notes, unless you're fortunate enough to have them on extra monitors around the stage. Talk to a few people in the audience beforehand, if possible, to get yourself comfortable and get a few anecdotes of why people are there and what their background is.
  7. Don't go over time. You can go under, even a lot under, and that's OK. One of the best talks I ever gave took 22 minutes of a 45-minute slot, and the rest filled up with Q&A. Nobody's going to mind at all if you use up 30 minutes of that slot, but cutting into their bathroom or coffee break, on the other hand, is incredibly disrespectful to every attendee. This is what watches, and the timer in presenter mode, and clocks, are for. If you don't have any of those, ask a friend or make a new friend in the front row.
  8. You're the centerpiece. The slides are a prop. If people are always looking at the slides rather than you, chances are you've made a mistake. Remember, the focus should be on you, the speaker. If they're only watching the slides, why didn't you just post a link to Slideshare or Speakerdeck and call it a day?

I've given enough talks that I have a good feel on how long my slides will take, and I'm able to adjust on the fly. But if you aren't sure of that, it might make sense to rehearse. I generally don't rehearse, because after all, this is the lazy way.

If you can manage to do those 8 things, you've already come a long way. Good luck!


Tagged: communication, gentoo

21 Apr 2015 3:42pm GMT

Julien Danjou: Gnocchi 1.0: storing metrics and resources at scale

A few months ago, I wrote a long post about what I then called the "Gnocchi experiment". Time passed, and we - me and the rest of the Gnocchi team - continued to work on the project, finalizing it.

It's with great pleasure that we are going to release our first 1.0 version this month, roughly at the same time as the integrated OpenStack projects release their Kilo milestone. The first release candidate, numbered 1.0.0rc1, was released this morning!

The problem to solve

Before I dive into Gnocchi details, it's important to have a good view of what problems Gnocchi is trying to solve.

Most of the IT infrastructures out there consist of sets of resources. These resources have properties: some of them are simple attributes, whereas others might be measurable quantities (also known as metrics).

And in this context, cloud infrastructures are no exception. We talk about instances, volumes, networks… which are all different kinds of resources. The problem arising with the cloud trend is the scalability of storing all this data and being able to query it later, for whatever usage.

What Gnocchi provides is a REST API that allows the user to manipulate resources (CRUD) and their attributes, while preserving the history of those resources and their attributes.

Gnocchi is fully documented and the documentation is available online. We are the first OpenStack project to require that patches include documentation. We want to raise the bar, so we took a stand on that. That's part of our policy, the same way it's part of the OpenStack policy to require unit tests.

I'm not going to paraphrase the whole Gnocchi documentation, which covers things like installation (super easy), but I'll guide you through some basics of the features provided by the REST API. I will show you some examples so you can have a better understanding of what you could leverage using Gnocchi!

Handling metrics

Gnocchi provides a full REST API to manipulate time-series that are called metrics. You can easily create a metric using a simple HTTP request:

POST /v1/metric HTTP/1.1
Content-Type: application/json

{
  "archive_policy_name": "low"
}

HTTP/1.1 201 Created
Location: http://localhost/v1/metric/387101dc-e4b1-4602-8f40-e7be9f0ed46a
Content-Type: application/json; charset=UTF-8

{
  "archive_policy": {
    "aggregation_methods": [
      "std",
      "sum",
      "mean",
      "count",
      "max",
      "median",
      "min",
      "95pct"
    ],
    "back_window": 0,
    "definition": [
      {
        "granularity": "0:00:01",
        "points": 3600,
        "timespan": "1:00:00"
      },
      {
        "granularity": "0:30:00",
        "points": 48,
        "timespan": "1 day, 0:00:00"
      }
    ],
    "name": "low"
  },
  "created_by_project_id": "e8afeeb3-4ae6-4888-96f8-2fae69d24c01",
  "created_by_user_id": "c10829c6-48e2-4d14-ac2b-bfba3b17216a",
  "id": "387101dc-e4b1-4602-8f40-e7be9f0ed46a",
  "name": null,
  "resource_id": null
}


The archive_policy_name parameter defines how the measures that are sent are going to be aggregated. You can also define archive policies using the API, specifying what kind of aggregation period and granularity you want. In this case, the low archive policy keeps 1 hour of data aggregated over 1-second periods and 1 day of data aggregated over 30-minute periods. The functions used for aggregation are the standard mathematical functions - standard deviation, minimum, maximum, … and even the 95th percentile. All of that is obviously customizable, and you can create your own archive policies.

If you don't want to specify the archive policy manually for each metric, you can also create archive policy rules, which will apply a specific archive policy based on the metric name; e.g. metrics matching disk.* will be high-resolution metrics, so they will use the high archive policy.

It's also worth noting that Gnocchi is precise up to the nanosecond and is not tied to the current time. You can manipulate and inject measures that are years old and precise to the nanosecond. You can also inject points with old timestamps (i.e. old compared to the most recent one in the time series) with an archive policy allowing it (see the back_window parameter).

It's then possible to send measures to this metric:

POST /v1/metric/387101dc-e4b1-4602-8f40-e7be9f0ed46a/measures HTTP/1.1
Content-Type: application/json

[
  {
    "timestamp": "2014-10-06T14:33:57",
    "value": 43.1
  },
  {
    "timestamp": "2014-10-06T14:34:12",
    "value": 12
  },
  {
    "timestamp": "2014-10-06T14:34:20",
    "value": 2
  }
]

HTTP/1.1 204 No Content


These measures are synchronously aggregated and stored in the configured storage backend. Our most scalable storage drivers for now are based on either Swift or Ceph, both of which are scalable object storage systems.
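
If you'd rather script against the API than handcraft HTTP, the same two calls are a few lines of Python. This is only a sketch using the requests library against the endpoints shown above; the base URL is a placeholder, and any authentication (e.g. Keystone tokens) is left out:

import requests

GNOCCHI = "http://localhost"  # placeholder endpoint, adjust for your deployment

# Create a metric using the "low" archive policy
r = requests.post(GNOCCHI + "/v1/metric",
                  json={"archive_policy_name": "low"})
r.raise_for_status()
metric_id = r.json()["id"]

# Push a few measures to it; the API answers "204 No Content" on success
measures = [
    {"timestamp": "2014-10-06T14:33:57", "value": 43.1},
    {"timestamp": "2014-10-06T14:34:12", "value": 12},
]
requests.post(GNOCCHI + "/v1/metric/%s/measures" % metric_id,
              json=measures).raise_for_status()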

It's then possible to retrieve these values:

GET /v1/metric/387101dc-e4b1-4602-8f40-e7be9f0ed46a/measures HTTP/1.1

HTTP/1.1 200 OK
Content-Type: application/json; charset=UTF-8

[
  [
    "2014-10-06T14:30:00.000000Z",
    1800.0,
    19.033333333333335
  ],
  [
    "2014-10-06T14:33:57.000000Z",
    1.0,
    43.1
  ],
  [
    "2014-10-06T14:34:12.000000Z",
    1.0,
    12.0
  ],
  [
    "2014-10-06T14:34:20.000000Z",
    1.0,
    2.0
  ]
]


As older Ceilometer users might notice here, metrics only store points and values; nothing fancy such as metadata anymore.

By default, values eagerly aggregated using mean are returned for all supported granularities. You can obviously specify a time range or a different aggregation function using the aggregation, start and stop query parameters.
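
As a sketch of what those query parameters look like from Python (again assuming the requests library; the metric ID is the one from the earlier example):

import requests

GNOCCHI = "http://localhost"  # placeholder endpoint
metric_id = "387101dc-e4b1-4602-8f40-e7be9f0ed46a"  # the metric created above

# Ask for "max" aggregates within a time window instead of the default mean
r = requests.get(GNOCCHI + "/v1/metric/%s/measures" % metric_id,
                 params={"aggregation": "max",
                         "start": "2014-10-06T14:33",
                         "stop": "2014-10-06T14:35"})
r.raise_for_status()
# Each returned point is a [timestamp, granularity, value] triple
for timestamp, granularity, value in r.json():
    print(timestamp, granularity, value)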

Gnocchi also supports doing aggregation across aggregated metrics:

GET /v1/aggregation/metric?metric=65071775-52a8-4d2e-abb3-1377c2fe5c55&metric=9ccdd0d6-f56a-4bba-93dc-154980b6e69a&start=2014-10-06T14:34&aggregation=mean HTTP/1.1

HTTP/1.1 200 OK
Content-Type: application/json; charset=UTF-8

[
  [
    "2014-10-06T14:34:12.000000Z",
    1.0,
    12.25
  ],
  [
    "2014-10-06T14:34:20.000000Z",
    1.0,
    11.6
  ]
]


This computes the mean of the means for the metrics 65071775-52a8-4d2e-abb3-1377c2fe5c55 and 9ccdd0d6-f56a-4bba-93dc-154980b6e69a, starting on 6 October 2014 at 14:34 UTC.

Indexing your resources

Another object and concept that Gnocchi provides is the ability to manipulate resources. There is a basic resource type, called generic, which has very few attributes. You can extend this type to specialize it, and that's what Gnocchi does by default, providing resource types known from OpenStack such as instance, volume, network or even image.

POST /v1/resource/generic HTTP/1.1
Content-Type: application/json

{
  "id": "75C44741-CC60-4033-804E-2D3098C7D2E9",
  "project_id": "BD3A1E52-1C62-44CB-BF04-660BD88CD74D",
  "user_id": "BD3A1E52-1C62-44CB-BF04-660BD88CD74D"
}

HTTP/1.1 201 Created
Location: http://localhost/v1/resource/generic/75c44741-cc60-4033-804e-2d3098c7d2e9
ETag: "e3acd0681d73d85bfb8d180a7ecac75fce45a0dd"
Last-Modified: Fri, 17 Apr 2015 11:18:48 GMT
Content-Type: application/json; charset=UTF-8

{
  "created_by_project_id": "ec181da1-25dd-4a55-aa18-109b19e7df3a",
  "created_by_user_id": "4543aa2a-6ebf-4edd-9ee0-f81abe6bb742",
  "ended_at": null,
  "id": "75c44741-cc60-4033-804e-2d3098c7d2e9",
  "metrics": {},
  "project_id": "bd3a1e52-1c62-44cb-bf04-660bd88cd74d",
  "revision_end": null,
  "revision_start": "2015-04-17T11:18:48.696288Z",
  "started_at": "2015-04-17T11:18:48.696275Z",
  "type": "generic",
  "user_id": "bd3a1e52-1c62-44cb-bf04-660bd88cd74d"
}


The resource is created with the UUID provided by the user. Gnocchi handles the history of the resource, and that's what the revision_start and revision_end fields are for: they indicate the lifetime of this revision of the resource. The ETag and Last-Modified headers are also unique to this resource revision and can be used in a subsequent request using the If-Match or If-None-Match headers, for example:

GET /v1/resource/generic/75c44741-cc60-4033-804e-2d3098c7d2e9 HTTP/1.1
If-None-Match: "e3acd0681d73d85bfb8d180a7ecac75fce45a0dd"

HTTP/1.1 304 Not Modified


This is useful for synchronizing and updating any view of the resources you might have in your application.
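
Such a synchronization check could look like this in Python. This only illustrates the standard HTTP conditional-request mechanism described above, using the requests library; the URL and ETag are taken from the earlier example:

import requests

url = "http://localhost/v1/resource/generic/75c44741-cc60-4033-804e-2d3098c7d2e9"
etag = '"e3acd0681d73d85bfb8d180a7ecac75fce45a0dd"'  # from the creation response

r = requests.get(url, headers={"If-None-Match": etag})
if r.status_code == 304:
    pass  # our cached view of the resource is still current
else:
    r.raise_for_status()
    resource = r.json()       # refresh the local view...
    etag = r.headers["ETag"]  # ...and remember the new revision tag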

You can use the PATCH HTTP method to modify properties of the resource, which will create a new revision of the resource. The history of the resources is available via the REST API, obviously.

The metrics property of the resource allows you to link metrics to a resource. You can link existing metrics or create new ones dynamically:

POST /v1/resource/generic HTTP/1.1
Content-Type: application/json

{
  "id": "AB68DA77-FA82-4E67-ABA9-270C5A98CBCB",
  "metrics": {
    "temperature": {
      "archive_policy_name": "low"
    }
  },
  "project_id": "BD3A1E52-1C62-44CB-BF04-660BD88CD74D",
  "user_id": "BD3A1E52-1C62-44CB-BF04-660BD88CD74D"
}

HTTP/1.1 201 Created
Location: http://localhost/v1/resource/generic/ab68da77-fa82-4e67-aba9-270c5a98cbcb
ETag: "9f64c8890989565514eb50c5517ff01816d12ff6"
Last-Modified: Fri, 17 Apr 2015 14:39:22 GMT
Content-Type: application/json; charset=UTF-8

{
  "created_by_project_id": "cfa2ebb5-bbf9-448f-8b65-2087fbecf6ad",
  "created_by_user_id": "6aadfc0a-da22-4e69-b614-4e1699d9e8eb",
  "ended_at": null,
  "id": "ab68da77-fa82-4e67-aba9-270c5a98cbcb",
  "metrics": {
    "temperature": "ad53cf29-6d23-48c5-87c1-f3bf5e8bb4a0"
  },
  "project_id": "bd3a1e52-1c62-44cb-bf04-660bd88cd74d",
  "revision_end": null,
  "revision_start": "2015-04-17T14:39:22.181615Z",
  "started_at": "2015-04-17T14:39:22.181601Z",
  "type": "generic",
  "user_id": "bd3a1e52-1c62-44cb-bf04-660bd88cd74d"
}


Haystack, needle? Find!

With such a system, it becomes very easy to index all your resources, meter them and retrieve this data. What's even more interesting is querying the system to find and list the resources you are interested in!

You can search for a resource based on any field, for example:

POST /v1/search/resource/instance HTTP/1.1
Content-Type: application/json

{
  "=": {
    "user_id": "bd3a1e52-1c62-44cb-bf04-660bd88cd74d"
  }
}


That query will return a list of all resources owned by the user_id bd3a1e52-1c62-44cb-bf04-660bd88cd74d.
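
Scripted from Python, that search would look something like the sketch below (assuming the requests library and the local endpoint used throughout; the query document is exactly the one shown above):

import requests

# Find all instance resources owned by this user_id
query = {"=": {"user_id": "bd3a1e52-1c62-44cb-bf04-660bd88cd74d"}}

r = requests.post("http://localhost/v1/search/resource/instance", json=query)
r.raise_for_status()
for resource in r.json():  # one dict per matching resource
    print(resource["id"], resource["started_at"])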

You can do fancier queries such as retrieving all the instances started by a user this month:

POST /v1/search/resource/instance HTTP/1.1
Content-Type: application/json
Content-Length: 113

{
  "and": [
    {
      "=": {
        "user_id": "bd3a1e52-1c62-44cb-bf04-660bd88cd74d"
      }
    },
    {
      ">=": {
        "started_at": "2015-04-01"
      }
    }
  ]
}


And you can do even fancier queries than the fancy ones (still following?). What if we wanted to retrieve all the instances that were on host foobar on the 15th of April and that already had an hour of uptime? Let's ask Gnocchi to look in the history!

POST /v1/search/resource/instance?history=true HTTP/1.1
Content-Type: application/json
Content-Length: 113

{
  "and": [
    {
      "=": {
        "host": "foobar"
      }
    },
    {
      ">=": {
        "lifespan": "1 hour"
      }
    },
    {
      "<=": {
        "revision_start": "2015-04-15"
      }
    }
  ]
}


I could also mention the fact that you can search for values in metrics. One feature that I will very likely include in Gnocchi 1.1 is the ability to search for resources whose specific metrics match some value, for example the ability to search for instances whose CPU consumption was over 80% during a month.

Cherries on the cake

While Gnocchi is well integrated with and based on common OpenStack technology, please do note that it is completely able to function without any other OpenStack component and is pretty straightforward to deploy.

Gnocchi also implements a full RBAC system based on the OpenStack standard oslo.policy, which allows pretty fine-grained control of permissions.

There is also some ongoing work to have HTML rendering when browsing the API with a Web browser. While still simple, we'd like to have a minimal Web interface served on top of the API for the same price!

The Ceilometer alarm subsystem supports Gnocchi with the Kilo release, meaning you can use it to trigger actions when a metric value crosses a threshold. And OpenStack Heat also supports auto-scaling your instances based on Ceilometer+Gnocchi alarms.

And there are a few more API calls that I didn't talk about here, so don't hesitate to take a peek at the full documentation!

Towards Gnocchi 1.1!

Gnocchi is a different beast in the OpenStack community. It is under the umbrella of the Ceilometer program, but it's one of the first projects that is not part of the (old) integrated release. Therefore we decided to have a release schedule not directly tied to OpenStack's, and we'll release more often than the rest of the old OpenStack components - probably once every two months or so.

What's coming next is closer integration with Ceilometer (e.g. moving the dispatcher code from Gnocchi to Ceilometer) and probably more features as we receive more requests from our users. We are also exploring different backends such as InfluxDB (for storage) or MongoDB (for indexing).

Stay tuned, and happy hacking!

21 Apr 2015 3:00pm GMT

20 Apr 2015

Christian Schaller: Fedora Workstation: More than the sum of its parts

So I came across this very positive review of Fedora Workstation on linux.com, although it was billed as a review of GNOME 3.16. Which is of course correct, in the sense that the desktop we are going to ship in Fedora Workstation 22 is GNOME 3.16. But I wanted to take the opportunity to highlight that Fedora Workstation is a complete and integrated operating system. As I mentioned in a blog post about Fedora Workstation last April, the core idea for Fedora Workstation is to stop treating the operating system as a bag of parts and instead look at it as a whole, because if we are to drain the swamp here we need to solve the major issues people face with their desktop, regardless of whether that issue is in the kernel, the graphics drivers, glibc or elsewhere in the system. We cannot just look in isolation at the small subset of packages that provides the chrome for your user interface.

This is why we are working on reliable firmware upgrades for your UEFI motherboard by participating in the UEFI working group and adding functionality in GNOME Software to handle doing firmware updates.

This is why we recently joined Khronos to make sure the standards for doing 3D on Linux are good and open source friendly.

This is why we have been working so hard on improving AppData metadata coverage, well beyond the confines of 'GNOME' software.

This is why we have Richard Hughes and Owen Taylor working on how we can improve battery life when running Fedora or RHEL on laptops.

This is why we created dnf to replace yum, to get a fast and efficient package update system.

This is why we are working on an Adwaita theme for Qt.

And this is why we are pushing hard forward with a lot of other efforts like Wayland, libinput, Fleet Commander, Boxes and more.

So when you look at the user experience you get on Fedora Workstation, remember that it is not just a question of which version of GNOME we are shipping, but it is the fact that we are working hard on putting together a tightly vertically integrated and tested system from the kernel up to core desktop applications.

Anyone who has been using Fedora for a long while knows that this was a major change in philosophy and approach for the project. Up to the release of the three new products in Fedora 21, Fedora was very much defined by the opposite, being all about the lego blocks, which contributed to the image of Fedora as a bleeding-edge system where you should be prepared to do a lot of bleeding and where you probably wanted to keep your toolbox with you at all times in case something broke. So I have to say that I am mightily impressed by how the Fedora community has taken to this major change, where we are now instead focusing our efforts on our three core products and putting a lot of effort into creating something that is polished and reliable, and which aims to be leading edge instead of bleeding edge.

So with all this in mind I was a little disappointed when the reviewer writing the article in question ended his review by saying he was now waiting for GNOME 3.16 to appear in Ubuntu GNOME, because there is no guarantee that he would get the same overall user experience in Ubuntu GNOME that we have developed for Fedora Workstation, which is the user experience his review reflects.

Anyway, I thought this could be a good opportunity to ask the wider community a question, especially if you are using GNOME on a distribution other than Fedora: what are we still missing at this point for you to consider making the switch to Fedora Workstation? I know that for some of you the answer might be as simple as 'worn-in shoes fit best', but anything you might have beyond that would be great to hear.
I can't promise that we will be able to implement every suggestion you add to this blog post, but I do promise that we will review and consider every suggestion you provide and try to see how it can fit into our development plans going forward.

20 Apr 2015 2:49pm GMT

Daniel Vetter: Neat drm/i915 stuff for 4.1

With Linux kernel v3.20^W v4.0 already out the door, my overview of what's in 4.1 for drm/i915 is way overdue.

First, looking at the modeset side of the driver, the big overall thing is all the work to convert i915 to atomic. In this release there's code from Ander Conselvan de Oliveira to have a struct drm_atomic_state allocated for all the legacy modeset code paths in the driver. With that we can switch the internals to start using atomic state objects and gradually convert everything on the modeset side over to atomic. Matt Roper, on the other hand, was busy preparing the plane code to land the atomic watermark update code. Damien has reworked the initial plane configuration code used for fastboot, which also needs to be adapted to the atomic world.


For more specific feature work, there's DRRS (dynamic refresh rate switching) from Sonika, Vandana and more people, which is now enabled where supported. The idea is to reduce the refresh rate of the panel to save power when nothing changes on the screen. And Paulo Zanoni has provided patches to improve the FBC code; hopefully we can enable that by default soon too. Under the hood, Ville has refactored the DP link rate computation and the sprite color key handling, both to prepare for future work and platform enabling. Intermediate link rate support for eDP 1.4 from Sonika was built on top of this. Imre Deak has also reworked the Baytrail/Braswell DPLL code to prepare for Broxton.

Speaking of platforms, Skylake has gained runtime PM support from Damien, and RPS (render turbo and sleep states) from Akash. Another SKL exclusive is support for scanout of Y-tiled buffers and for scanning out buffers rotated by 90°/270° (instead of just normal and rotated by 180°) from Tvrtko and Damien. Well, the rotation support didn't quite land yet, but Tvrtko's support for the special pagetable binding needed for that feature did, in the form of rotated GGTT views. Finally, Nick Hoath and Damien also submitted a lot of workaround patches for SKL.

Moving on to Braswell/Cherryview, there have been tons of fixes to the DPLL and watermark code from Vijay and Ville, and BSW support left the preliminary hardware support stage. And also for the SoC platforms, Chris Wilson has supplied a pile of patches to tune the RPS code and bring it more in line with the big core platforms.

On the GT side, the big ongoing work is dynamic pagetable allocations from Michel Thierry, based upon patches from Ben Widawsky. With per-process address spaces, and even more so with the big address spaces gen8+ supports, it would be wasteful if not impossible to allocate pagetables for the entire address space upfront. But changing the code to handle another possible memory allocation failure point needed a lot of work. Most of that has landed now, but the benefits of enabling bigger address spaces haven't made it into 4.1.

Another big piece of work is XenGT client-side support from Yu Zhang and team. This is paravirtualization to allow virtual machines to tap into the render engines without requiring exclusive access, but also with a lot less overhead than fully virtual hardware like VMware or virgil would provide. The host-side code has also been submitted already, but needs a bit more work still to integrate cleanly into the driver.

And of course there's been lots of other smaller work all over, as usual: internal documentation for the shrinker, more dead UMS code removed, the vblank interrupt code cleaned up, and more.

20 Apr 2015 3:57am GMT

18 Apr 2015

Christian Schaller: Congratulations to Endless Computer

So the Endless Computer Kickstarter just succeeded in its funding goal of 100K USD. A big heartfelt congratulations to the whole team, and I am looking forward to receiving my system. For everyone else out there, I strongly recommend getting in on their Kickstarter; not only do you get a cool-looking computer with a really nice Linux desktop, you are helping a company forward that has the potential to take the Linux desktop to the next level. And be aware that the computer is a standard computer (yet very cool looking) at the end of the day, so if you want to install Fedora Workstation on it you can :)

18 Apr 2015 3:46pm GMT

17 Apr 2015

Christian Schaller: Red Hat joins Khronos

So Red Hat is now formally a member of the Khronos Group, which many of you probably know as the shepherd of the OpenGL standard. We haven't gotten all the little bits sorted yet, like getting our logo on the Khronos website, but our engineers are signing up for the various Khronos working groups etc. as we speak.

So the reason we are joining is all the important changes that are happening in graphics and GPU compute these days, and our wish to have more direct input into the direction of some of these technologies. Our efforts are likely to focus on improving the OpenGL specification by proposing some new extensions to OpenGL, and of course on providing input and help with moving the new Vulkan standard forward.

So well-known Red Hat engineers such as Dave Airlie, Adam Jackson, Rob Clark and others will from now on play a much more direct role in helping shape the future of 3D graphics standards. And we really look forward to working with our friends and colleagues at Nvidia, AMD, Intel, Valve and more inside Khronos.

17 Apr 2015 7:19pm GMT

16 Apr 2015

Christian Schaller: kdbus discussion

I am following the discussion caused by Greg Kroah-Hartman requesting that kdbus be pulled into the next kernel release. First of all, my hat off to Greg for his persistence and for staying civil. There have already been quite a few posts to the thread coming close to attempts at character assassination, and a lot of emails just adding more noise, but no signal.

One point I feel is missing from the discussion, though, is the question of not making the perfect the enemy of the good. A lot of the posters are saying 'hey, you should write something perfect here instead of what you have currently'. Which sounds reasonable on some level, but when you think about it, it is more a deflection strategy than a real suggestion. First of all, there is no such thing as perfect. And secondly, if there were, how long would it take to provide the theoretically perfect thing? 2 more years, 4 years, 10 years?

Speaking as someone involved in making an operating system, I can say that we would have liked to have had kdbus 5 years ago, and would much prefer to get kdbus in the next few months than to get something 'perfect' 5 years from now.

So you can say 'hey, you are creating a strawman here, nobody used the word perfect in the discussion'. Sure, nobody used that word, but a lot of messages were written about how 'something better' should be created instead. Yet, based on where these people seemed to be coming from, the question I then ask is: better for whom? Better for the developers who are already using dbus in their applications or desktops? Better for a kernel developer who is never going to use it? Better for someone doing code review? Better for someone who doesn't need the features of dbus, but who would want something else?

And what is 'better' anyway? Greg keeps calling for concrete technical feedback, but at the end of the day there are a lot of cases where the 'best' technical solution, to the degree you can measure that objectively, isn't actually the best. I mean, if I came up with a new format for storing multimedia on an optical disc, one which from a technical perspective is 'better' than the current Blu-Ray spec, that doesn't mean it is actually better for the general public. Getting a non-standard optical disc that will not play in your home Blu-Ray player isn't better for 99.999% of people, regardless of the technical merit of the on-disc data format.

Something can actually be 'better' simply because it is based on something that already exists, something which has a lot of people already using it, which lets people quickly extend what they are already doing with the new functionality without needing a rewrite, and which is available 'now' as opposed to at some undefined time in the future. And that is where I feel a lot of the debaters on the lkml are dropping the ball in this discussion; they just keep asking for a 'better solution' to the challenges of a space they often don't have any personal experience developing in, because kdbus doesn't conform to how they would implement a kernel IPC mechanism for the kind of problems they are used to trying to solve.

There have also been a lot of arguments about the 'design' of kdbus and dbus, with a lot of lacking concreteness to them, and mostly coming from people very far removed from working on the desktop and other relevant parts of userspace. Which at the end of the day boils down to trying to make the litmus test 'you have to prove to me that making a better design is impossible', and I think anyone should be able to agree that if that were the test for adding anything to the Linux kernel or elsewhere, then very little software would ever get added anywhere. In fact, if we were to hold to that kind of argumentation, we might as well leave software development behind and move to an online religion discussion forum, tossing the good ol' 'prove to me that God doesn't exist' into the ring.

So in the end I hope Linus, who is the final arbiter here, doesn't end up making the 'perfect' the enemy of the good.

16 Apr 2015 6:55pm GMT

14 Apr 2015

feedplanet.freedesktop.org

Christian Schaller: Hating on the Internet

So George R.R. Martin has spent quite a bit of time, and even more energy, over the last week responding to this thing called Puppygate. Which, while not directly related, has some commonalities with last year's gamergate story.

Yet, what strikes me skimming through a plethora of online discussions is that we seem to have perfected name calling as an argumentation technique on the Internet. And since the terminology jungle can be a bit confusing, I thought I should break it down by putting together this list of common names, so that you know which one you are ;) I am trying for a tongue-in-cheek summary here, but if I fail, at least you know what names to call me!

I am sure there are lots more; these are just a collection of some I've seen recently. So while I think there are real issues behind many of these examples, the Internet sadly seems best at turning anything into a binary question, which surprisingly turns out to not be a super method for finding common ground. And as it turns out, if you can attach a name to these things, you can turn them into binary questions even faster!

Not that these things are specifically tied to the Internet; it has been a long time since I last saw any kind of political debate where there felt to be any time or interest in bringing the nuances into the discussion. We do live in the age of bumper sticker politics, after all.

Anyway, enough musing about the sad state of public discourse :)

14 Apr 2015 6:57pm GMT

07 Apr 2015

feedplanet.freedesktop.org

Matt Turner: Combining constants in i965 fragment shaders

On Intel's Gen graphics, three-source instructions like MAD and LRP cannot have constants as arguments. When support for MAD instructions was introduced with Sandybridge, we assumed the choice between a MOV+MAD and a MUL+ADD sequence was inconsequential, so we chose to perform the multiply and add operations separately. Revisiting that assumption has uncovered some interesting things about the hardware and has led us to some pretty nice performance improvements.

On Gen 7 hardware (Ivybridge, Haswell, Baytrail), multiplies and adds without immediate value arguments can be co-issued, meaning that multiple instructions can be issued from the same execution unit in the same cycle. MADs, never having immediates as sources, can always be co-issued. Considering that, we should prefer MADs, but a typical vec4 * vec4 + vec4(constant) pattern would lead to three duplicate (four total) MOV imm instructions.

mov(8)  g10<1>F    1.0F
mov(8)  g11<1>F    1.0F
mov(8)  g12<1>F    1.0F
mov(8)  g13<1>F    1.0F
mad(8)  g40<1>F    g10<8,8,1>F   g20<8,8,1>F   g30<8,8,1>F
mad(8)  g41<1>F    g11<8,8,1>F   g21<8,8,1>F   g31<8,8,1>F
mad(8)  g42<1>F    g12<8,8,1>F   g22<8,8,1>F   g32<8,8,1>F
mad(8)  g43<1>F    g13<8,8,1>F   g23<8,8,1>F   g33<8,8,1>F

Should be easy to clean up, right? We should simply combine those 1.0F MOVs and modify the MAD instructions to access the same register. Well, conceptually yes, but in practice not quite.

Since the i965 driver's fragment shader backend doesn't use static single assignment form (it's on our TODO list), our common subexpression elimination pass has to emit a MOV instruction when combining instructions. As a result, performing common subexpression elimination on immediate MOVs would undo constant propagation and the compiler's optimizer would go into an infinite loop. Not what you wanted.
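
The failure mode is easiest to see from the shape of the optimization loop itself. This is only a schematic sketch with invented pass names, not the real Mesa code:

#include <stdbool.h>

struct shader;

/* Stand-ins for the real passes; each returns true if it changed anything. */
bool opt_cse(struct shader *s);
bool opt_constant_propagation(struct shader *s);

void optimize(struct shader *s)
{
    bool progress;
    do {
        progress = false;
        progress |= opt_cse(s);                  /* hoists 1.0F into a register... */
        progress |= opt_constant_propagation(s); /* ...and propagates it right back */
    } while (progress); /* never reaches false if two passes undo each other */
}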

Instead, I wrote a pass that scans the instruction list after the main optimization loop and creates a list of immediate values that are used. If an immediate value is used by a 3-source instruction (a MAD or a LRP) or at least four times by an instruction that can co-issue (ADD, MUL, CMP, MOV) then it's put into a register and sourced from there.
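
In rough terms, the promotion decision looks like this. The struct and helper are illustrative simplifications of the pass described above, not the actual Mesa code:

#include <stdbool.h>

/* Usage statistics for one immediate value, gathered while scanning the
 * instruction list after the main optimization loop (illustrative only). */
struct imm_use {
    float value;
    int uses_by_3src;    /* MAD/LRP uses; these can never take an immediate */
    int uses_by_coissue; /* ADD/MUL/CMP/MOV uses that could then co-issue   */
};

static bool should_promote(const struct imm_use *imm)
{
    /* Promote to a register if a 3-source instruction needs the value at
     * all, or if at least four co-issueable instructions would benefit. */
    return imm->uses_by_3src > 0 || imm->uses_by_coissue >= 4;
}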

But there's still room for improvement. Each general register can store 8 floats, and instead of storing 8 separate constants in each, we're storing a single constant 8 times (and on SIMD16, 16 times!). Fixing that wasn't hard, and it significantly reduces register usage - we now only use one register for each 8 immediate values. Using a special vector-float immediate type we can even load four floating-point values in a single instruction.
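
The packing step then amounts to handing out one float channel per distinct constant instead of a whole register, along these lines (again an invented sketch):

/* Assign each promoted constant the next free channel, so one register
 * holds up to 8 distinct values instead of one value replicated 8 times. */
struct packed_loc { int reg; int channel; };

static struct packed_loc pack_next_constant(int *next_slot)
{
    struct packed_loc loc = { *next_slot / 8, *next_slot % 8 };
    (*next_slot)++;
    return loc;
}

Each consuming instruction can then read the value back with a scalar region that replicates that single channel across the execution width.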

With that in place, we can now always emit MAD instructions.

I'm pretty pleased with the results. Without using the New Intermediate Representation (NIR), the shader-db results are:

total instructions in shared programs: 5895414 -> 5747578 (-2.51%)
instructions in affected programs: 3618111 -> 3470275 (-4.09%)

And with NIR (that already unconditionally emits MAD instructions):

total instructions in shared programs: 7992936 -> 7772474 (-2.76%)
instructions in affected programs: 3738730 -> 3518268 (-5.90%)

Effects on a WebGL microbenchmark

In December, I checked what effect my constant combining pass would have on a WebGL procedural noise demo. The demo generates an effect ("noise") that looks like a ball of fire. Its fragment shader contains a ton of instructions but no texturing operations. We're currently able to compile the program in SIMD8 without spilling any registers, but at a cost of scheduling the instructions very badly.

Noise Demo in action

The effects the constant combining pass has on this demo are really interesting, and it actually gives me evidence that some of the ideas I had for the pass are valid, namely that co-issuing instructions is worth a little extra register pressure.

  1. 1.00x FPS of baseline - 3123 instructions - baseline
  2. 1.09x FPS of baseline - 2841 instructions - after promoting constants only if used by more than 2 MADs

Going from no-constant-combining to restricted-constant-combining gives us a 9% increase in frames per second for a 9% instruction count reduction. We're totally limited by fragment shader performance.

  3. 1.46x FPS of baseline - 2841 instructions - after promoting any constant used by a MAD

Going from step 2 to 3 though is interesting. The instruction count doesn't change, but we reduced register pressure sufficiently that we can now schedule instructions better without spilling (SCHEDULE_PRE, instead of SCHEDULE_PRE_NON_LIFO) - a 33% speed up just by rearranging instructions.

  4. 1.62x FPS of baseline - 2852 instructions - after promoting constants used by at least 4 co-issueable instructions

I was worried that we weren't going to be able to measure any performance difference from pulling constants out of co-issueable instructions, but we definitely get a nice improvement here: about a 10% increase in frames per second.

As an aside, I did an experiment to see what would happen if we used SCHEDULE_PRE and spilled registers anyway (I added a couple of extra instructions to increase register pressure over the threshold). I changed the window size to 2048x2048 and rendered a fixed number of frames.

So there's some good evidence that the cure is worse than the disease. Of course this demo doesn't do any texturing, so memory bandwidth is not at a premium.

  5. 1.76x FPS of baseline - 2609 instructions - ???

I ran the demo to see if we'd made any changes in the last two months and was pleasantly surprised to find that we'd cut another 9% of instructions. I have no idea what caused it, but I'll take it! Combined with everything else, we're up to a 76% performance improvement.

Where's the code

The Mesa patches that implement the constant combining pass were committed (commit bb33a31c) and will be in the next major release (presumably version 10.6).

If any of this sounds interesting enough that you'd like to do it for a living, feel free to contact me. My team at Intel is responsible for the open source 3D driver in Mesa and is looking for new talent.

07 Apr 2015 4:00am GMT

02 Apr 2015

feedplanet.freedesktop.org

Bastien Nocera: JdLL 2015

Presentation and conferencing

Last weekend, in the Salle des Rancy in Lyon, GNOME folks (Fred Peters, Mathieu Bridon and myself) set up our booth at the top of the stairs, the space graciously offered by Ubuntu-FR and Fedora being a tad bit small. The JdLL were starting.

We gave away a few GNOME 3.14 Live and install DVDs (more on that later), discussed much-loved features, and hated bugs, and how to report them. A very pleasant experience all-in-all.



On Sunday afternoon, I did a small presentation about GNOME's 15 years: talking about the upheaval, dragging kernel drivers and OS components kicking and screaming to work as their APIs say they should, presenting GNOME 3.16's new features, and teasing about upcoming GNOME 3.18 ones.

During the Q&A, we had a few folks more than interested in support for tablets and convertible devices (such as the Microsoft Surface, and Asus T100). Hopefully, we'll be able to make the OS support good enough for people to be able to use any Linux distribution on those.

Sideshow with the Events box

Due to scheduling errors on my part, we ended up with the "v1" events box for our booth. I made a few changes to the box before we used it:

The events box is still in Lyon, until I receive some replacement hardware.

The machine is 7 years old (nearly 8!) and only had 512MB of RAM. After the 1GB upgrade, the machine was usable, and many people were impressed by the speed of GNOME on a legacy machine like that (probably more so than by a brand new one stuttering because of a driver bug, for example).

This makes you wonder what the use for "lightweight" desktop environments is, when a lot of the features are either punted to helpers that GNOME doesn't need or not implemented at all (an old CPU and no 3D driver is pretty much the only use case for those).

I'll be putting a small SSD into the demo machine, to give it another speed boost. We'll also be needing a new padlock, after an emergency metal saw attack was necessary on Sunday morning. Five different folks tried to open the lock with the code read off my email, to no avail. Did we accidentally change the combination? We'll never know.

New project, ish

For demo machines, especially newly installed ones, you'll need some content to demo applications with. This is my first attempt at uniting GNOME's demo content for release-notes screenshots with some additional content that's free to redistribute. The repository will eventually move to gnome.org, obviously.

Thanks

The new keyboard and mouse, monitor, padlock, and SSD (and my time) were graciously sponsored by Red Hat.

02 Apr 2015 11:58am GMT

Daniel Vetter: Community Code of Conduct for intel-gfx

[This is a cross-post from the mail I just sent out to intel-gfx.]

Codes of conduct seem to be in the news a bit recently, and I realized that I've never really documented how we run things. It's different from the kernel's overall CodeOfConflict and also differs from the official Intel/OTC one in small details about handling issues. And for completeness there's also the Xorg Foundation event policy. Anyway, I think this is worth clarifying, so here it goes.

It's simple: Be respectful, open and excellent to each other.

Which doesn't mean we want to sacrifice quality to be nice. Striving for technical excellence very much doesn't exclude being excellent to someone else, and in our experience it tends to go hand in hand.

Unfortunately things go south occasionally. So if you feel threatened, personally abused or otherwise uncomfortable, even and especially when you didn't participate in a discussion yourself, then please raise this in private with the drm/i915 maintainers (currently Daniel Vetter and Jani Nikula; see MAINTAINERS for contact information). And the "in private" part is important: humans screw up, and disciplining minor fumbles by tarnishing someone's google-able track record forever is out of proportion.

Still there are some teeth to this code of conduct:

1. First time around minor issues will be raised in private.

2. On repeat cases a public reply in the discussion will enforce that respectful behavior is expected.

3. We'll ban people who don't get it.

And severe cases will be escalated much quicker.

This applies to all community communication channels (irc, mailing list and bugzilla). And as mentioned, this really is just a public clarification of the rules already in place - you can't see that, though, since we never had to go further than step 1.

Let's keep it at that.

And in case you have a problem with an individual drm/i915 maintainer and don't want to raise it with the other one there's the Xorg BoD, linux foundation TAB and the drm upstream maintainer Dave Airlie.

02 Apr 2015 7:53am GMT

30 Mar 2015

feedplanet.freedesktop.org

Matthias Klumpp: Limba Project: Another progress report

And once again, it's time for another Limba blogpost :-)

Limba is a solution to install 3rd-party software on Linux, without interfering with the distribution's native package manager. It can be useful to try out different software versions, use newer software on a stable OS release or simply to obtain software which does not yet exist for your distribution.

Limba works distribution-independent, so software authors only need to publish their software once for all Linux distributions.

I recently released version 0.4, with which all the most important features you would expect from a software manager are complete. This includes installing & removing packages, GPG-signing of packages, package repositories, package updates etc. Using Limba is still a bit rough, but most things work pretty well already.

So, it's time for another progress report. Since a FAQ-like list is easier to digest than a long blogpost, I'll go with that format again. So, let's address one important general question first:

How does Limba relate to the GNOME Sandboxing approach?

(If you don't know about GNOME's sandboxes, take a look at the GNOME Wiki - Alexander Larsson also blogged about it recently.)

First of all: there is no rivalry here and no NIH syndrome involved. Limba and GNOME's sandboxes (XdgApp) are different concepts, which both have their place.

The main difference between the two projects is the handling of runtimes. A runtime is the set of shared libraries and other shared resources applications use. This includes libraries like GTK+/Qt5/SDL/libpulse etc. XdgApp applications have one big runtime they can use, built with OSTree. This runtime is static and will not change; it will only receive critical security updates. A runtime in XdgApp is provided by a vendor like GNOME as a compilation of multiple individual libraries.

Limba, on the other hand, generates runtimes on the target system on-the-fly out of several subcomponents with dependency-relations between them. Each component can be updated independently, as long as the dependencies are satisfied. The individual components are intended to be provided by the respective upstream projects.
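
Conceptually, assembling such a runtime is a recursive walk over the dependency graph, something like this invented sketch (not the actual Limba code; a real resolver would also deduplicate components and check version constraints):

struct component {
    const char *id;
    struct component **deps; /* NULL-terminated list of dependencies */
};

/* Collect a component and everything it depends on into the runtime. */
static void collect_runtime(struct component *c,
                            void (*add_to_runtime)(struct component *))
{
    add_to_runtime(c); /* e.g. link this component into the runtime dir */
    for (struct component **d = c->deps; d && *d; d++)
        collect_runtime(*d, add_to_runtime);
}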

Both projects have their individual upsides and downsides: while the static runtime of XdgApp makes testing simple, it is also harder to extend and more difficult to update. If something you need is not provided by the mega-runtime, you will have to provide it yourself (e.g. we will have some applications ship smaller shared libraries with their binaries, as they are not part of the big runtime).

Limba does not have this issue, but with its dynamic runtimes it instead relies on upstreams behaving nicely and not breaking ABIs in security updates, so that existing applications continue to work even with newer software components.

Obviously, I like the Limba approach more, since it is incredibly flexible and even allows mimicking the behaviour of GNOME's XdgApp by using absolute dependencies on components.

Do you have an example of a Limba-distributed application?

Yes! I recently created a set of packages for Neverball - Alexander Larsson also created an XdgApp bundle for it, and due to the low amount of stuff Neverball depends on, it was a perfect test subject.

One of the main things I want to achieve with Limba is to integrate it well with continuous integration systems, so you can automatically get a Limba package built for your application and have it tested with the current set of dependencies. Also, building packages should be very easy, and as failsafe as possible.

You can find the current Neverball test in the Limba-Neverball repository on Github. All you need (after installing Limba and the build dependencies of all components) is to run the make_all.sh script.

Later, I also want to provide helper tools to automatically build the software in a chroot environment, and to allow building against the exact version depended on in the Limba package.

Creating a Limba package is trivial: it boils down to creating a simple "control" file describing the dependencies of the package, and writing an AppStream metadata file. If you feel adventurous, you can also add automatic build instructions as a YAML file (which uses a subset of the Travis build config schema).
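
As a rough illustration - the field names here are a guess, so treat this as a hypothetical sketch and check the Limba documentation for the real schema - a control file is little more than a format version and a dependency list:

# control - hypothetical sketch of a Limba package description;
# field names are illustrative, not necessarily the real schema
Format-Version: 1.0

Requires: SDL2 (>= 2.0.0), libpng (>= 1.6.0)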

This is the Neverball Limba package, built on Tanglu 3, run on Fedora 21:

Limba-installed Neverball

Which kernel do I need to run Limba?

The Limba build tools run on any Linux version, but to run applications installed with Limba, you need at least Linux 3.18 (for Limba 0.4.2). I plan to bump the minimum version requirement to Linux 4.0+ very soon, since this release contains some improvements in OverlayFS and a few other kernel features I am thinking about making use of.

Linux 3.18 is included in most Linux distributions released in 2015 (and of course any rolling release distribution and Fedora have it).

Building all these little Limba packages and keeping them up-to-date is annoying…

Yes indeed. I expect that we will see some "bigger" Limba packages bundling a few dependencies, but in general this is a pretty annoying property of Limba currently, since there are so few packages available for you to reuse. But I plan to address this: behind the scenes, I am working on a webservice which will allow developers to upload Limba packages.

This central resource can then be used by other developers to obtain dependencies. We can also perform some QA on the received packages, map the available software against CVE databases to see if a component is vulnerable and publish that information, etc.

All of this is currently planned, and I can't say a lot more yet. Stay tuned! (As always: If you want to help, please contact me)

Are the Limba interfaces stable? Can I use it already?

The Limba package format should be stable by now. Since Limba is still alpha software, I will, however, make breaking changes in case there is a huge flaw which makes it reasonable to break the IPK package format. I don't think that this will happen, though, as the Limba packages are designed to be easily backward- and forward-compatible.

For the Limba repository format, I might make some more changes though (less invasive, but you might need to rebuild the repository).

tl;dr: Yes! Please use Limba and report bugs, but keep in mind that Limba is still at an early stage of development, and we need those bug reports!

Will there be integration into GNOME-Software and Muon?

From the GNOME-Software side there were positive signals about that, but some technical obstacles need to be resolved first. I have not yet been in contact with the Muon crew - they are just now implementing AppStream, which is a prerequisite for having any support for Limba[1].

Since PackageKit dropped support for plugins, every software manager needs to implement support for Limba.


So, thanks for reading this (again too long) blogpost :) There are some more exciting things coming soon, especially regarding AppStream on Debian/Ubuntu!

[1]: And I should actually help with the AppStream support, but currently I cannot allocate enough time to take on that additional project as well - this might change in a few weeks. Also, Muon does pretty well already!

30 Mar 2015 7:46pm GMT

25 Mar 2015

feedplanet.freedesktop.org

Bastien Nocera: GNOME 3.16 is out!

Did you see?

It will obviously be in Fedora 22 Beta very shortly.


What happened since 3.14? Quite a bit, and a number of unfinished projects will hopefully come to fruition in the coming months.

Hardware support

After quite a bit of back and forth, automatic rotation for tablets will not be included directly in systemd/udev, but instead in a separate D-Bus daemon. The daemon has support for other sensor types, Ambient Light Sensors (ColorHug ALS amongst others) being the first ones. I hope we have compass support soon too.

Support for the Onda v975w's touchscreen and accelerometer is now upstream. Work is ongoing for the Wi-Fi driver.

I've started some work on supporting the much hated Adaptive keyboard on the X1 Carbon 2nd generation.

Technical debt

In the last cycle, I've worked on triaging gnome-screensaver, gnome-shell and gdk-pixbuf bugs.

The first got merged into the second, the second got plenty of outdated bugs closed, and priorities re-evaluated as a result.

I wrangled old patches and cleaned up gdk-pixbuf. We still have architectural problems in the library for huge images, but at least we're at a state where we know what the problems are, rather than having them buried in Bugzilla.

Foundation building

A couple of projects got started that haven't reached maturity yet. I'm pretty happy that we're able to use gnome-books (part of gnome-documents) today to read comic books. ePub support is coming!



Grilo saw plenty of activity. The oft-requested "properties" page in Totem is closer than ever, as is series grouping.

In December, Allan and I met with the ABRT team, and we've landed some changes we discussed there, including a simple "Report bugs" toggle in the Privacy settings, with a link to the OS' privacy policy. The gnome-abrt application had a facelift, but we got somewhat stuck on technical problems, which should get solved in the next cycle. The notifications were also streamlined and simplified.



I'm a fan

Of the new overlay scrollbars, and the new gnome-shell notification handling. And I'm cheering on the new app in 3.16, GNOME Calendar.

There's plenty more new and interesting stuff in the release, but I would just be duplicating much of the GNOME 3.16 release notes.

25 Mar 2015 3:23pm GMT