31 Oct 2014

Planet Debian

Richard Hartmann: Release Critical Bug report for Week 44

The UDD bugs interface currently knows about the following release critical bugs:

Graphical overview of bug stats thanks to azhag:

31 Oct 2014 11:37pm GMT

Russell Coker: Links October 2014

The Verge has an interesting article about Tim Cook (Apple CEO) coming out [1]. Tim says "if hearing that the CEO of Apple is gay can help someone struggling to come to terms with who he or she is, or bring comfort to anyone who feels alone, or inspire people to insist on their equality, then it's worth the trade-off with my own privacy".

Graydon2 wrote an insightful article about the right-wing libertarian sock-puppets of Silicon Valley [2].

George Monbiot wrote an insightful article for The Guardian about the way that double-speak facilitates killing people [3]. He is correct that the media should hold government accountable for such use of language instead of perpetuating it.

Anne Thériault wrote an insightful article for Vice about the presumption of innocence and sex crimes [4].

Dr Nerdlove wrote an interesting article about Gamergate as the "extinction burst" of "gamer culture" [5]. We can only hope.

Shweta Narayan wrote an insightful article about Category Structure and Oppression [6]. I can't summarise it because it's a complex concept, read the article.

Some Debian users who don't like Systemd have started a "Debian Fork" project [7], which so far just has a web site and nothing else. I expect that they will never write any code. But it would be good if they did: they would learn about how an OS works and maybe they wouldn't disagree so much with the people who have experience in developing system software.

A GamerGate terrorist in Utah forced Anita Sarkeesian to cancel a lecture [8]. I expect that the reaction will be different when (not if) an Islamic group tries to get a lecture cancelled in a similar manner.

Model View Culture has an insightful article by Erika Lynn Abigail about Autistics in Silicon Valley [9].

Katie McDonough wrote an interesting article for Salon about Ed Champion and what to do about men who abuse women [10]. It's worth reading that while thinking about the FOSS community…

Related posts:

  1. Links September 2014 Matt Palmer wrote a short but informative post about enabling...
  2. Links July 2014 Dave Johnson wrote an interesting article for Salon about companies...
  3. Links August 2014 Matt Palmer wrote a good overview of DNSSEC [1]. Sociological...

31 Oct 2014 1:55pm GMT

Russell Coker: Samsung Galaxy Note 3

In June last year I bought a Samsung Galaxy Note 2 [1]. Generally I was very happy with that phone; the one problem I had is that less than a year after purchasing it the Ingress menus burned into the screen [2].

2 weeks ago I bought a new Galaxy Note 3. One of the reasons for getting it is the higher resolution screen; I never realised the benefits of a 1920*1080 screen on a phone until my wife got a Nexus 5 [3]. I had been idly considering a Galaxy Note 4, but $1000 is a lot of money to pay for a phone and I'm not sure that a 2560*1440 screen will offer much benefit at that size. Also the Note 3 and Note 4 both have 3G of RAM, and as some applications use more RAM when you have a higher resolution screen, the Note 4 will effectively have less usable RAM than the Note 3.

My first laptop cost me $3,800 in 1998; that's probably more than $6,000 in today's money. The benefits that I receive now from an Android phone are in many ways greater than I received from that laptop, and that laptop was definitely good value for money for me. If the cheapest Android phone cost $6,000 then I'd pay that, but given that the Note 3 is only $550 (including postage) there's no reason for me to buy something more expensive.

Another reason for getting a new phone is the limited storage space in the Note 2: 16G of internal storage is a real limit when you have some big games installed, and the recent Android update which prevented apps from writing to the SD card meant that it was no longer convenient to put TV shows on my SD card. The Note 2 has 16G of internal storage plus an 8G SD card (which I couldn't fully use due to Android limitations), while the Note 3 has 32G (the 64G version wasn't on sale at any of the cheap online stores). That 32G of internal storage allows me to fit everything I want, including the music video collection I downloaded with youtube-dl. The Note 3 also supports an SD card, which will be good for my music video collection at some future time; this is a significant benefit over the Nexus 5.

In the past I've written about Android service life and concluded that storage is the main issue [4]. So it is a bit unfortunate that I couldn't get a phone with 64G of storage at a reasonable price. But the upside is that getting a cheaper phone allows me to buy another one sooner and give the old phone to a relative who has less demanding requirements.

In the past I wrote about the warranty support for my wife's Nexus 5 [5]. I should have followed up on that earlier: 3 days after that post we received a replacement phone. One good thing that Google does is to reserve money on a credit card for the price of a new phone and then send you the new phone before you send the old one back. If the customer doesn't end up sending the broken phone back they just get billed for the new one, which avoids excessive delays in getting a replacement phone. So overall the process of Google warranty support is really good; if 2 products are equal in other ways then it would be best to buy from Google to get that level of support.

I considered getting a Nexus 5 as the hardware is reasonably good (not the greatest but quite good enough) and the price is also reasonably good. But one thing I really hate is the way they do the buttons. Having the home button appear on the main part of the display is really annoying. I much prefer the Samsung approach of having a hardware button for home and touch-screen buttons outside the viewable area for settings and back. Also the stylus on the Note devices is convenient on occasion.

The Note 3 has a fake-leather back. The concept of making fake leather is tacky; I believe that it's much better to make honest plastic that doesn't pretend to be something that it isn't. However the texture of the back improves the grip, and the fake stitches around the edge help with the grip too. It's tacky but utilitarian.

The Note 3 is slightly smaller and lighter than the Note 2. This is a good technical achievement, but I'd rather they just gave it a bigger battery.

Update: USB 3

One thing I initially forgot to mention is that the Note 3 has USB 3. This means that it has a larger socket which is less convenient when you try to plug it in at night. USB 3 seems unlikely to provide any benefit for me as I've never had any of my other phones transfer data at rates more than about 5MB/s. Even if the Note 3 happens to have storage that can handle speeds greater than the 32MB/s a typical USB 2 storage device can handle, I'm still not going to gain much benefit: USB 2 speeds would allow me to transfer the entire contents of a Note 3 in less than 20 minutes (32G at 32MB/s is about 1,000 seconds, or roughly 17 minutes), if I ever needed to copy the entire storage contents. I can't imagine myself having a real-world benefit from anything faster.

The larger socket means more fumbling when charging my phone at night, and it also means that the Note 3 cable can't be used with any other phone I own. In a year or two my wife will have a phone with USB 3 support and then that cable can be used for charging 2 phones. But at the moment the USB 3 cable isn't very useful as I don't need a phone charger cable that can only charge one phone.

Conclusion

The Note 3 basically does everything I expected of it. It's just like the Note 2 but a bit faster and with more storage. I'm happy with it.

Related posts:

  1. Samsung Galaxy Note 2 A few weeks ago I bought a new Samsung Galaxy...
  2. Samsung Galaxy S3 First Review with Power Case My new Samsung Galaxy S3 arrived a couple of days...
  3. Samsung Galaxy Camera - a Quick Review I recently had a chance to briefly play with the...

31 Oct 2014 1:40pm GMT

Konstantinos Margaritis: SIMD on javascript, MHO

I just read the Mozilla announcement on SIMD.js and I can say I have mixed feelings about it.

I don't usually comment on other news/blogs/announcements, but this is an exception.

On one hand, I definitely welcome more SIMD use everywhere, having been a SIMD advocate and enthusiast for many years (since 2004 actually). So seeing more of it, and from someone such as Mozilla, that's even better! On the other hand, wait, that's SIMD in Javascript?!? Really? Why? As if we had already covered SIMD natively everywhere else, including the browser itself. (No browser uses SIMD extensively in its core, though that would prove to be of actual benefit; the only SIMD code I know of is in the media playback code, which is usually some external library like ffmpeg/x264/etc that already has SIMD-optimized parts anyway.)

So, instead of using resources to optimize the core browser with SIMD (I'm sure there would be plenty of opportunities in the codebase for such optimizations) so that every web application, including Javascript ones, would be optimized - or even optimizing the JS JITs themselves - Mozilla wants to push the effort onto web app developers, who are to use SIMD.js to do the equivalent of what SIMD coders have been doing in native apps for a long time now, only for JS apps.

Ok, so what's the gain? I read the PDF presentation that shows mandelbrot.js going from 9 FPS to 37 FPS using SIMD.js. Admittedly that's impressive. But it also proves that the whole buzz about lower energy footprint computers and power efficiency is just useless. Why is that? For comparison, running Xaos (a fractal/mandelbrot program) on my very low end PC (2-core Athlon X2, AM2 socket, so DDR2) gives me ~250 FPS, and I'm not even sure it's using SSE at all (from a simple check it doesn't). Zooming is realtime and at full detail. In the same talk, there was a benchmark of LLVM-based Javascript being about as fast as C++, i.e. around 1.5x the native running time. I admit I haven't tried the tests listed, but the mandelbrot test was using asm.js, and 9 is definitely not 250/1.5. But I guess I'm just picky.

So, the latest trend of moving everything to the browser and JS means that instead of optimizing my apps to run great natively, instead of making stuff run faster on my 5W big.LITTLE 8-core ARM SoC, I have to get a much more power-hungry CPU to see the same performance. I'm totally against that. I want my newer CPUs, which are more energy-efficient and faster, to actually feel like THAT. I don't want to upgrade just to experience the performance of a 486 from 20 years ago!

The talk mentioned HTML5 (and hence javascript) overtaking all other platforms for application development everywhere, including the smartphones. I certainly hope that's not the case, and I know of many people who also don't feel that way. We're not buying the "Everything on the web/cloud" paradigm, but I guess we're just a minority.

I could go on for a long time, but I have an actual SIMD-related bug to fix, cheers.

Note: I used to have comments enabled on my blog, but moderating spam was too time consuming, even with CAPTCHAs, so I disabled them entirely. If anyone can suggest a better method, I'd gladly take advice (I have been thinking about Disqus, but I'm not sure it's actually a good solution).

31 Oct 2014 10:54am GMT

30 Oct 2014

Planet Debian

Chris Lamb: Are you building an internet fridge?

Mikkel Rasmussen:

If you look at the idea of "The Kitchen of Tomorrow" as IKEA thought about it, the core idea is that cooking is slavery.

It's the idea that technology can free us from making food. It can do it for us. It can recognise who we are, we don't have to be tied to the kitchen all day, we don't have to think about it.

Now if you're an anthropologist, they would tell you that cooking is perhaps one of the most complicated things you can think about when it comes to the human condition. If you think about your own cooking habits they probably come from your childhood, the nation you're from, the region you're from. It takes a lot of skill to cook. It's not so easy.

And actually, it's quite fun to cook. There's also a lot of improvisation. I don't know if you've ever come home to a fridge and you just look into the fridge: oh, there's a carrot and some milk and some white wine and you figure it out. That's what cooking is like - it's a very human thing to do.

[Image: https://chris-lamb.co.uk/wp-content/2014/fridge.jpg - "The physical version of your smart recipe site?"]


Therefore, if you think about it, having anything that automates this for you or decides for you or improvises for you is actually not doing anything to help you with what you want to do, which is that it's nice to cook.

More generally, if you make technology-for example-that has at its core the idea that cooking is slavery and that idea is wrong, then your technology will fail. Not because of the technology, but because it simply gets people wrong.

This happens all the time. You cannot swing a cat these days without hitting one of those refrigerator companies that make smart fridges. I don't know if you've ever seen them - the "intelligent fridge". There are so many of them that there is actually a website called "Fuck your internet fridge" by a guy who tracks failed prototypes of intelligent fridges.

Why? Because the idea is wrong. Not the technology, but the idea about who we are - that we do not want the kitchen to be automated for us.

We want to cook. We want Japanese knives. We want complicated cooking. And so what we are saying here is not that technology is wrong as such. It's just that you need to base it - especially when you are innovating really big ideas - on something that's a true human insight. And cooking-as-slavery is not a true human insight, and therefore the prototypes will fail.

(I hereby nominate "internet fridge" as the term to describe products or ideas that - whilst technologically sound - are based on fundamentally flawed anthropology.)

Hearing "I hate X" and thinking that simply removing X will provide real value to your users is short-sighted, especially when you don't really understand why humans are doing X in the first place.

30 Oct 2014 6:00pm GMT

Matthew Garrett: Hacker News metrics (first rough approach)

I'm not a huge fan of Hacker News[1]. My impression continues to be that it ends up promoting stories that align with the Silicon Valley narrative (meritocracy, technology will fix everything, regulation is the cancer killing agile startups) and discouraging stories that suggest that the world of technology is, broadly speaking, awful and we should all be ashamed of ourselves.

But as a good data-driven person[2], wouldn't it be nice to have numbers rather than just handwaving? In the absence of a good public dataset, I scraped Hacker Slide to get just over two months of data in the form of hourly snapshots of stories, their age, their score and their position. I then applied a trivial test:

  1. If the story is younger than any other story
  2. and the story has a higher score than that other story
  3. and the story has a worse ranking than that other story
  4. and at least one of these two stories is on the front page

then the story is considered to have been penalised.

(note: "penalised" can have several meanings. It may be due to explicit flagging, or it may be due to an automated system deciding that the story is controversial or appears to be supported by a voting ring. There may be other reasons. I haven't attempted to separate them, because for my purposes it doesn't matter. The algorithm is discussed here.)

Now, ideally I'd classify my dataset based on manual analysis and classification of stories, but I'm lazy (see [2]) and so just tried some keyword analysis:

Keyword Penalised Unpenalised
Women 13 4
Harass 2 0
Female 5 1
Intel 2 3
x86 3 4
ARM 3 4
Airplane 1 2
Startup 46 26


A few things to note:

  1. Lots of stories are penalised. Of the front page stories in my dataset, I count 3240 stories that have some kind of penalty applied, against 2848 that don't. The default seems to be that some kind of detection will kick in.
  2. Stories containing keywords that suggest they refer to issues around social justice appear more likely to be penalised than stories that refer to technical matters.
  3. There are other topics that are also disproportionately likely to be penalised. That's interesting, but not really relevant - I'm not necessarily arguing that social issues are penalised out of an active desire to make them go away, merely that the existing ranking system tends to result in it happening anyway.


This clearly isn't an especially rigorous analysis, and in future I hope to do a better job. But for now the evidence appears consistent with my innate prejudice - the Hacker News ranking algorithm tends to penalise stories that address social issues. An interesting next step would be to attempt to infer whether the reasons for the penalties are similar between different categories of penalised stories[3], but I'm not sure how practical that is with the publicly available data.

(Raw data is here, penalised stories are here, unpenalised stories are here)


[1] Moving to San Francisco has resulted in it making more sense, but really that just makes me even more depressed.
[2] Ha ha like fuck my PhD's in biology
[3] Perhaps stories about startups tend to get penalised because of voter ring detection from people trying to promote their startup, while stories about social issues tend to get penalised because of controversy detection?


30 Oct 2014 3:19pm GMT

EvolvisForge blog: Tip of the day: bind tomcat7 to loopback i/f only

We already edit /etc/tomcat7/server.xml after installing the tomcat7 Debian package, to get it to talk AJP instead of HTTP (so we can use libapache2-mod-jk to put it behind an Apache 2 httpd, which also terminates SSL):

We already comment out the block…

    <Connector port="8080" protocol="HTTP/1.1"  
               connectionTimeout="20000"
               URIEncoding="UTF-8"
               redirectPort="8443" />

… and remove the comment chars around the line…

    <Connector port="8009" protocol="AJP/1.3" redirectPort="8443" />

… so all we need to do is edit that line to make it look like…

    <Connector address="127.0.0.1" port="8009" protocol="AJP/1.3" redirectPort="8443" />

… and we're all set.

(Your apache2 vhost needs a line

JkMount /?* ajp13_worker

and everything Just Works™ with the default configuration.)
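For reference, a minimal sketch of such a vhost - the hostname and certificate paths are placeholders, and ajp13_worker is assumed to be the worker defined in mod_jk's workers.properties (as in the Debian default):

    <VirtualHost *:443>
        ServerName app.example.com
        SSLEngine on
        SSLCertificateFile    /etc/ssl/certs/app.example.com.pem
        SSLCertificateKeyFile /etc/ssl/private/app.example.com.key
        # hand all requests to Tomcat via the AJP worker
        JkMount /?* ajp13_worker
    </VirtualHost>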

Now, tomcat7 is only accessible from localhost (Legacy IP), and we don't need to firewall the AJP (or HTTP/8080) port. Do make sure your Apache 2 access configuration works, though ☺

30 Oct 2014 2:17pm GMT

Alessio Treglia: Handling identities in distributed Linux cloud instances

I have many distributed Linux instances across several clouds, be they global, such as Amazon or Digital Ocean, or regional clouds such as TeutoStack or Enter.

Probably many of you are facing the same issue: having a consistent UNIX identity across all instances. While in an ideal world LDAP would be a perfect choice, leaving LDAP open to the wild Internet is not a great idea.

So, how to solve this issue, while being secure? The trick is to use the new NSS module for SecurePass.

While SecurePass has traditionally been used in the operating system just for two-factor authentication, the new beta release is capable of holding "extended attributes", i.e. arbitrary information for each user profile.

We will use SecurePass to authenticate users and store Unix information with this new capability. In detail, we will store the Unix account details as extended attributes, configure NSS to resolve users through SecurePass, and configure PAM to authenticate them via RADIUS.

SecurePass and extended attributes

The next generation of SecurePass (currently in beta) is capable of storing arbitrary data for each profile. This is called "Extended Attributes" (or xattrs) and - as you can imagine - is organized as key/value pairs.

You will need the SecurePass tools to be able to modify users' extended attributes. The new releases of Debian Jessie and Ubuntu Vivid Vervet have a package for it, just:

# apt-get install securepass-tools

ERRATA CORRIGE: securepass-tools hasn't been uploaded to Debian yet, Alessio is working hard to make the package available in time for Jessie though.

For other distributions or previous releases, there's a python package (PIP) available. Make sure that you have pycurl installed and then:

# pip install securepass-tools

While the SecurePass tools allow a local configuration file, for this tutorial we highly recommend creating a global /etc/securepass.conf, so that it will also be usable by the NSS module. The configuration file looks like:

[default]
app_id = xxxxx
app_secret = xxxx
endpoint = https://beta.secure-pass.net/

Where app_id and app_secret are valid API keys to access the SecurePass beta.

Through the command line, we will be able to set UID, GID and all the required Unix attributes for each user:

# sp-user-xattrs user@domain.net set posixuid 1000

While posixuid is the bare minimum attribute needed for a Unix login, further attributes (covering GID, home directory, shell and so on) are also valid.

Install and Configure NSS SecurePass

In a similar way to the tools, Debian Jessie and Ubuntu Vivid Vervet have a native package for SecurePass:

# apt-get install libnss-securepass

Previous releases of Debian and Ubuntu can still run the NSS module, as can CentOS and RHEL. Download the sources from:

https://github.com/garlsecurity/nss_securepass

Then:

./configure
make
make install (Debian/Ubuntu Only)

For CentOS/RHEL/Fedora you will need to copy files in the right place:

/usr/bin/install -c -o root -g root libnss_sp.so.2 /usr/lib64/libnss_sp.so.2
ln -sf libnss_sp.so.2 /usr/lib64/libnss_sp.so

The /etc/securepass.conf configuration file should be extended to hold defaults for NSS by creating an [nss] section as follows:

[nss]
realm = company.net
default_gid = 100
default_home = "/home"
default_shell = "/bin/bash"

This provides defaults for users whose profiles set nothing beyond posixuid. Next we need to configure the Name Service Switch (NSS) to use SecurePass. We will change /etc/nsswitch.conf by adding "sp" to the passwd entry as follows:

$ grep sp /etc/nsswitch.conf
 passwd:     files sp

Double check that NSS is picking up our new SecurePass configuration by querying the passwd entries as follows:

$ getent passwd user
 user:x:1000:100:My User:/home/user:/bin/bash
$ id user
 uid=1000(user)  gid=100(users) groups=100(users)

Using this setup by itself wouldn't allow users to log in to a system because the password is missing. We will use SecurePass' authentication to access the remote machine.

Configure PAM for SecurePass

On Debian/Ubuntu, install the RADIUS PAM module with:

# apt-get install libpam-radius-auth

If you are using CentOS or RHEL, you need to have the EPEL repository configured. In order to activate EPEL, follow the instructions on http://fedoraproject.org/wiki/EPEL

Be aware that this has not been tested with SELinux enabled (set it to off or permissive).

On CentOS/RHEL, install the RADIUS PAM module with:

# yum -y install pam_radius

Note: at the time of writing, EPEL 7 is still in beta and does not contain the RADIUS PAM module. A request has been filed through Red Hat's Bugzilla to include this package in EPEL 7 as well.

Configure SecurePass with your RADIUS device. We only need to set the public IP address of the server, a fully qualified domain name (FQDN), and the secret password for the RADIUS authentication. If the server is behind NAT, specify the public IP address that is translated to it. After completion we get a small recap of the newly created device. For the sake of example, we use "secret" as our secret password.

Configure the RADIUS PAM module accordingly, i.e. open /etc/pam_radius.conf and add the following lines:

radius1.secure-pass.net secret 3
radius2.secure-pass.net secret 3

Of course the "secret" is the same we have set up on the SecurePass administration interface. Beyond this point we need to configure the PAM to correct manage the authentication.

In CentOS, open the configuration file /etc/pam.d/password-auth-ac; in Debian/Ubuntu open the /etc/pam.d/common-auth configuration and make sure that pam_radius_auth.so is in the list.

auth required   pam_env.so
auth sufficient pam_radius_auth.so try_first_pass
auth sufficient pam_unix.so nullok try_first_pass
auth requisite  pam_succeed_if.so uid >= 500 quiet
auth required   pam_deny.so

Conclusions

Handling many distributed Linux instances poses several challenges, from software updates to identity management and central logging. In a cloud scenario it is not always possible to apply traditional enterprise solutions, but new tools can become very handy.

To freely subscribe to securepass beta, join SecurePass on: http://www.secure-pass.net/open
And then send an e-mail to info@garl.ch requesting beta access.

30 Oct 2014 12:55pm GMT

Juliana Louback: JSCommunicator at xTupleCon 2014

Two weeks ago I left NYC for a day trip to Norfolk, Virginia, to attend xTupleCon 2014. For those who don't know, xTuple is a highly renowned open source Enterprise Resource Planning (ERP) software. If you've been following this blog, you might recall that during my participation in the Google Summer of Code 2014, I wrote a beta JSCommunicator extension for xTuple (to see how I went about doing that, look up Kickstarting the JSCommunicator-xTuple extension).

Now, I wasn't flying down to Virginia on the eve of my first grad school midterms (gasp!) for fun - although I will admit, I enjoyed myself a fair amount. My GSoC mentor, Daniel Pocock, was giving a talk about WebRTC and JSCommunicator at xTupleCon and invited me to participate. So that was the first perk of going to xTupleCon: I got to finally meet my GSoC mentor in person!

During the presentation, Daniel provided a high level explanation of WebRTC and how it works. WebRTC (Web Real Time Communications) enables real time transmission of audio, video and data streams through browser-to-browser applications. This is one of the many perks of WebRTC; it doesn't require installation of plugins, making its use less vulnerable to security breaches. WebSockets are used for the signalling channel and SIP or XMPP can be used as the signalling protocol - in JSCommunicator, we used SIP. What was also done in JSCommunicator, and can be done in other applications, is to use a library to implement the signalling and SIP stack. JSCommunicator uses (and I highly recommend) JsSIP.

The basic SIP architecture comprises peers and a SIP proxy server that supports SIP over WebSockets transport:

[Diagram: basic SIP architecture - peers connected through a WebSocket-capable SIP proxy]

If you have users behind NAT, you'll also need a TURN server for relay:

[Diagram: the same SIP architecture with a TURN server relaying media for peers behind NAT]

In both cases, setup is not too difficult, particularly if using reSIProcate which offers both the SIP proxy and the TURN server. Daniel Pocock has an excellent post on how to setup and configure your SIP proxy and TURN server.

With regard to JSCommunicator, it is a generic telephone in HTML5 which can easily be embedded in any web site or web app. Almost every aspect of JSCommunicator is easily customizable. More about JSCommunicator setup and architecture is detailed in a previous post.

The JSCommunicator-xTuple extension can be installed in xTuple as an npm package (xtuple-jscommunicator). It is still at a very beta - or even pre-beta - stage and there are various limitations; the configuration must be hard-coded and dialing is done manually as opposed to clicking on a contact. Some of these 'limitations' are features on the wish list for future work. For example, some ideas for the next version of the extension are click-to-dial from CRM records and bringing up CRM records for an incoming call. Additionally, the SIP proxy could be automatically installed with the xTuple server installation if desired.
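Assuming the standard npm workflow applies to xTuple extensions (the exact installation procedure may differ), fetching the package would be something like:

    npm install xtuple-jscommunicator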

We closed the presentation with a live demo during which I made a video call from a JSCommunicator instance embedded in freephonebox.net to the JSCommunicator xTuple extension running on Daniel's laptop. Despite the occasionally iffy hotel Wifi, the demo was a hit - at one point I even left the conference room and took a quick walk around the hotel and invited other xTupleCon attendees to say hello to those in the room. The audience's reception was more enthusiastic than I anticipated, giving way to a pretty extensive Q&A session. It's great to see more and more people interested in WebRTC, I can't emphasize enough what a useful and versatile tool it is.

Here's an 'action shot' of part of the xTuple WebRTC presentation:

[Photo: the xTuple WebRTC presentation]

30 Oct 2014 10:06am GMT

Keith Packard: Glamor cleanup


Before I start really digging in to reworking the Render support in Glamor, I wanted to take a stab at cleaning up some cruft which has accumulated in Glamor over the years. Here's what I've done so far.

Get rid of the Intel fallback paths

I think it's my fault, and I'm sorry.

The original Intel Glamor code has Glamor implement accelerated operations using GL, and when those fail, the Intel driver would fall back to its existing code, either UXA acceleration or software. Note that it wasn't Glamor doing these fallbacks; instead, the Intel driver had a complete wrapper around every rendering API, calling special Glamor entry points which would return FALSE if GL couldn't accelerate the specified operation.

The thinking was that when GL couldn't do something, it would be far faster to take advantage of the existing UXA paths than to have Glamor fall back to pulling the bits out of GL, drawing to temporary images with software, and pushing the bits back to GL.

And, that may well be true, but what we've managed to prove is that there really aren't any interesting rendering paths which GL can't do directly. For core X, the only fallbacks we have today are for operations using a weird planemask, and some CopyPlane operations. For Render, essentially everything can be accelerated with the GPU.

At this point, the old Intel Glamor implementation is a lot of ugly code in Glamor without any use. I posted patches to the Intel driver several months ago which fix the Glamor bits there, but they haven't seen any review yet and so they haven't been merged, although I've been running them since 1.16 was released...

Getting rid of this support let me eliminate all of the _nf functions exported from Glamor, along with the GLAMOR_USE_SCREEN and GLAMOR_USE_PICTURE_SCREEN parameters, along with the GLAMOR_SEPARATE_TEXTURE pixmap type.

Force all pixmaps to have exact allocations

Glamor has a cache of recently used textures that it uses to avoid allocating and de-allocating GL textures rapidly. For pixmaps small enough to fit in a single texture, Glamor would use a cache texture that was larger than the pixmap.

I disabled this when I rewrote the Glamor rendering code for core X; that code used texture repeat modes for tiles and stipples; if the texture wasn't the same size as the pixmap, then texturing would fail.

On the Render side, Glamor would actually reallocate pixmaps used as repeating texture sources. I could have fixed up the core rendering code to use this, but I decided instead to just simplify things and eliminate the ability to use larger textures for pixmaps everywhere.

Remove redundant pixmap and screen private pointers

Every Glamor pixmap private structure had a pointer back to the pixmap it was allocated for, along with a pointer to the Glamor screen private structure for the related screen. There's no particularly good reason for this, other than making it possible to pass just the Glamor pixmap private around a lot of places. So, I removed those pointers and fixed up the functions to take the necessary extra or replaced parameters.

Similarly, every Glamor fbo had a pointer back to the Glamor screen private too; I removed that and now pass the Glamor screen private parameter as needed.

Reducing pixmap private complexity

Glamor had three separate kinds of pixmap private structures: one for 'normal' pixmaps (those allocated by themselves in a single FBO), one for 'large' pixmaps, where the pixmap was tiled across many FBOs, and a third for 'atlas' pixmaps, which presumably would be a single FBO holding multiple pixmaps.

The 'atlas' form was never actually implemented, so it was pretty easy to get rid of that.

For large vs normal pixmaps, the solution was to move the extra data needed by large pixmaps into the same structure as that used by normal pixmaps and simply initialize those elements correctly in all cases. Now, most code can ignore the difference and simply walk the array of FBOs as necessary.

The other thing I did was to shrink the number of possible pixmap types from 8 down to three. Glamor now exposes just these possible pixmap types:

Future Work

30 Oct 2014 7:51am GMT

Matthew Garrett: On joining the FSF board

I joined the board of directors of the Free Software Foundation a couple of weeks ago. I've been travelling a bunch since then, so haven't really had time to write about it. But since I'm currently waiting for a test job to finish, why not?

It's impossible to overstate how important free software is. A movement that began with a quest to work around a faulty printer is now our greatest defence against a world full of hostile actors. Without the ability to examine software, we can have no real faith that we haven't been put at risk by backdoors introduced through incompetence or malice. Without the freedom to modify software, we have no chance of updating it to deal with the new challenges that we face on a daily basis. Without the freedom to pass that modified software on to others, we are unable to help people who don't have the technical skills to protect themselves.

Free software isn't sufficient for building a trustworthy computing environment, one that not merely protects the user but respects the user. But it is necessary for that, and that's why I continue to evangelise on its behalf at every opportunity.

However.

Free software has a problem. It's natural to write software to satisfy our own needs, but in doing so we write software that doesn't provide as much benefit to people who have different needs. We need to listen to others, improve our knowledge of their requirements and ensure that they are in a position to benefit from the freedoms we espouse. And that means building diverse communities, communities that are inclusive regardless of people's race, gender, sexuality or economic background. Free software that ends up designed primarily to meet the needs of well-off white men is a failure. We do not improve the world by ignoring the majority of people in it. To do that, we need to listen to others. And to do that, we need to ensure that our community is accessible to everybody.

That's not the case right now. We are a community that is disproportionately male, disproportionately white, disproportionately rich. This is made strikingly obvious by looking at the composition of the FSF board, a body made up entirely of white men. In joining the board, I have perpetuated this. I do not bring new experiences. I do not bring an understanding of an entirely different set of problems. I do not serve as an inspiration to groups currently under-represented in our communities. I am, in short, a hypocrite.

So why did I do it? Why have I joined an organisation whose founder I publicly criticised for making sexist jokes in a conference presentation? I'm afraid that my answer may not seem convincing, but in the end it boils down to feeling that I can make more of a difference from within than from outside. I am now in a position to ensure that the board never forgets to consider diversity when making decisions. I am in a position to advocate for programs that build us stronger, more representative communities. I am in a position to take responsibility for our failings and try to do better in future.

People can justifiably conclude that I'm making excuses, and I can make no argument against that other than to be asked to be judged by my actions. I hope to be able to look back at my time with the FSF and believe that I helped make a positive difference. But maybe this is hubris. Maybe I am just perpetuating the status quo. If so, I absolutely deserve criticism for my choices. We'll find out in a few years.


30 Oct 2014 12:45am GMT

29 Oct 2014

Planet Debian

Gunnar Wolf: Guests in the classroom: @chemaserralde talks about real time scheduling

Last Wednesday I had the pleasure and honor to have a great guest again at my class: José María Serralde, talking about real time scheduling. I like inviting different people to present interesting topics to my students a couple of times each semester, and I was very happy to have Chema come again.

Chema is a professional musician (formally, a pianist, although he has far more skills than that title would suggest - skills that go way beyond just music), and he had to learn the details of scheduling due to errors that appear when recording and performing.

The audio could use some cleaning, and my main camera (the only one that lasted for the whole duration) was far from professional grade, but the video works and is IMO quite interesting and well explained.

So, here is the full video (also available at The Internet archive), all two hours and 500MB of it for you to learn and enjoy!

29 Oct 2014 8:47pm GMT

Rhonda D'Vine: Feminist Year

If someone had told me that I would attend three feminist events this year I would have slowly nodded at them and responded with "yeah, sure..." not believing it. But sometimes things take their own turns.

It all started with the Debian Women Mini-Debconf in Barcelona. The organizers asked me how they should word the call for papers so that I would feel invited to give a speech, which felt very welcoming and nice. So we settled for "people who identify themselves as female". Due to private circumstances I didn't prepare well for my talk, but I hope it was still worth it. The next interesting part happened later, during the lightning talks: someone on IRC asked why there were male people giving lightning talks, which were the only part explicitly open to them. This also felt very, very nice, to be honest - that my talk wasn't questioned. Those are amongst the reasons why I wrote My place is here, my home is Debconf.

The second event I went to was the FemCamp Wien. It was my first barcamp, so I didn't know what to expect organization-wise. Topic-wise it was about Queer Feminism. And it was the first event I went to which had a policy. Granted, there was an extremely clumsily worded part in it, which naturally ended up in a shit storm on twitter (which people from both sides managed very badly, which disappointed me). Denying that there is sexism against cis-males is just a bad idea, but the background was that this wasn't the topic of this event. The background of the policy was that barcamps, and events in general, usually aren't considered that safe a place for certain people, and that this barcamp wanted to make clear that people who usually shy away from such events for fear of harassment could feel at home there.
And what can I say, this absolutely was the right thing to do. I never felt more welcomed and included at any event, including Debian events - sorry to say that so frankly. Making it clear through the policy that everyone is in the same boat with addressing each other respectfully managed to do exactly that. The first session of the event, about dominant talk patterns and how to work around or against them, also made sure that the rest of the event gave shy people a chance to speak up and feel comfortable, too. And the range of sessions that were held was simply great. This was the event where I came up with the pattern of judging the quality of an event by the sessions that I'm unable to attend. The thing that hurt me most in retrospect was that I couldn't attend the session about minorities within minorities. :/

Last but not least I attended AdaCamp Berlin. This was a small unconference/barcamp dedicated to increasing women's participation in open technology and culture, named after Ada Lovelace, who is considered the first programmer. It was a small event with only 50 slots for people who identify as women. So I was totally hyper when I received the mail that I was accepted. It was another event with a policy, and at first reading it looked strange. But given that there are people who are allergic to the ingredients of scents, it made sense to raise awareness of that topic. And given that women face a fair amount of harassment in IT and at events, it also makes sense to remind people to behave. After all it was a general policy for all AdaCamps, not for this specific one with only women.
I enjoyed the event. Totally. And that's not only because I was able to meet up with a dear friend who I haven't talked to in years, literally. I enjoyed the environment, and the sessions that were going on. And quite similarly to the FemCamp, it started off with a session that helped a lot for the rest of the event. This time it was about the Impostor Syndrome, which is extremely common among women in IT. And what can I say, I found myself in one of the slides, given that I had tweeted just the day before that I doubted I belonged there. Frankly spoken, it even crossed my mind that I was only accepted so that at least one trans person would be there. Which is pretty much what the impostor syndrome is all about, isn't it? But when I was there, it did feel right. And we had great sessions that I truly enjoyed. And I have to thank one lady once again for the great definition of feminism that she brought up during one session, which is roughly that feminism for her isn't about gender but about the equality of all people regardless of their sex or gender definition. It's about dropping this whole binary thinking. I couldn't agree more.

All in all, I totally enjoyed these events, and hope that I'll be able to attend more next year. From what I grasped all three of them are thinking of doing it again, and the FemCamp Vienna already announced the date at the end of this year's event, so I am looking forward to meeting most of these fine ladies again, if fate permits. And keep in mind, there will always be critics and haters out there, but given that they wouldn't think of attending such an event anyway in the first place, don't get wound up about it. They just try to talk you down.

P.S.: Ah, I almost forgot one thing to mention, which also helps a lot to reduce the barrier for people to attend: the catering during the day and for lunch at both FemCamp and AdaCamp (there was no organized catering at the Debian Women Mini-Debconf) removed the need for people to ask whether there would be food without meat and dairy products, by offering mostly vegan food in the first place, without even having to query the participants. Often enough people otherwise choose to go out of the event or bring their own food instead of asking for it, so this is an extremely welcoming move, too. Way to go!


29 Oct 2014 7:47pm GMT

Steve Kemp: A brief introduction to freebsd

I've spent the past thirty minutes installing FreeBSD as a KVM guest. This mostly involved fetching the ISO (I chose the latest stable release 10.0), and accepting all the defaults. A pleasant experience.

As I'm running KVM inside screen I wanted to see the boot prompt, etc, via the serial console, which took two distinct steps:

To configure boot messages to display via the serial console, issue the following command as the superuser:

 # echo 'console="comconsole"' >> /boot/loader.conf

To get a login: prompt you'll want to edit /etc/ttys and change "off" to "on" and "dialup" to "vt100" for the ttyu0 entry. Once you've done that, reload init via:

 # kill -HUP 1
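For reference, the edited ttyu0 entry in /etc/ttys would end up looking roughly like this (the getty type string may differ on your install, so treat it as a sketch rather than a literal diff):

 # before:
 ttyu0  "/usr/libexec/getty std.9600"  dialup  off  secure
 # after:
 ttyu0  "/usr/libexec/getty std.9600"  vt100   on   secure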

Enable remote root logins, if you're brave, or disable PAM and password authentication if you're sensible:

 vi /etc/ssh/sshd_config
 /etc/rc.d/sshd restart

Configure the system to allow binary package installation - to be honest I was hazy on why this was required, but I ran the two commands and it all worked out:

 pkg
 pkg2ng

Now you may install a package via a simple command such as:

 pkg add screen

Removing packages you no longer want is as simple as using the delete option:

 pkg delete curl

You can see installed packages via "pkg info", and there are more options to be found via "pkg help". In the future you can apply updates via:

 pkg update && pkg upgrade

Finally I've installed 10.0-RELEASE, which can be upgraded in the future via "freebsd-update" - this seems to boil down to "freebsd-update fetch" and "freebsd-update install", but I'm hazy on that just yet. For the moment you can see your installed version via:

 uname -a ; freebsd-version

Expect my future CPAN releases, etc, to be tested on FreeBSD too now :)

29 Oct 2014 6:37pm GMT

Patrick Matthäi: geoip and geoip-database news!

Hi,

geoip version 1.6.2-2 and geoip-database version 20141027-1 are now available in Debian unstable/sid, and more free databases are now available :)

geoip changes:

   * Add patch for geoip-csv-to-dat to add support for building GeoIP city DB.
     Many thanks to Andrew Moise for contributing!
   * Add and install geoip-generator-asn, which is able to build the ASN DB. It
     is a modified version from the original geoip-generator. Much thanks for
     contributing also to Aaron Gibson!
   * Bump Standards-Version to 3.9.6 (no changes required).

geoip-database changes:

   * New upstream release.
   * Add new databases GeoLite city and GeoLite ASN to the new package
     geoip-database-extra. Also bump build depends on geoip to 1.6.2-2.
   * Switch to xz compression for the orig tarball.

Many thanks to both contributors!

29 Oct 2014 3:43pm GMT

Mike Gabriel: Join us at "X2Go: The Gathering 2014"

TL;DR: Those of you who are not able to join "X2Go: The Gathering 2014"... Join us on IRC (#x2go on Freenode) over the coming weekend. We will provide information, URLs to our TinyPads, etc. there. Spontaneous visitors are welcome during the working sessions (please let us know if you plan to come around), but we don't have spare beds for accommodation anymore. (We are still trying hard to set up some sort of video coverage - be it live streaming or recorded sessions, this is still open; people who can offer help, see below.)

Our event "X2Go: The Gathering 2014" is approaching quickly. We will meet with a group of 13-15 people (number of people is still slightly fluctuating) at Linux Hotel, Essen. Thanks to the generous offerings of the Linux Hotel [1] to FLOSS community projects, costs of food and accommodation could be kept really low and affordable to many people.

We are very happy that people from outside Germany are coming to that meeting (Michael DePaulo from the U.S., Kjetil Fleten (http://fleten.net) from Denmark / Norway). And we are also proud that Martin Wimpress (Mr. Ubuntu MATE Remix) will join our gathering.

In advance, I want to send a big THANK YOU to all people who will sponsor our weekend, either by sending gift items, covering travel expenses or providing help and knowledge to make this event a success for the X2Go project and its community around.


29 Oct 2014 11:27am GMT