16 Apr 2015


Bernard Cafarelli: Removing old NX packages from tree

I already sent the last rites announcement a few days ago, but here is a more detailed post on the upcoming removal of "old" NX packages. Long story short: migrate to X2Go if possible, or use the NX overlay ("best-effort" support provided).

Affected packages

Basically, all NX clients and servers except x2go and nxplayer! Here is the complete list with some specific last rites reasons:

Continue using these packages on Gentoo

These packages will be dropped from the main tree by the end of this month (2015/04), and then only available in the NX overlay. They will still be supported there in a "best effort" way (no guarantee how long some of these packages will work with current systems).

So, if one of these packages still works better for you, or you need to keep them around before migrating, just run:
# layman -a nx
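Once the overlay is added, the packages can be installed from it as usual, for example (net-misc/nxclient is only an illustration here - substitute whichever of the affected packages you need):

# emerge --ask net-misc/nxclient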

Alternatives

While it is not a direct drop-in replacement, x2go is the most complete solution currently in the Gentoo tree (and my recommendation), with many possible advanced features, active upstream development, and more. You can connect to net-misc/x2goserver with net-misc/x2goclient, net-misc/pyhoca-gui, or net-misc/pyhoca-cli.

If you want to try NoMachine's (the company that created NX) new system, the client is available in Portage as net-misc/nxplayer. The server itself is not packaged yet; if you are interested in it, see bug #488334.

16 Apr 2015 9:30pm GMT

11 Apr 2015


Sebastian Pipping: Firefox: You may want to update to 37.0.1

I was pointed to this Mozilla Security Advisory:

Certificate verification bypass through the HTTP/2 Alt-Svc header
https://www.mozilla.org/en-US/security/advisories/mfsa2015-44/

While it doesn't say whether all versions prior to 37.0.1 are affected, it does say that sending a certain server response header disabled warnings about invalid SSL certificates for that domain. Oops.

I'm not sure how relevant HTTP/2 is by now.

11 Apr 2015 7:23pm GMT

Paweł Hajdan, Jr.: Tricks for resolving slot conflicts and blockers

Slot conflicts can be annoying. It's worse when an attempt to fix them leads to an even bigger mess. I hope this post helps you with some cases - and that portage will keep getting smarter about resolving them automatically.

Read more »

11 Apr 2015 4:09pm GMT

Sebastian Pipping: “Your browser fingerprint appears to be unique among the 5,198,585 tested so far”. What?!

While https://panopticlick.eff.org/ is not really new, I learned about that site only recently.
And while I knew that browser self-identification would reduce my anonymity on the Internet, I didn't expect this result:

Your browser fingerprint appears to be unique among the 5,198,585 tested so far.

Wow. Why? Let's try one of the other browsers I use. "Appears to be unique", again (where Flash is enabled).

What's so unique about my setup? Here is what the reports say:

Characteristic                One in x browsers have this value
                              Firefox         Google Chrome   Chromium
                              36.0.1          42.0.2311.68    41.0.2272.76
User Agent                    2,030.70        472,599.36      16,576.56
HTTP_ACCEPT Headers           12.66           5,477.97        5,477.97
Browser Plugin Details        577,620.56      259,929.65      7,351.75
Time Zone                     6.51            6.51            6.51
Screen Size and Color Depth   13.72           13.72           13.72
System Fonts                  5,198,585.00    5,198,585.00    5.10
                                                              (Flash and Java disabled)
Are Cookies Enabled?          1.35            1.35            1.35
Limited supercookie test      1.83            1.83            1.83

User agent and browser plug-ins hurt, fonts alone kill me altogether. Ouch.

Update:

Thoughts on fixing this issue:

I'm not sure how I want to fix this myself. Faking popular values (in a popular combination, so it doesn't backfire) could work via a local proxy, a browser patch, or maybe a browser plug-in. Obtaining genuinely popular value combinations is another question. Fake values can also reduce the quality of the content I am served, so I would probably not fake my screen resolution, or at least make sure not to deviate from it by much.

11 Apr 2015 1:18pm GMT

Patrick Lauer: Almost quiet dataloss

Some harddisk manufacturers have interesting ideas ... using some old Samsung disks in a RAID5 config:

[15343.451517] ata3.00: exception Emask 0x0 SAct 0x40008410 SErr 0x0 action 0x6 frozen
[15343.451522] ata3.00: failed command: WRITE FPDMA QUEUED
[15343.451527] ata3.00: cmd 61/20:20:d8:7d:6c/01:00:07:00:00/40 tag 4 ncq 147456 out
                        res 40/00:00:00:4f:c2/00:00:00:00:00/00 Emask 0x4 (timeout)                                                                                                                                                                                            
[15343.451530] ata3.00: status: { DRDY }
[15343.451532] ata3.00: failed command: WRITE FPDMA QUEUED
[15343.451536] ata3.00: cmd 61/30:50:d0:2f:40/00:00:0d:00:00/40 tag 10 ncq 24576 out
                        res 40/00:ff:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)                                                                                                                                                                                            
[15343.451538] ata3.00: status: { DRDY }
[15343.451540] ata3.00: failed command: WRITE FPDMA QUEUED
[15343.451544] ata3.00: cmd 61/a8:78:90:be:da/00:00:0b:00:00/40 tag 15 ncq 86016 out
                        res 40/00:ff:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)                                                                                                                                                                                            
[15343.451546] ata3.00: status: { DRDY }
[15343.451549] ata3.00: failed command: READ FPDMA QUEUED
[15343.451552] ata3.00: cmd 60/38:f0:c0:2b:d6/00:00:0e:00:00/40 tag 30 ncq 28672 in
                        res 40/00:00:00:4f:c2/00:00:00:00:00/00 Emask 0x4 (timeout)                                                                                                                                                                                            
[15343.451555] ata3.00: status: { DRDY }
[15343.451557] ata3: hard resetting link
[15343.911891] ata3: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
[15344.062112] ata3.00: configured for UDMA/133
[15344.062130] ata3.00: device reported invalid CHS sector 0
[15344.062139] ata3.00: device reported invalid CHS sector 0
[15344.062146] ata3.00: device reported invalid CHS sector 0
[15344.062153] ata3.00: device reported invalid CHS sector 0
[15344.062169] ata3: EH complete

Hmm, that doesn't look too good ... but mdadm still believes the RAID is functional.

And a while later things like this happen:

[ 2968.701999] XFS (md4): Metadata corruption detected at xfs_dir3_data_read_verify+0x72/0x77 [xfs], block 0x36900a0
[ 2968.702004] XFS (md4): Unmount and run xfs_repair
[ 2968.702007] XFS (md4): First 64 bytes of corrupted metadata buffer:
[ 2968.702011] ffff8802ab5cf000: 04 00 00 00 99 00 00 00 fc ff ff ff ff ff ff ff  ................
[ 2968.702015] ffff8802ab5cf010: 03 00 00 00 00 00 00 00 02 00 00 00 9e 00 00 00  ................
[ 2968.702018] ffff8802ab5cf020: 0c 00 00 00 00 00 00 00 13 00 00 00 00 00 00 00  ................
[ 2968.702021] ffff8802ab5cf030: 04 00 00 00 82 00 00 00 fc ff ff ff ff ff ff ff  ................
[ 2968.702048] XFS (md4): metadata I/O error: block 0x36900a0 ("xfs_trans_read_buf_map") error 117 numblks 8
[ 2968.702476] XFS (md4): Metadata corruption detected at xfs_dir3_data_reada_verify+0x69/0x6d [xfs], block 0x36900a0
[ 2968.702491] XFS (md4): Unmount and run xfs_repair
[ 2968.702494] XFS (md4): First 64 bytes of corrupted metadata buffer:
[ 2968.702498] ffff8802ab5cf000: 04 00 00 00 99 00 00 00 fc ff ff ff ff ff ff ff  ................
[ 2968.702501] ffff8802ab5cf010: 03 00 00 00 00 00 00 00 02 00 00 00 9e 00 00 00  ................
[ 2968.702505] ffff8802ab5cf020: 0c 00 00 00 00 00 00 00 13 00 00 00 00 00 00 00  ................
[ 2968.702508] ffff8802ab5cf030: 04 00 00 00 82 00 00 00 fc ff ff ff ff ff ff ff  ................
[ 2968.702825] XFS (md4): Metadata corruption detected at xfs_dir3_data_read_verify+0x72/0x77 [xfs], block 0x36900a0
[ 2968.702831] XFS (md4): Unmount and run xfs_repair
[ 2968.702834] XFS (md4): First 64 bytes of corrupted metadata buffer:
[ 2968.702839] ffff8802ab5cf000: 04 00 00 00 99 00 00 00 fc ff ff ff ff ff ff ff  ................
[ 2968.702842] ffff8802ab5cf010: 03 00 00 00 00 00 00 00 02 00 00 00 9e 00 00 00  ................
[ 2968.702866] ffff8802ab5cf020: 0c 00 00 00 00 00 00 00 13 00 00 00 00 00 00 00  ................
[ 2968.702871] ffff8802ab5cf030: 04 00 00 00 82 00 00 00 fc ff ff ff ff ff ff ff  ................
[ 2968.702888] XFS (md4): metadata I/O error: block 0x36900a0 ("xfs_trans_read_buf_map") error 117 numblks 8

fsck finds quite a lot of data not being where it should be.
I'm not sure who to blame here - the kernel should actively punch out any harddisk that is flopping around like a fish on land, and the md layer should hate on any device that even looks weird - but somehow "just doing a link reset" is considered enough.

I'm not really upset that an old cheap disk that is now ~9 years old decides to have dementia, but I'm quite unhappy with the firmware programming that doesn't seem to consider data loss as a problem ... (but at least it's not Seagate!)

11 Apr 2015 11:06am GMT

07 Apr 2015


Hanno Böck: How Heartbleed could've been found

tl;dr With a reasonably simple fuzzing setup I was able to rediscover the Heartbleed bug. This uses state-of-the-art fuzzing and memory protection technology (american fuzzy lop and Address Sanitizer), but it doesn't require any prior knowledge about specifics of the Heartbleed bug or the TLS Heartbeat extension. We can learn from this to find similar bugs in the future.

Exactly one year ago a bug in the OpenSSL library became public that is one of the most well-known security bugs of all time: Heartbleed. It is a bug in the code of a TLS extension that up until then was rarely known by anybody. A read buffer overflow allowed an attacker to extract parts of the memory of every server using OpenSSL.

Can we find Heartbleed with fuzzing?

Heartbleed was introduced in OpenSSL 1.0.1, which was released in March 2012, two years earlier. Many people wondered how it could've been hidden there for so long. David A. Wheeler wrote an essay discussing how fuzzing and memory protection technologies could've detected Heartbleed. It covers many aspects in detail, but in the end he only offers speculation on whether or not fuzzing would have found Heartbleed. So I wanted to try it out.

Of course it is easy to find a bug if you know what you're looking for. As best as reasonably possible, I tried not to use any specific information I had about Heartbleed. I created a setup that's reasonably simple and similar to what someone without any knowledge of the specifics of Heartbleed might try.

Heartbleed is a read buffer overflow. What that means is that an application is reading outside the boundaries of a buffer. For example, imagine an application has a space in memory that's 10 bytes long. If the software tries to read 20 bytes from that buffer, you have a read buffer overflow. It will read whatever is in the memory located after the 10 bytes. These bugs are fairly common and the basic concept of exploiting buffer overflows is pretty old. Just to give you an idea how old: Recently the Chaos Computer Club celebrated the 30th anniversary of a hack of the German BtX-System, an early online service. They used a buffer overflow that was in many aspects very similar to the Heartbleed bug. (It is actually disputed if this is really what happened, but it seems reasonably plausible to me.)
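To make the bug class concrete, here is a tiny, self-contained C example of a read buffer overflow (not OpenSSL code, just an illustration):

#include <stdio.h>
#include <string.h>

int main(void)
{
    char buf[10] = "123456789";  /* a 10-byte buffer (9 chars + NUL) */
    char out[21];

    /* Read 20 bytes from a 10-byte buffer: a read buffer overflow.
       The second half of out is whatever memory happens to sit after buf. */
    memcpy(out, buf, 20);
    out[20] = '\0';
    printf("%s\n", out);
    return 0;
}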

Fuzzing is a widely used strategy to find security issues and bugs in software. The basic idea is simple: Give the software lots of inputs with small errors and see what happens. If the software crashes you likely found a bug.

When buffer overflows happen, an application doesn't always crash. Often it will just read from (or write to, if it is a write overflow) the memory that happens to be there. Whether it crashes depends on a lot of circumstances. Most of the time read overflows won't crash your application. That's also the case with Heartbleed. There are a couple of technologies that improve the detection of memory access errors like buffer overflows. An old and well-known one is the debugging tool Valgrind. However, Valgrind slows down applications a lot (around 20 times slower), so it is not really well suited for fuzzing, where you want to run an application millions of times on different inputs.

Address Sanitizer finds more bugs

A better tool for our purpose is Address Sanitizer. David A. Wheeler calls it "nothing short of amazing", and I want to reiterate that. I think it is a tool that every C/C++ software developer should know and use for testing.

Address Sanitizer is part of the C compiler itself and has been included in the two most common compilers in the free software world, gcc and llvm. To use Address Sanitizer, one has to recompile the software with the command line parameter -fsanitize=address. It slows down applications, but only by a relatively small amount: according to their own numbers, an application using Address Sanitizer is around 1.8 times slower. This makes it feasible for fuzzing tasks.
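To get a feeling for it, the overflow example from above could be compiled and run like this (overflow.c being a hypothetical file name):

$ gcc -fsanitize=address -g overflow.c -o overflow
$ ./overflow

Instead of quietly printing garbage, the instrumented binary aborts immediately and Address Sanitizer prints a stack-buffer-overflow report with a full stack trace.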

For the fuzzing itself, a tool that recently gained a lot of popularity is american fuzzy lop (afl). It was developed by Michal Zalewski from the Google security team, who is also known by his nickname lcamtuf. As far as I'm aware, the approach of afl is unique: it adds instrumentation to an application during compilation that allows the fuzzer to detect new code paths while running the fuzzing tasks. If an interesting new code path is found, the sample that triggered it is used as the starting point for further fuzzing.

Currently afl only uses file inputs and cannot directly fuzz network input. OpenSSL has a command line tool that accepts all kinds of file inputs, so you can use it, for example, to fuzz the certificate parser. But this approach does not allow us to directly fuzz the TLS connection, because that only happens on the network layer. By fuzzing various file inputs I recently found two issues in OpenSSL, but both had already been found by Brian Carpenter, who was fuzzing OpenSSL at the same time.

Let OpenSSL talk to itself

So to fuzz the TLS network connection I had to create a workaround. I wrote a small application that creates two instances of OpenSSL that talk to each other. This application doesn't do any real networking; it just passes buffers back and forth, doing a TLS handshake between a server and a client. Each message packet is written down to a file. It will result in six files, but the last two are just empty, because at that point the handshake is finished and no more data is transmitted. So we have four files that contain actual data from a TLS handshake. If you want to dig into this, a good description of a TLS handshake is provided by the developers of OCaml-TLS and MirageOS.
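The actual selftls tool does a bit more (it writes out the packet files), but the core trick of wiring two OpenSSL instances together through memory instead of sockets looks roughly like this (a minimal sketch; certificate loading and error handling omitted):

#include <openssl/ssl.h>

void handshake_in_memory(SSL_CTX *server_ctx, SSL_CTX *client_ctx)
{
    SSL *server = SSL_new(server_ctx);
    SSL *client = SSL_new(client_ctx);
    /* Each side reads and writes memory BIOs instead of a socket. */
    BIO *s_in = BIO_new(BIO_s_mem()), *s_out = BIO_new(BIO_s_mem());
    BIO *c_in = BIO_new(BIO_s_mem()), *c_out = BIO_new(BIO_s_mem());
    char buf[4096];
    int n, i;

    SSL_set_bio(server, s_in, s_out);
    SSL_set_bio(client, c_in, c_out);
    SSL_set_accept_state(server);
    SSL_set_connect_state(client);

    /* Pump the handshake: copy whatever one side wrote into the other
       side's read BIO. This is the point where each message could be
       dumped to a packet-N file or swapped for a fuzzed replacement. */
    for (i = 0; i < 10; i++) {
        SSL_do_handshake(client);
        while ((n = BIO_read(c_out, buf, sizeof(buf))) > 0)
            BIO_write(s_in, buf, n);
        SSL_do_handshake(server);
        while ((n = BIO_read(s_out, buf, sizeof(buf))) > 0)
            BIO_write(c_in, buf, n);
    }
}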

Then I added the possibility of switching out parts of the handshake messages by files I pass on the command line. By calling my test application selftls with a number and a filename, a handshake message gets replaced by this file. So to test just the first part of the server handshake I'd call the test application, take the output file packet-1 and pass it back again to the application by running selftls 1 packet-1. Now we have all the pieces we need to use american fuzzy lop and fuzz the TLS handshake.

I compiled OpenSSL 1.0.1f, the last version that was vulnerable to Heartbleed, with american fuzzy lop. This can be done by calling ./config and then replacing gcc in the Makefile with afl-gcc. Also we want to use Address Sanitizer; to do so we have to set the environment variable AFL_USE_ASAN to 1.
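Spelled out as commands, the build amounts to something like this (the sed invocation is just one way to swap the compiler; editing the Makefile by hand works equally well):

$ cd openssl-1.0.1f
$ ./config
$ sed -i 's/^CC= gcc/CC= afl-gcc/' Makefile
$ AFL_USE_ASAN=1 make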

There are some issues when using Address Sanitizer with american fuzzy lop. Address Sanitizer needs a lot of virtual memory (many terabytes), while american fuzzy lop limits the amount of memory an application may use. It is not trivially possible to limit only the real amount of memory an application uses and not the virtual amount, therefore american fuzzy lop cannot handle this flawlessly. Different solutions for this problem have been proposed and are currently being developed. I usually go with the simplest solution: I just disable the memory limit of afl (parameter -m -1). This poses a small risk: a fuzzed input may lead an application to a state where it uses all available memory and thereby causes other applications on the same system to malfunction. In my experience this is very rare, so I usually just ignore that potential problem.

After having compiled OpenSSL 1.0.1f we have two files, libssl.a and libcrypto.a. These are static versions of OpenSSL and we will use them for our test application. We now also use afl-gcc to compile our test application:

AFL_USE_ASAN=1 afl-gcc selftls.c -o selftls libssl.a libcrypto.a -ldl

Now we run the application. It needs a dummy certificate; I have put one in the repo. To make things faster I'm using a 512 bit RSA key. This is completely insecure, but as we don't want any security here - we just want to find bugs - this is fine: a smaller key makes things faster. However, if you want to try fuzzing the latest OpenSSL development code, you need to create a larger key, because it'll refuse to accept such small keys.
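If you want to generate such a throwaway certificate yourself, something along these lines works (512 bit, self-signed, for testing only):

$ openssl req -x509 -newkey rsa:512 -nodes -subj /CN=localhost \
    -keyout key.pem -out cert.pem -days 365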

The application will give us six packet files, however the last two will be empty. We only want to fuzz the very first step of the handshake, so we're interested in the first packet. We will create an input directory for american fuzzy lop called in and place packet-1 in it. Then we can run our fuzzing job:

afl-fuzz -i in -o out -m -1 -t 5000 ./selftls 1 @@

[Screenshot: american fuzzy lop status screen]

We pass the input and output directories, disable the memory limit and increase the timeout value, because TLS handshakes are slower than common fuzzing tasks. On my test machine, afl found the first crash around six hours later. Now we can manually pass our crashing input to the test application and get a stack trace from Address Sanitizer:

==2268==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x629000013748 at pc 0x7f228f5f0cfa bp 0x7fffe8dbd590 sp 0x7fffe8dbcd38
READ of size 32768 at 0x629000013748 thread T0
#0 0x7f228f5f0cf9 (/usr/lib/gcc/x86_64-pc-linux-gnu/4.9.2/libasan.so.1+0x2fcf9)
#1 0x43d075 in memcpy /usr/include/bits/string3.h:51
#2 0x43d075 in tls1_process_heartbeat /home/hanno/code/openssl-fuzz/tests/openssl-1.0.1f/ssl/t1_lib.c:2586
#3 0x50e498 in ssl3_read_bytes /home/hanno/code/openssl-fuzz/tests/openssl-1.0.1f/ssl/s3_pkt.c:1092
#4 0x51895c in ssl3_get_message /home/hanno/code/openssl-fuzz/tests/openssl-1.0.1f/ssl/s3_both.c:457
#5 0x4ad90b in ssl3_get_client_hello /home/hanno/code/openssl-fuzz/tests/openssl-1.0.1f/ssl/s3_srvr.c:941
#6 0x4c831a in ssl3_accept /home/hanno/code/openssl-fuzz/tests/openssl-1.0.1f/ssl/s3_srvr.c:357
#7 0x412431 in main /home/hanno/code/openssl-fuzz/tests/openssl-1.0.1f/selfs.c:85
#8 0x7f228f03ff9f in __libc_start_main (/lib64/libc.so.6+0x1ff9f)
#9 0x4252a1 (/data/openssl/openssl-handshake/openssl-1.0.1f-nobreakrng-afl-asan-fuzz/selfs+0x4252a1)

0x629000013748 is located 0 bytes to the right of 17736-byte region [0x62900000f200,0x629000013748)
allocated by thread T0 here:
#0 0x7f228f6186f7 in malloc (/usr/lib/gcc/x86_64-pc-linux-gnu/4.9.2/libasan.so.1+0x576f7)
#1 0x57f026 in CRYPTO_malloc /home/hanno/code/openssl-fuzz/tests/openssl-1.0.1f/crypto/mem.c:308


We can see here that the crash is a heap buffer overflow doing an invalid read access of around 32 Kilobytes in the function tls1_process_heartbeat(). It is the Heartbleed bug. We found it.

I want to mention a couple of things that I found out while trying this. I did some things that I thought were necessary, but later it turned out that they weren't. After Heartbleed broke the news, a number of reports stated that Heartbleed was partly the fault of OpenSSL's memory management. A mail by Theo de Raadt claiming that OpenSSL has "exploit mitigation countermeasures" was widely quoted. I was aware of that, so I first tried to compile OpenSSL without its own memory management. That can be done by calling ./config with the option no-buf-freelist.

But it turns out that although OpenSSL uses its own memory management, that doesn't defeat Address Sanitizer. I could replicate my fuzzing finding with OpenSSL compiled with its default options. Although it does its own allocation management, it will still call the system's normal malloc() function for every new memory allocation. A blog post by Chris Rohlf digs into the details of the OpenSSL memory allocator.

Breaking random numbers for deterministic behaviour

When fuzzing the TLS handshake, american fuzzy lop will report a red number counting variable runs of the application. The reason is that a TLS handshake uses random numbers to create the master secret that's later used to derive cryptographic keys, and the RSA functions use random numbers as well. I wrote a patch for OpenSSL that deliberately breaks the random number generator and lets it only output ones (it didn't work with zeros, because OpenSSL will wait for non-zero random numbers in the RSA functions).
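The patch itself is not reproduced here; conceptually it boils down to replacing the RNG output with a constant, in the spirit of this hypothetical sketch (not the actual patch):

#include <string.h>

/* Hypothetical sketch of a deliberately broken RNG: fill every request
   with 0x01 bytes, so the output is deterministic and never all-zero. */
static int broken_rand_bytes(unsigned char *buf, int num)
{
    memset(buf, 0x01, num);
    return 1;  /* report success */
}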

During my tests this had no noticeable impact on the time it took afl to find Heartbleed. Still, I think it is a good idea to remove nondeterministic behavior when fuzzing cryptographic applications. Later in the handshake timestamps are also used; this can be circumvented with libfaketime, but for the initial handshake processing that I fuzzed to find Heartbleed it doesn't matter.

Conclusion

You may ask now what the point of all this is. Of course we already know where Heartbleed is, it has been patched, fixes have been deployed and it is mostly history. It's been analyzed thoroughly.

The question has been asked whether Heartbleed could've been found by fuzzing. I'm confident to say the answer is yes. One thing I should mention here, however: american fuzzy lop was already available back then, but it was barely known. It only received major attention later in 2014, after Michal Zalewski used it to find two variants of the Shellshock bug. Earlier versions of afl were much less handy to use, e.g. they didn't have 64 bit support out of the box. I remember that I failed to use an earlier version of afl with Address Sanitizer; it only became possible after a couple of issues were fixed. A lot of other things have been improved in afl, so at the time Heartbleed was found, american fuzzy lop probably wasn't in a state that would've allowed finding it in an easy, straightforward way.

I think the takeaway message is this: We have powerful tools freely available that are capable of finding bugs like Heartbleed. We should use them and look for the other Heartbleeds that are still lingering in our software. Take a look at the Fuzzing Project if you're interested in further fuzzing work. There are beginner tutorials that I wrote to show people that fuzzing is an easy way to find bugs and improve software quality.

I have already used my sample application to fuzz the latest OpenSSL code. Nothing has been found yet, but of course this could be tweaked further by trying different protocol versions, extensions and other variations in the handshake.

I also wrote a German article about this finding for the IT news webpage Golem.de.

Update:

I want to point out some feedback I got that I think is noteworthy.

On Twitter it was mentioned that Codenomicon actually found Heartbleed via fuzzing. There's a YouTube video from Codenomicon's Antti Karjalainen explaining the details. However, the way they did this was quite different: they built a protocol-specific fuzzer. The remarkable feature of afl is that it is very powerful without knowing anything specific about the protocol used. Also it should be noted that Heartbleed was found twice; the first finder was Neel Mehta from Google.

Kostya Serebryany mailed me that he was able to replicate my findings with his own fuzzer which is part of LLVM, and it was even faster.

In the comments Michele Spagnuolo mentions that by compiling OpenSSL with -DOPENSSL_TLS_SECURITY_LEVEL=0 one can use very short and insecure RSA keys even in the latest version. Of course this shouldn't be done in production, but it is helpful for fuzzing and other testing efforts.

07 Apr 2015 1:23pm GMT

05 Apr 2015


Denis Dupeyron: Why we do not have nor want R packages in the tree

Here's a bug and my response to it which both deserve a little bit more visibility than being buried under some random bug number. I'm actually surprised nobody complained about that before.

GNU R supports running
> install.packages('ggplot2')
in the R console as a user. The library ggplot2 will then be installed in the user's home. Most distros, Debian and the like, provide a package per library.

First, thank you for pointing out that it is possible to install and maintain your own packages in your $HOME. It didn't use to work, and the reason why it now does is a little further down, but I will not spoil it.

Here's my response:

Please, do not ever add R packages to the tree. There are thousands of them and they are mostly very badly written, to be polite. If you look at other distros you will see that they give an illusion of having some R packages, but almost all of them (if not all) are seriously lagging behind their respective upstream or simply unmaintained. The reason is that it's a massive amount of very frustrating and pointless work.

Upstream recommends maintaining your packages yourself in your $HOME, and we'll go with that. I sent patches a couple of years ago to fix the way this works, and now (as you can obviously see) it works correctly on all distros, not just Gentoo. Also, real scientists usually like to lock down the exact versions of the packages they use, which is not possible when they are maintained by a third party.

If you want to live on the edge, feel free to ask Benda Xu (heroxbd) for access to the R overlay repository. It serves tens of thousands of ebuilds for R packages, automatically converted from a number of sources. It mostly works, and helps preserve a seemingly low but nonetheless functional level of mental sanity for your beloved volunteer developers.
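If the overlay is registered with layman, getting it should look something like this ("R_Overlay" is an assumption here - check layman -L for the exact name):

# layman -a R_Overlay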

That, or you maintain your own overlay of packages and have it added to layman.

While we are on that subject, I would like to publicly thank André Erdmann for the fantastic work he has done over the past few years. He wrote, and still occasionally updates, the magical software which runs behind the R overlay server. Thank you, André.

05 Apr 2015 7:09am GMT

31 Mar 2015


Alexys Jacob: py3status v2.4

I'm very pleased to announce this new release of py3status, because it is by far the most contributed-to one, with a total of 33 files changed, 1625 insertions and 509 deletions !

I'll start by thanking this release's contributors, with a special mention for Federico Ceratto for his precious insights, his CLI idea and implementation, and other module contributions.

Thank you

IMPORTANT

In order to keep a clean and efficient code base, this is the last version of py3status supporting the legacy module loading and ordering; this behavior will be dropped in the next 2.5 version !

CLI commands

py3status now supports some CLI commands which allow you to get information about all the available modules and their documentation.

If you specify your own inclusion folder(s) with the -i parameter, your modules will be listed too !

$ py3status modules list
Available modules:
  battery_level          Display the battery level.
  bitcoin_price          Display bitcoin prices using bitcoincharts.com.
  bluetooth              Display bluetooth status.
  clementine             Display the current "artist - title" playing in Clementine.
  dpms                   Activate or deactivate DPMS and screen blanking.
  glpi                   Display the total number of open tickets from GLPI.
  imap                   Display the unread messages count from your IMAP account.
  keyboard_layout        Display the current keyboard layout.
  mpd_status             Display information from mpd.
  net_rate               Display the current network transfer rate.
  netdata                Display network speed and bandwidth usage.
  ns_checker             Display DNS resolution success on a configured domain.
  online_status          Display if a connection to the internet is established.
  pingdom                Display the latest response time of the configured Pingdom checks.
  player_control         Control music/video players.
  pomodoro               Display and control a Pomodoro countdown.
  scratchpad_counter     Display the amount of windows in your i3 scratchpad.
  spaceapi               Display if your favorite hackerspace is open or not.
  spotify                Display information about the current song playing on Spotify.
  sysdata                Display system RAM and CPU utilization.
  vnstat                 Display vnstat statistics.
  weather_yahoo          Display Yahoo! Weather forecast as icons.
  whoami                 Display the currently logged in user.
  window_title           Display the current window title.
  xrandr                 Control your screen(s) layout easily.
$ py3status modules details
Available modules:
  battery_level          Display the battery level.
                         
                         Configuration parameters:
                             - color_* : None means - get it from i3status config
                             - format : text with "text" mode. percentage with % replaces {}
                             - hide_when_full : hide any information when battery is fully charged
                             - mode : for primitive-one-char bar, or "text" for text percentage ouput
                         
                         Requires:
                             - the 'acpi' command line
                         
                         @author shadowprince, AdamBSteele
                         @license Eclipse Public License
                         ---
[...]

Modules changelog

Other highlights

Full changelog here.

31 Mar 2015 1:53pm GMT

Gentoo News: Gentoo announces total website makeover (not an April Fool's)

Thank you for participating in Gentoo's 2015 April Fools' joke!

Now that April 1 has passed, we shed a tear as we say goodbye not only to CGA Web™ but also to our website - our previous website, that is, which has been with us for more than a decade. Until all contents are migrated, you can find the previous version on wwwold.gentoo.org; please note that the contents found there are no longer maintained.

As this is indeed a major change, we're still working out some rough edges and would appreciate your feedback via email to www@gentoo.org or on IRC in #gentoo-www.

We hope you appreciate the new look and had a great time finding out how terrible you are at Pong, and we are looking forward to seeing your reactions once again when we celebrate the launch of the new Gentoo Disk™ set.

As for Alex, Robin, and all co-conspirators, thank you again for your participation!

The original April 1 news item is still available on the single news display page.


Old April Fools' day announcement

Gentoo Linux today announced the launch of its new totally revamped and more inclusive website which was built to conform to the CGA Web™ graphics standards.

"Our previous website served the community superbly well for the past 10 years but was frankly not as inclusive as we would have liked as it could not be viewed by aspiring community members who did not have access to the latest hardware," said a Gentoo Linux Council Member who asked not to be named.

"Dedicated community members worked all hours for many months to get the new site ready for its launch today. We are proud of their efforts and are convinced that the new site will be way more inclusive than ever and thereby deepen the sense of community felt by all," they said.

"Gentoo Linux's seven-person council determined that the interests of the community were not being served by the previous site and decided that it had to be made more inclusive," said Web project lead Alex Legler (a3li). The new site is was also available via Gopher (gopher://gopher.gentoo.org/).

"What's the use of putting millions of colours out there when so many in the world cannot appreciate them and who, indeed, may even feel disappointed by their less capable hardware platforms," he said.

"We accept that members in more fortunate circumstances may feel that a site with a 16-colour palette and an optimal screen resolution of 640 x 200 pixels is not the best fit for their needs but we urge such members to keep the greater good in mind. The vast majority of potential new Gentoo Linux users are still using IBM XT computers, storing their information on 5.25-inch floppy disks and communicating via dial-up BBS," said Roy Bamford (neddyseagoon), a Foundation trustee.

"These people will be touched and grateful that their needs are now being taken in account and that they will be able to view the Gentoo Linux site comfortably on whatever hardware they have available."

"The explosion of gratitude will ensure other leading firms such as Microsoft and Apple begin to move into conformance with CGA Web™ and it is hoped it will help bring knowledge to more and more informationally-disadvantaged people every year," said Daniel Robbins (drobbins), former Gentoo founder.

Several teams participated in the early development of the new website and would like to showcase their work:

Phase II

The second phase of the project to get Gentoo Linux to a wider user base will involve the creation of floppy disk sets containing a compact version of the operating system and a selection of software essentials. It is estimated that sets could be created using less than 700 disks each and sponsorship is currently being sought. The launch of Gentoo Disk™ can be expected in about a year.

Media release prepared by A Jackson.

Editorial inquiries: PR team.

Interviews, photography and screen shots available on request.

31 Mar 2015 12:00am GMT

30 Mar 2015


Sebastian Pipping: Sending e-mail on successful SSH login / detecting SSH log-ins

I found

Send email on SSH login using PAM

to be a great guide for setting up e-mail delivery for any successful log-in through SSH.

My current script:

#! /bin/bash
# Invoked by pam_exec for every PAM event; only react to session openings.
if [ "$PAM_TYPE" != "open_session" ]; then
  exit 0
fi

# Mail the timestamp and all PAM_* variables (user, service, remote host, ...)
# to the configured address.
cat <<-BODY | mailx -s "Log-in to ${PAM_USER:-???}@$(hostname -f) \
(${PAM_SERVICE:-???}) detected" mail@example.org
        # $(LC_ALL=C date +'%Y-%m-%d %H:%M (UTC%z)')
        $(env | grep '^PAM_' | sort)
BODY

exit 0
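
For reference, the guide linked above wires such a script up via pam_exec. Assuming the script is saved as /usr/local/sbin/login-notify.sh (a hypothetical path), the line added to the SSH PAM configuration (e.g. /etc/pam.d/sshd) would look like this:

session    optional    pam_exec.so /usr/local/sbin/login-notify.sh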

30 Mar 2015 3:22pm GMT

27 Mar 2015


Luca Barbato: Again on assert()

Since apparently there are still people not reading the fine man page.

If the macro NDEBUG was defined at the moment <assert.h> was last included, the macro assert() generates no code, and hence does nothing at all.
Otherwise, the macro assert() prints an error message to standard error and terminates the program by calling abort(3) if expression is false (i.e., compares equal to zero).
The purpose of this macro is to help the programmer find bugs in his program. The message "assertion failed in file foo.c, function do_bar(), line 1287" is of no help at all to a user.

I guess it is time to return to security and expand a bit on which are good practices and which are misguided ideas that should be eradicated to reduce the amount of Denial of Service waiting to happen.

Security issues

The term "Security issue" covers a lot of different kind of situations. Usually unhanded paths in the code lead to memory corruption, memory leaks, crashes and other less evident problems such as information leaks.

I'm focusing on crashes today; assume the others are usually more annoying or dangerous. It might be true or not, depending on the scenario:

If you are watching a movie and a glitch in the bitstream makes the application leak some memory, you would not care at all as long as you can enjoy your movie. If the same glitch makes VLC close suddenly a second before you get to see who is the mastermind behind a really twisted plot… I guess you'll scream at whoever thought it was a good idea to crash there.

If a glitch might let an attacker run arbitrary code while you are watching your movie, you'd probably prefer to have your player just crash instead.

It is a false dichotomy, since what you want is to have the glitch handled properly and to keep watching the rest of the movie w/out VLC crashing w/out any meaningful information for you.

Errors must be handled, trading a crash for something else you consider worse is just being naive.

What is assert exactly?

assert is a debugging facility mandated by POSIX, C89 and C99; it is a macro that more or less looks like this:

#define assert(condition)                                  \
    if (condition) {                                       \
        do_nothing();                                      \
    } else {                                               \
       fprintf(stderr, "%s:%d: %s: assertion failed\n",    \
               __FILE__, __LINE__, __func__);              \
       abort();                                            \
    }

If the condition does not hold, crash. Here is the real-life version from musl:

#define assert(x) ((void)((x) || (__assert_fail(#x, __FILE__, __LINE__, __func__),0)))

How to use it

Asserts should be used to verify assumptions. While developing, they help you verify whether your assumptions meet reality. If not, they tell you that you should investigate, because something is clearly wrong. They are not intended to be used in release builds.
- some wise Federico while talking about another language's asserts
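For reference, toggling that behavior is plain standard C, nothing project-specific:

cc -O2 foo.c             # assert() is compiled in and can abort()
cc -DNDEBUG -O2 foo.c    # assert() expands to nothing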

Usually when you write some code you might do something like this to make sure you aren't doing anything wrong. You start with:

int my_function_doing_difficult_computations(Structure *s)
{
    int a, b, c, idx;

    a = some_computation(s);
    ...
    b = other_operations(a, s);
    ...
    c = some_input(s, b);
    ...
    idx = some_operation(a, b, c);

    return some_lut[idx];
}

Where idx is a signed integer, and a, b and c have ranges that might or might not depend on some external input.

You do not want idx to end up outside the range of the lookup table array some_lut, and you are not so sure it can't. How do you check that you aren't getting outside the array?

When you write code you usually iteratively improve a prototype; you can add tests to make sure every function returns values within the expected range, and you can use assert() as a poor man's C version of proper unit testing.

If some function depends on values outside your control (e.g. an input file), you usually validate them and cleanly error out there. Leaving external inputs unaccounted for or, even worse, putting an assert() there is really bad.

Unit testing and assert()

We want to make sure our function works fine, so let's write a really tiny test.

void test_some_computation(void)
{
    Structure *s = NULL;
    int i = 0;

    while (input_generator(&s, i)) {
        int a = some_computation(s);
        assert(a > 0 && a < 10);
    }
}

It is compact, and you can then run your test under gdb and poke around a bit. Quite good if you are refactoring the innards of some_computation() and you want to be sure you did not overlook some corner case.

Here assert() is quite nice, since we can pack the testcase into a single line and get a simple report if something goes wrong. We could do better, though, since assert does not tell us the value or how we ended up there.

You might not be that thorough, and might just decide to put the same assert in your function and check there, assuming your regression tests cover the input space properly.

To crash or not to crash

The people who consider it OK to crash at runtime (remember the sad user who cannot watch his wonderful movie till the end?) suggest leaving the assert enabled at runtime.

If you consider the example above, would it be better to crash than to read a random value from memory? Again, this is a false dichotomy!

You can expect failures, e.g. broken bitstreams, and you want to just check for them and return a proper failure message.

In our case some_input()'s return value should be checked for failures, and the error forwarded further up to the library user, who will then decide what to do.

Now remains the access to the lookup table. If you didn't check the other functions sufficiently, you might get a bogus index, and if you get a bogus index you will read from random memory (crashing or not, depending on whether that memory is at an address mapped into the program). Do you want to have an assert() there? Or would you rather add another normal check with a normal failure path?

A correct answer is to test your code enough so that you do not need to add yet another check; in fact, if the problem arises, it is wrong to add a check there or, even worse, an assert(). You should just go up the execution path and fix the problem where it is: a non-validated input, a wrong "optimization" or something sillier.

There is open debate on whether having assert() enabled is good or bad practice when talking about defensive design. In C, in my opinion, it is a complete misuse. If you want to litter your release code with tons of branches, you can also spend the time implementing something better and making sure to clean up correctly. Calling abort() leaves your input and output possibly in a severely inconsistent state.

How to use it the wrong way

I want to trade a crash anytime the alternative is memory corruption
- some misguided guy

Assume you have something like this:

int size = some_computation(s);
uint8_t *p;
uint8_t *buf = p = malloc(size);

while (some_related_computations(s)) {
    do_stuff(s, p);
    p += 4;
}

assert(p - buf == size);

If some_computation() and some_related_computations() do not agree, you might write past the allocated buffer! The naive person above starts talking about how the memory is corrupted by do_stuff() and how horrible things (e.g. foreign code execution) could happen without the assert(), and how even calling return at that point is terrible and would lead to horrible, horrible things.

Ok, NO. Stop NOW. Go up and look at how assert is implemented. If you check at that point that something went wrong, you have the corruption already. No matter what you do, somebody could exploit it, depending on how naive or unlucky you have been.

Remember: assert() does I/O, allocates memory, raises a signal and calls functions. All things you would rather not do when your memory is corrupted are done by assert().

You can be less naive.

int size = some_computation(s);
uint8_t *p;
uint8_t *buf = p = malloc(size);

while (some_related_computations(s) && size >= 4) {
    do_stuff(s, p);
    p    += 4;
    size -= 4;
}
assert(size == 0);

But then, instead of the assert you can just add

if (size != 0) {
    msg("Something went really wrong!");
    log("The state is %p", s->some_state);
    cleanup(s);
    goto fail;
}

This way, when the "impossible" happens, the user gets a proper notification, you can recover cleanly, and no memory corruption ever happens.

Better than assert

Albeit easy to use and portable, assert() does not provide that much information; there are plenty of tools that can be leveraged to get better reporting.

In Closing

assert() is a really nice debugging tool, and it helps a lot in making sure some state remains invariant while refactoring.

Leaving asserts in release code, on the other hand, is quite wrong: it does not give you any additional safety. Please do not buy the fairy tale that assert() saves you from the scary memory corruption issues. It does NOT.

27 Mar 2015 12:25pm GMT

26 Mar 2015


Alex Legler: On Secunia’s Vulnerability Review 2015

Today, Secunia have released their Vulnerability Review 2015, including various statistics on security issues fixed in the last year.

If you don't know about Secunia's services: They aggregate security issues from various sources into a single stream, or as they call it: they provide vulnerability intelligence.
In the past, this intelligence was available to anyone in a free newsletter or on their website. Recent changes, however, moved much of the useful information behind login and/or pay walls. This circumstance has also forced us at the Gentoo Security team to cease using their reports as references when initiating package updates due to security issues.

Coming back to their recently published document, there is one statistic that is of particular interest: Gentoo is listed as having the third largest number of vulnerabilities in a product in 2014.

from Secunia: Secunia Vulnerability Review 2015 (http://secunia.com/resources/reports/vr2015/)

Looking at the whole table, you'd expect at least one other Linux distribution with a similarly large pool of available packages, but you won't find any.

So is Gentoo less secure than other distros? tl;dr: No.

As Secunia's website does not let me see the actual "vulnerabilities" they have counted for Gentoo in 2014, there's no way to find out how these numbers came about. What I can see, though, are "Secunia advisories", which seem to be issued more or less for every GLSA we send. Comparing the number of posted Secunia advisories for Gentoo to those available for Debian 6 and 7 tells me something is rotten in the state of Denmark (scnr):
While there were 203 Secunia advisories posted for Gentoo in the last year, Debian 6 and 7 had (55+249=) 304, yet Debian would have to have fixed fewer than 105 vulnerabilities in those 304 advisories to be at least rank 21 and thus not included in the table above. That doesn't make much sense. Maybe issues in Gentoo's packages are counted for the distribution as well - no idea.

That aside, 2014 was a good year for Gentoo in terms of security: the huge backlog of issues waiting for an advisory was heavily reduced as our awesome team managed to clean up old issues and make them known to glsa-check in three wrap-up advisories - and then we also issued 239 others, more than ever since 2007. Thanks to everyone involved!

26 Mar 2015 7:44pm GMT

18 Mar 2015


Jan Kundrát: How to persuade us that you're going to be a good GSoC student

It is that time of the year again, and people are applying for Google Summer of Code positions. It's great to see a big crowd of newcomers. This article explains what sort of students are welcome in GSoC from the point of view of Trojitá, a fast Qt IMAP e-mail client. I suspect that many other projects within KDE share my views, but it's best to ask them. Hopefully, this post will help students understand what we are looking for, and assist in deciding what project to work for.

Finding a motivation

As a mentor, my motivation in GSoC is pretty simple - I want to attract new contributors to the project I maintain. This means that I value long-term sustainability above fancy features. If you are going to apply with us, make sure that you actually want to stick around. What happens when GSoC terminates? What happens when GSoC terminates and the work you've been doing is not ready yet? Do you see yourself continuing the work you've done so far? Or is it going to become an abandonware, with some cash in your pocket being your only reward? Who is going to maintain the code which you worked hard to create?

Selecting an area of work

This is probably the most important aspect of your GSoC involvement. You're going to spend three months of full time activity on some project, a project you might have not heard about before. Why are you doing this - is it only about the money, or do you already have a connection to the project you've selected? Is the project trying to solve a problem that you find interesting? Would you use the results of that project even without the GSoC?

My experience shows that it's best to find a project which fills a niche that you find interesting. Do you have a digital camera, and do you think that a random photo editor's interface sucks? Work on that, make the interface better. Do you love listening to music? Maybe your favorite music player has some annoying bug that you could fix. Maybe you could add a feature to, say, synchronize the playlist with your cell phone (this is just an example, of course). Do you like 3D printing? Help improve an existing software for 3D printing, then. Are you a database buff? Is there something you find lacking in, e.g., PostgreSQL?

Either way, it is probably a good idea to select something which you need to use, or want to use for some reason. It's of course fine to e.g. spend your GSoC term working on an astronomy tool even though you haven't used one before, but unless you really like astronomy, then you should probably choose something else. In case of Trojitá, if you have been using GMail's web interface for the past five years and you think that it's the best thing since sliced bread, well, chances are that you won't enjoy working on a desktop e-mail client.

Pick something you like, something which you enjoy working with.

Making a proposal

An excellent idea is to make yourself known in advance. This does not happen by joining the IRC channel and saying "I want to work on GSoC", or mailing us to let us know about this. A much better way of getting involved is through showing your dedication.

Try to play with the application you are about to apply for. Do you see some annoying bug? Fix it! Does it work well? Use the application more; you will find bugs. Look at the project's bug tracker, maybe there are some issues which people are hitting. Do you think that you can fix it? Diving into bug fixing is an excellent opportunity to get yourself familiar with the project's code base, and to make sure that our mentors know the style and pace of your work.

Now that you have some familiarity with the code, maybe you can already see opportunities for work besides what's already described on the GSoC ideas wiki page. That's fine - the best proposals usually come from students who have found them on their own. The list of ideas is just that, a list of ideas, not an exhaustive cookbook. There's usually much more what can be done during the course of the GSoC. What would be most interesting area for you? How does it fit into the bigger picture?

After you've thought about the area to work on, now it's time to write your proposal. Start early, and make sure that you talk about your ideas with your prospective mentors before you spend three hours preparing a detailed roadmap. Define the goals that you want to achieve, and talk with your mentors about them. Make sure that the work fits well with the length and style of the GSoC.

And finally, be sure that you stay open and honest with your mentoring team. Remember, this is not a contest of writing a best project proposal. For me, GSoC is all about finding people who are interested in working on, say, Trojitá. What I'm looking for are honest, fair-behaving people who demonstrate willingness to learn new stuff. On top of that, I like to accept people with whom I have already worked. Hearing about you for the first time when I read your GSoC proposal is not a perfect way of introducing yourself. Make yourself known in advance, and show us how you can help us make our project better. Show us that you want to become a part of that "we".

18 Mar 2015 6:02pm GMT

Patrick Lauer: Upgrading ThunderBird

With the recent update from the LongTimeSuffering / ExtendedSufferingRelease of Thunderbird from 24 to 31 we encountered some serious badness.

The best description of the symptoms would be "IMAP doesn't work at all"
On some machines the existing accounts would be disappeared, on others they would just be inert and never receive updates.

After some digging I was finally able to find the cause of this:
Too old config file.

Uhm ... what? Well - some of these accounts have been around since TB2. Some newer ones were enhanced by copying the prefs.js from existing accounts. And so there's a weird TB bug report that is mostly triggered by some bits being rewritten around Firefox 30, the config parser screwing up the translation of 'old' into 'new', and ... effectively ... IMAP being not whitelisted, thus by default blacklisted, and hilarity happens.

Should you encounter this bug you "just" need to revert to a prefs.js from before the update (sigh) and then remove all lines involving "capability.policy".
Then update and ... things work. Whew.
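
A sketch of that cleanup (back up the file first; the profile directory name varies per installation):

$ cd ~/.thunderbird/<profile>
$ cp prefs.js prefs.js.bak
$ sed -i '/capability\.policy/d' prefs.js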

Why not just remove the profile and start with a clean one, you say? Well ... for one, TB gets brutally, unusably slow if you have emails: just re-reading the mailbox content from a local fast IMAP server will take ~8h, and TB will not respond to user input during that time.
And then you manually have to go into eeeevery single subfolder so that TB remembers it is there and actually updates it. That's about one work-day per user lost to idiocy, so sed'ing the config file into compliance is the easy way out.
Thank you, Mozilla, for keeping our lives exciting!

18 Mar 2015 1:35am GMT

17 Mar 2015


Alexys Jacob: mongoDB 3.0.1

This is a much-awaited version bump coming to portage and I'm glad to announce it's made its way to the tree today !

Let me thank Tomas Mozes and Darko Luketic right away for their amazing help, feedback and patience !

mongodb-3.0.1

I introduced quite some changes in this ebuild which I wanted to share with you and warn you about. MongoDB upstream has stripped quite a bunch of things out of the main mongo core repository, which I have in turn split into ebuilds.

Major changes :

app-admin/mongo-tools

The new tools USE flag allows you to pull in a new ebuild named app-admin/mongo-tools, which installs the commands listed below. Obviously, you can now install just this package if you only need those tools on your machine.
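For example (a sketch - the package.use layout may differ on your system):

# echo "dev-db/mongodb tools" >> /etc/portage/package.use/mongodb
# emerge --ask --changed-use dev-db/mongodb

Or, if you only want the tools:

# emerge --ask app-admin/mongo-tools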

app-admin/mms-agent

The MMS agent now has some real version numbers, and I no longer have to host its source on Gentoo's infra host woodpecker. At the moment only the monitoring agent is available; should anyone request the backup one, I'll be glad to add support for it too.

dev-libs/mongo-c(xx)-driver

I took this opportunity to add dev-libs/mongo-cxx-driver to the tree and bump the mongo-c-driver one. Thank you to Balint SZENTE for his insight on this.

17 Mar 2015 1:46pm GMT

15 Mar 2015


Sebastian Pipping: (German) Gentoo at the Chemnitzer Linux-Tage 2015

We interrupt for a brief announcement:

Gentoo Linux will be represented with a booth at the Chemnitzer Linux-Tage on

Saturday, 21 and
Sunday, 22 March 2015.

https://chemnitzer.linux-tage.de/2015/de

Among other things, there will be Gentoo t-shirts, lanyards, and buttons to compile yourself.

15 Mar 2015 10:31pm GMT