13 Dec 2018

Planet Debian

Molly de Blanc: The OSD and user freedom

Some background reading

The relationship between open source and free software is fraught with people arguing about meanings and value. In spite of all the things we've built up around open source and free software, both ultimately reduce to software freedom.

Open source is about software freedom. It has been the case since "open source" was created.

In 1986 the Four Freedoms of Free Software (4Fs) were written. In 1998 Netscape set its source code free. Later that year a group of people got together and Christine Peterson suggested that, to avoid ambiguity, there was a "need for a better name" than free software. She suggested open source after open source intelligence. The name stuck and 20 years later we argue about whether software freedom matters to open source, because too many global users of the term have forgotten (or never knew) that some people just wanted another way to say software that ensures the 4Fs.

Once there was a term, the term needed a formal definition: how do we describe what open source is? That's where the Open Source Definition (OSD) comes in.

The OSD is a set of ten points that describe what an open source license looks like. The OSD came from the Debian Free Software Guidelines. The DFSG themselves were created to "determine if a work is free" and ought to be considered a way of describing the 4Fs.

Back to the present

I believe that the OSD is about user freedom. This is an abstraction from "open source is about free software." As I alluded to earlier, this is an intuition I have, a thing I believe, and an argument I'm having a very hard time making.

I think of free software as software that exhibits or embodies software freedom - it's software released under licenses that ensure the works they're attached to protect the 4Fs. This is all a tool, a useful tool, for protecting user freedom.

The line that connects the OSD and user freedom is not a short one: the OSD defines open source -> open source is about software freedom -> software freedom is a tool to protect user freedom. I think this is, however, a very valuable reduction we can make. The OSD is another tool in our tool box when we're trying to protect the freedom of users of computers and computing technology.

Why does this matter (now)?

I would argue that this has always mattered, and we've done a bad job of talking about it. I want to talk about this now because it's become increasingly clear that people simply never understood (or never even heard of) the connection between user freedom and open source.

I've been meaning to write about this for a while, and I think it's important context for everything else I say and write about in relation to the philosophy behind free and open source software (FOSS).

FOSS is a tool. It's not a tool about developmental models or corporate enablement - though some people and projects have benefited from the kinds of development made possible through sharing source code, and some companies have created very financially successful models based on it as well. In both historical and contemporary contexts, software freedom is at the heart of open source. It's not about corporate benefit, it's not about money, and it's not even really about development. Methods of development are tools being used to protect software freedom, which in turn is a tool to protect user freedom. User freedom, and what we get from that, is what's valuable.

Side note

At some future point, I'll address why user freedom matters, but in the meantime, here are some talks I gave (with Karen Sandler) on the topic.

13 Dec 2018 7:50pm GMT

Joachim Breitner: Thoughts on bootstrapping GHC

I am returning from the reproducible builds summit 2018 in Paris. The latest hottest thing within the reproducible-builds project seems to be bootstrapping: How can we build a whole operating system from nothing but source code, using very little, or even no, binary seeds or auto-generated files? This is actually a concern that is somewhat orthogonal to reproducibility: Bootstrappable builds help me in trusting programs that I built, while reproducible builds help me in trusting programs that others built.

And while they are making good progress bootstrapping a full system from just a C compiler written in Scheme and a Scheme interpreter written in C that can build each other (Janneke's mes project), with plans to build that on top of stage0, which starts from just 280 bytes of binary, the situation looks pretty bad when it comes to Haskell.

Unreachable GHC

The problem is that contemporary Haskell has only one viable implementation, GHC. And GHC, written in contemporary Haskell, needs GHC to be built. So essentially everybody out there either just downloads a binary distribution of GHC, or builds GHC from source using a possibly older (but not much older) version of GHC that they already have. Even distributions like Debian do nothing different: When they build the GHC package, the builders use, well, the GHC package.

There are other Haskell implementations out there. But if they are mature and actively developed, then they are implemented in Haskell themselves, often even using advanced features that only GHC provides. And even those are insufficient to build GHC itself, let alone the old and abandoned Haskell implementations.

In all these cases, at some point an untrusted binary is used. This is very unsatisfying. What can we do? I don't have the answers, but please allow me to outline some avenues of attack.

Retracing history

Obviously, even GHC has not existed since the beginning of time, and the first versions surely were built using something other than GHC. The oldest version of GHC for which we can find a release on the GHC web page is version 0.29 from July 1996. But the installation instructions say:

GHC 0.26 doesn't build with HBC. (It could, but we haven't put in the effort to maintain it.)

GHC 0.26 is best built with itself, GHC 0.26. We heartily recommend it. GHC 0.26 can certainly be built with GHC 0.23 or 0.24, and with some earlier versions, with some effort.

GHC has never been built with compilers other than GHC and HBC.

HBC is a Haskell compiler of which we can find the sources of one random version, only thanks to archive.org. It is written in C, so that should be the solution: Compile HBC, use it to compile GHC-0.29, and then, step by step, build every (major) version of GHC until today.

The problem is that it is non-trivial to build software from the 90s using today's compilers. I briefly looked at the HBC code base and had to change some files from using varargs.h to stdarg.h, and this is surely just one of many similar stumbling blocks in trying to build these tools. Oh, and even the hbc sources state:

# To get everything done: make universe
# It is impossible to make from scratch.
# You must have a running lmlc, to
# recompile it (of course).

At this point I ran out of time.

Going back, but doing it differently

Another approach is to go back in time, to some old version of GHC, but maybe not all the way to the beginning, and then try to use another, officially unsupported, Haskell compiler to build GHC. This is what rekado tried to do in 2017: He used the most contemporary implementation of Haskell in C, the Hugs interpreter. Using this, he compiled nhc98 (yet another abandoned Haskell implementation), with the hope of building GHC with nhc98. He made impressive progress back then, but ran into a problem where the runtime crashed. Maybe someone is interested in picking up from there?

Removing, simplifying, extending, in the present

Both approaches so far focus on building an old version of GHC. This adds complexity: other tools (the shell, make, yacc etc.) may behave differently now in ways that cause hard-to-debug problems. So maybe it is more fun and more rewarding to focus on today's GHC? (At this point I am starting to hypothesize.)

I said before that no other existing Haskell implementation can compile today's GHC code base, because of features like mutually recursive modules, the foreign function interface etc. Also, other existing Haskell implementations often come with a different, smaller set of standard libraries, while GHC assumes base, so we would have to build that as well...

But we don't need to build it all. Surely there is much code in base that is not used by GHC, and likewise much code in GHC that is not needed just to build GHC. By removing all of that, we reduce the amount of Haskell code that we need to feed to the other implementation.

The remaining code might use some features that are not supported by our bootstrapping implementation. Mutually recursive modules could be manually merged. GADTs that are only used for additional type safety could be replaced by normal ones, which might make some pattern matches incomplete. Syntactic sugar can be desugared. By simplifying the code base in that way, one might be able to produce a fork of GHC that is within reach of the likes of Hugs or nhc98.

And if there are features that are hard to remove, maybe we can extend the bootstrapping compiler or interpreter to support them? For example, it was mostly trivial to extend Hugs with support for the # symbol in names -- and we can be pragmatic and just allow it always, since we don't need a standards conforming implementation, but merely one that works on the GHC code base. But how much would we have to implement? Probably this will be more fun in Haskell than in C, so maybe extending nhc98 would be more viable?

Help from beyond Haskell?

Or maybe it is time to create a new Haskell compiler from scratch, written in something other than Haskell? Maybe some other language that is reasonably pleasant to write a compiler in (OCaml? Scala?), but that has the bootstrappability story already sorted out somehow.

But in the end, all variants come down to the same problem: Writing a Haskell compiler for full, contemporary Haskell as used by GHC is hard and really a lot of work -- if it were not, there would at least be alternative implementations written in Haskell out there. And as long as nobody comes along and does that work, I fear that we will continue to be unable to build our nice Haskell ecosystem from scratch. Which I find somewhat dissatisfying.

13 Dec 2018 1:02pm GMT

Junichi Uekawa: Already December.

Already December. Nice. I tried using tramp for a while but I am back to mosh. tramp is not usable when the ssh connection is not reliable.

13 Dec 2018 11:39am GMT

Keith Packard: newt

Newt: A Tiny Embeddable Python Subset

I've been helping teach robotics programming to students in grades 5 and 6 for a number of years. The class uses Lego models for the mechanical bits, and a variety of development environments, including Robolab and Lego Logo on both Apple ][ and older Macintosh systems. Those environments are quite good, but when the Apple ][ equipment died, I decided to try exposing the students to an Arduino environment so that they could get another view of programming languages.

The Arduino environment has produced mixed results. The general nature of a full C++ compiler and the standard Arduino libraries means that building even simple robots requires considerable typing, including a lot of punctuation and upper case letters. Further, the edit/compile/test process is quite long, making fixing errors slow. On the positive side, many of the students have gone on to use Arduinos in science research projects for middle and upper school (grades 7-12).

In other environments, I've seen Python used as an effective teaching language; the direct interactive nature invites exploration and provides rapid feedback for the students. It seems like a pretty good language to consider for early education -- "real" enough to be useful in other projects, but simpler than C++/Arduino has been. However, I haven't found a version of Python that seems suitable for the smaller microcontrollers I'm comfortable building hardware with.

How Much Python Do We Need?

Python is a pretty large language in embedded terms, but there's actually very little I want to try and present to the students in our short class (about 6 hours of language introduction and another 30 hours or so of project work). In particular, all we're using on the Arduino are:

Remembering my childhood Z-80 machine with its BASIC interpreter, I decided to think along those lines in terms of capabilities. I think I can afford more than 8kB of memory for the implementation, and I really do want to have "real" functions, including lexical scoping and recursion.

I'd love to make this work on our existing Arduino Duemilanove compatible boards. Those have only 32kB of flash and 2kB of RAM, so that might be a stretch...

What to Include

Exploring Python, I think there's a reasonable subset that can be built here. Included in that are:

What to Exclude

It's hard to describe all that hasn't been included, but here are some major items:

Implementation

Newt is implemented in C, using flex and bison. It includes the incremental mark/sweep compacting GC system I developed for my small scheme interpreter last year. That provides a memory system which is efficient and relatively simple to use.

The Newt "Compiler"

Instead of directly executing a token stream as my old BASIC interpreter did, Newt compiles to a byte-coded virtual machine. Of course, we have no memory, so we don't generate a parse tree and perform optimizations on that. Instead, code is generated directly in the grammar productions.

The Newt "Virtual Machine"

With the source compiled to byte codes, execution is pretty simple -- read a byte code, execute some actions related to it. To keep things simple, the virtual machine has a single accumulator register and a stack of other values.
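
To make the dispatch idea concrete, here is a minimal Python sketch of that style of loop. The opcode names and encoding are invented for illustration; they are not Newt's actual byte codes.

    def run(code, consts):
        # One accumulator register plus a stack of saved values,
        # as described above. The opcodes here are made up.
        acc = None
        stack = []
        pc = 0
        while pc < len(code):
            op, arg = code[pc]
            pc += 1
            if op == "CONST":    # load a constant into the accumulator
                acc = consts[arg]
            elif op == "PUSH":   # save the accumulator on the stack
                stack.append(acc)
            elif op == "ADD":    # acc = popped value + acc
                acc = stack.pop() + acc
            elif op == "PRINT":
                print(acc)
        return acc

    # Computes and prints 1 + 2:
    run([("CONST", 0), ("PUSH", 0), ("CONST", 1), ("ADD", 0), ("PRINT", 0)], [1, 2])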

Global and local variables are stored in 'frames', with each frame implemented as a linked list of atom/value pairs. This isn't terribly efficient in space or time, but it was a quick way to implement the required Python semantics for things like 'global'.

Lists and tuples are simple arrays in memory, just like CPython. I use the same sizing heuristic for lists that Python does; no sense inventing something new for that. Strings are C strings.
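
For reference, CPython's list over-allocation has historically looked roughly like the function below; the exact constants have varied between CPython versions, so treat this as an approximation rather than Newt's (or CPython's) precise rule.

    def overallocate(newsize):
        # Grow by about 1/8 of the requested size plus a small constant,
        # so that repeated append() calls are amortised O(1).
        return newsize + (newsize >> 3) + (3 if newsize < 9 else 6)

    # Successive allocations as a list grows past its capacity:
    # 4, 8, 16, 25, 35, 46, ...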

When calling a non-builtin function, a new frame is constructed that includes all of the formal names. Those get assigned values from the provided actuals and then the instructions in the function are executed. As new locals are discovered, the frame is extended to include them.
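
A rough Python model of that calling convention might look like this; the class and function names are hypothetical and only illustrate the frame handling described above (the real implementation is in C, and the frame itself is the linked list).

    class Frame:
        # Locals for one call, chained to an enclosing (e.g. global) frame.
        def __init__(self, parent=None):
            self.bindings = []            # (name, value) pairs
            self.parent = parent

        def define(self, name, value):    # extend the frame with a new name
            self.bindings.append((name, value))

        def lookup(self, name):
            for n, v in reversed(self.bindings):
                if n == name:
                    return v
            if self.parent is not None:
                return self.parent.lookup(name)
            raise NameError(name)

    def call(formals, actuals, body, global_frame):
        # Build a new frame binding every formal to its actual,
        # then run the body; new locals extend the same frame.
        frame = Frame(parent=global_frame)
        for name, value in zip(formals, actuals):
            frame.define(name, value)
        return body(frame)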

Testing

Any new language implementation really wants to have a test suite to ensure that the desired semantics are implemented correctly. One huge advantage for Newt is that we can cross-check the test suite by running it with Python.
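
For example, a test like the one below runs unchanged under stock Python, so its expected output can be recorded once and diffed against whatever Newt produces (how the Newt interpreter is invoked on a file is not shown here; this is just the kind of program, in the common subset, that exercises functions, recursion and 'global').

    # fib.py -- valid Python, intended to also be valid Newt
    calls = 0

    def fib(n):
        global calls
        calls = calls + 1
        if n < 2:
            return n
        return fib(n - 1) + fib(n - 2)

    print(fib(10))   # expect 55
    print(calls)     # expect 177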

Current Status

I think Newt is largely functionally complete at this point; I just finished adding the limited for statement capabilities this evening. I'm sure there are a lot of bugs to work out, and I expect to discover additional missing functionality as we go along.

I'm doing all of my development and testing on my regular x86 laptop, so I don't know how big the system will end up on the target yet.

I've written 4836 lines of code for the implementation and another 65 lines of Python for simple test cases. When compiled -Os for x86_64, the system is about 36kB of text and another few bytes of initialized data.

Links

The source code is available from my server at https://keithp.com/cgit/newt.git/, and also at github https://github.com/keith-packard/newt. It is licensed under the GPLv2 (or later version).

13 Dec 2018 7:55am GMT

12 Dec 2018

Planet Debian

Jonathan Dowland: Game Engine Black Book: DOOM

Fabien's proof copies

*proud smug face*

Today is Doom's 25th anniversary. To mark the occasion, Fabien Sanglard has written and released a book, Game Engine Black Book: DOOM.

It's a sequel of-sorts to "Game Engine Black Book: Wolfenstein 3D", which was originally published in August 2017 and has now been fully revised for a second edition.

I had the pleasure of proof-reading an earlier version of the Doom book and it's a real treasure. It goes into great depth as to the designs, features and limitations of PC hardware of the era, from the 386 that Wolfenstein 3D targeted to the 486 for Doom, as well as the peripherals available such as sound cards. It covers NeXT computers in similar depth. These were very important because Id Software made the decision to move all their development onto NeXT machines instead of developing directly on PC. This decision had some profound implications on the design of Doom as well as the speed at which they were able to produce it. I knew very little about the NeXTs and I really enjoyed the story of their development.

Detailed descriptions of those two types of personal computer set the scene at the start of the book, before Doom itself is described. The point of this book is to focus on the engine and it is explored sub-system by sub-system. It's fair to say that this is the most detailed description of Doom's engine that exists anywhere outside of its own source code. Despite being very familiar with Doom's engine, having worked on quite a few bits of it, I still learned plenty of new things. Fabien made special modifications to a private copy of Chocolate Doom in order to expose how various phases of the renderer worked. The whole book is full of full colour screenshots and illustrations.

The main section of the book closes with detailed descriptions of the architectures of the various home games console systems of the time to which Doom was ported, as well as the fate of each of those ports: some were impressive technical achievements, some were car crashes.

I'm really looking forward to buying a hard copy of the final book. I would recommend this to anyone who has fond memories of that era, or who is interested to know more about the low-level voodoo that was required to squeeze every ounce of performance possible out of the machines of the time.

Edit: Fabien has now added a "pay what you want" option for the ebook. If the existing retailer prices were putting you off, now you can pay him for his effort at a level you feel is reasonable. The PDF is also guaranteed not to be mangled by Google Books or anyone else.

12 Dec 2018 4:50pm GMT

Petter Reinholdtsen: Non-blocking bittorrent plugin for vlc

A few hours ago, a new and improved version (2.4) of the VLC bittorrent plugin was uploaded to Debian. This new version includes a complete rewrite of the bittorrent related code, which seems to make the plugin non-blocking. This means you can actually exit VLC even when the plugin seems to be unable to get the bittorrent streaming started. The new version also includes support for filtering playlists by file extension using command line options, if you want to avoid processing audio, video or images. The package is currently in Debian unstable, but should be available in Debian testing in two days. To test it, simply install it like this:

apt install vlc-plugin-bittorrent

After it is installed, you can try to use it to play a file downloaded live via bittorrent like this:

vlc https://archive.org/download/Glass_201703/Glass_201703_archive.torrent

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

12 Dec 2018 6:20am GMT

Matthew Palmer: Falsehoods Programmers Believe About Pagination

The world needs it, so I may as well write it.

12 Dec 2018 12:00am GMT

11 Dec 2018

Planet Debian

Louis-Philippe Véronneau: Montreal Bug Squashing Party - Jan 19th & 20th 2019

We are organising a BSP in Montréal in January! Unlike the one we organised for the Stretch release, this one will be over a whole weekend so hopefully folks from other provinces in Canada and from the USA can come.

So yeah, come and squash bugs with us! Montreal in January can be cold, but it's usually snowy and beautiful too.

A picture of Montréal during the winter

As always, the Debian Project is willing to reimburse 100 USD (or equivalent) of expenses to attend Bug Squashing Parties. If you can find a cheap flight or want to car pool with other people that are interested, going to Montréal for a weekend doesn't sound that bad, eh?

When: January 19th and 20th 2019

Where: Montréal, Eastern Bloc

Why: to squash bugs!

11 Dec 2018 11:15pm GMT

Reproducible builds folks: Reproducible Builds: Weekly report #189

Here's what happened in the Reproducible Builds effort between Sunday December 2 and Saturday December 8 2018:

Packages reviewed and fixed, and bugs filed

Test framework development

There were a number of updates to our Jenkins-based testing framework that powers tests.reproducible-builds.org this week, including:


This week's edition was written by Bernhard M. Wiedemann, Chris Lamb, Muz & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

11 Dec 2018 4:01pm GMT

Bits from Debian: Debian Cloud Sprint 2018

The Debian Cloud team held a sprint for the third time, hosted by Amazon at its Seattle offices from October 8th to October 10th, 2018.

We discussed the status of images on various platforms, especially in light of moving to FAI as the only method for building images on all the cloud platforms. The next topic was building and testing workflows, including the use of Debian machines for building, testing, storing, and publishing built images. This was partially caused by the move of all repositories to Salsa, which allows for better management of code changes, especially reviewing new code.

Recently we have made progress supporting cloud use cases; a grub and kernel optimised for cloud images help with reducing boot time and the required memory footprint. There is also growing interest in non-x86 images, and FAI can now build such images.

Discussion of support for LTS images, which started at the sprint, has now moved to the debian-cloud mailing list. We also discussed providing many image variants, which requires a more advanced and automated workflow, especially regarding testing. Further discussion touched upon providing newer kernels and software like cloud-init from backports. As interest in using secure boot is increasing, we might cooperate with other teams and use their work on UEFI to provide images with a signed boot loader and kernel.

Another topic of discussion was the management of accounts used by Debian to build and publish Debian images. SPI will create and manage such accounts for Debian, including user accounts (synchronised with Debian accounts). Buster images should be published using those new accounts. Our Cloud Team delegation proposal (prepared by Luca Fillipozzi) was accepted by the Debian Project Leader. Sprint minutes are available, including a summary and a list of action items for individual members.

Group photo of the participants in the Cloud Team Sprint

11 Dec 2018 11:30am GMT

Dirk Eddelbuettel: RQuantLib 0.4.7: Now with corrected Windows library

A new version 0.4.7 of RQuantLib reached CRAN and Debian. It follows up on the recent 0.4.6 release post, which contained a dual call for help: RQuantLib was (and still is!) in need of a macOS library build, but had also experienced issues on Windows.

Since then we set up a new (open) mailing list for RQuantLib and, I am happy to report, sorted that Windows issue out! In short, with the older g++ 4.9.3 imposed for R via Rtools, we must add an explicit C++11 flag at configuration time. Special thanks to Josh Ulrich for tireless and excellent help with testing these configurations, and to everybody else on the list!

QuantLib is a very comprehensive free/open-source library for quantitative finance, and RQuantLib connects it to the R environment and language.

This release re-enables most examples and tests that were disabled when Windows performance was shaky (due to, as we now know, a misconfiguration of ours for the Windows binary library used). With the exception of the AffineSwaption example when running Windows i386, everything is back!

The complete set of changes is listed below:

Changes in RQuantLib version 0.4.7 (2018-12-10)

  • Changes in RQuantLib tests:

    • Thanks to the updated #rwinlib/quantlib Windows library provided by Josh, all tests that previously exhibited issues have been re-enabled (Dirk in #126).
  • Changes in RQuantLib documentation:

    • The CallableBonds example now sets an evaluation date (#124).

    • Thanks to the updated #rwinlib/quantlib Windows library provided by Josh, examples that were set to dontrun are re-activated (Dirk in #126). AffineSwaption remains the sole holdout.

  • Changes in RQuantLib build system:

    • The src/Makevars.win file was updated to reflect the new layout used by the upstream build.

    • The -DBOOST_NO_AUTO_PTR compilation flag is now set.

As stated above, we are still looking for macOS help though. Please get in touch on-list if you can help build a library for Simon's recipes repo.

Courtesy of CRANberries, there is also a diffstat report for this release. As always, more detailed information is on the RQuantLib page. Questions, comments etc should go to the new rquantlib-devel mailing list. Issue tickets can be filed at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

11 Dec 2018 10:47am GMT

Julien Danjou: Podcast.__init__: Gnocchi, a Time Series Database for your Metrics

A few weeks ago, Tobias Macey contacted me as he wanted to talk about Gnocchi, the time series database I've been working on for the last few years.

It was a great opportunity to talk about the project, so I jumped on it! We talk about how Gnocchi came to life, how we built its architecture, the challenges we met, what kinds of trade-offs we made, etc.

You can listen to this episode here.

11 Dec 2018 9:50am GMT

Masayuki Hatta: Good ciphers in OpenJDK 10

Until recently, I didn't know that the list of supported Cipher Suites in OpenJDK differs widely between JDK versions. I used getSupportedCipherSuites() on OpenJDK 10 to get the following list, and checked the strength of encryption.

My criteria are:

  1. At least 128bit.
  2. No NULL ciphers.
  3. No anonymous auth ciphers.

Then I got the following. The red ones are supposed to be weak.

Name Encryption Mode
TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384 256bit
TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 128bit
TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 256bit
TLS_RSA_WITH_AES_256_GCM_SHA384 256bit
TLS_ECDH_ECDSA_WITH_AES_256_GCM_SHA384 256bit
TLS_ECDH_RSA_WITH_AES_256_GCM_SHA384 256bit
TLS_DHE_RSA_WITH_AES_256_GCM_SHA384 256bit
TLS_DHE_DSS_WITH_AES_256_GCM_SHA384 256bit
TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 128bit
TLS_RSA_WITH_AES_128_GCM_SHA256 128bit
TLS_ECDH_ECDSA_WITH_AES_128_GCM_SHA256 128bit
TLS_ECDH_RSA_WITH_AES_128_GCM_SHA256 128bit
TLS_DHE_RSA_WITH_AES_128_GCM_SHA256 128bit
TLS_DHE_DSS_WITH_AES_128_GCM_SHA256 128bit
TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384 256bit
TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384 256bit
TLS_RSA_WITH_AES_256_CBC_SHA256 256bit
TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA384 256bit
TLS_ECDH_RSA_WITH_AES_256_CBC_SHA384 256bit
TLS_DHE_RSA_WITH_AES_256_CBC_SHA256 256bit
TLS_DHE_DSS_WITH_AES_256_CBC_SHA256 256bit
TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA 256bit
TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA 256bit
TLS_RSA_WITH_AES_256_CBC_SHA 256bit
TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA 256bit
TLS_ECDH_RSA_WITH_AES_256_CBC_SHA 256bit
TLS_DHE_RSA_WITH_AES_256_CBC_SHA 256bit
TLS_DHE_DSS_WITH_AES_256_CBC_SHA 256bit
TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256 128bit
TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256 128bit
TLS_RSA_WITH_AES_128_CBC_SHA256 128bit
TLS_ECDH_ECDSA_WITH_AES_128_CBC_SHA256 128bit
TLS_ECDH_RSA_WITH_AES_128_CBC_SHA256 128bit
TLS_DHE_RSA_WITH_AES_128_CBC_SHA256 128bit
TLS_DHE_DSS_WITH_AES_128_CBC_SHA256 128bit
TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA 128bit
TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA 128bit
TLS_RSA_WITH_AES_128_CBC_SHA 128bit
TLS_ECDH_ECDSA_WITH_AES_128_CBC_SHA 128bit
TLS_ECDH_RSA_WITH_AES_128_CBC_SHA 128bit
TLS_DHE_RSA_WITH_AES_128_CBC_SHA 128bit
TLS_DHE_DSS_WITH_AES_128_CBC_SHA 128bit
TLS_EMPTY_RENEGOTIATION_INFO_SCSV 0bit
TLS_DH_anon_WITH_AES_256_GCM_SHA384 256bit anon
TLS_DH_anon_WITH_AES_128_GCM_SHA256 128bit anon
TLS_DH_anon_WITH_AES_256_CBC_SHA256 256bit anon
TLS_ECDH_anon_WITH_AES_256_CBC_SHA 256bit anon
TLS_DH_anon_WITH_AES_256_CBC_SHA 256bit anon
TLS_DH_anon_WITH_AES_128_CBC_SHA256 128bit anon
TLS_ECDH_anon_WITH_AES_128_CBC_SHA 128bit anon
TLS_DH_anon_WITH_AES_128_CBC_SHA 128bit anon
SSL_RSA_WITH_DES_CBC_SHA 56bit
SSL_DHE_RSA_WITH_DES_CBC_SHA 56bit
SSL_DHE_DSS_WITH_DES_CBC_SHA 56bit
SSL_DH_anon_WITH_DES_CBC_SHA 56bit anon
TLS_RSA_WITH_NULL_SHA256 0bit null
TLS_ECDHE_ECDSA_WITH_NULL_SHA 0bit null
TLS_ECDHE_RSA_WITH_NULL_SHA 0bit null
SSL_RSA_WITH_NULL_SHA 0bit null
TLS_ECDH_ECDSA_WITH_NULL_SHA 0bit null
TLS_ECDH_RSA_WITH_NULL_SHA 0bit null
TLS_ECDH_anon_WITH_NULL_SHA 0bit null
SSL_RSA_WITH_NULL_MD5 0bit null
TLS_KRB5_WITH_DES_CBC_SHA 56bit
TLS_KRB5_WITH_DES_CBC_MD5 56bit

11 Dec 2018 8:36am GMT

Louis-Philippe Véronneau: Razer Deathadder Elite Review

After more than 10 years of use and abuse, my old Microsoft IntelliMouse died a few months ago. The right click had been troublesome for a while, but it became so broken I couldn't reliably drag and drop anymore.

It's the first mouse I've killed and I don't know if I should feel proud or troubled by that fact. I guess I'm getting old enough that saying I've used the same mouse for 10 years straight sounds reasonable?

I considered getting a new IntelliMouse, as Microsoft is reviving the brand, but at the price the 3.0 model was selling for in August (~70 CAD), better options were available.

Picture of the mouse

After shopping online for a while, I ended up buying the Razer Deathadder Elite. Despite the very gamer oriented branding, I decided to get this one for its size and its build quality. I have very large hands and although I'm more of a "Tip Grip" type of person, I occasionally enjoy a "Palm Grip".

I have been using the mouse for around 3 months now and the only thing I really dislike is its default DPI and RGB settings. To me the DPI buttons were basically useless since anything beyond the lowest level was set too high.

The mouse also has two separate RGB zones for the scroll wheel and the Razer logo and I couldn't care less. As they are annoyingly set to a rainbow-colored shuffle by default, I turned them off.

Although Razer's program to modify mouse settings like DPI levels and RGB colors doesn't support Linux, the mouse is supported by OpenRazer. Settings are stored in the mouse directly, so you can set up OpenRazer in a throwaway VM, get the mouse configured the way you want, and never think about it again.

Let's hope this one lasts another 10 years!

11 Dec 2018 5:00am GMT

09 Dec 2018

Planet Debian

Benjamin Mako Hill: Awards and citations at computing conferences

I've heard a surprising "fact" repeated in the CHI and CSCW communities that receiving a best paper award at a conference is uncorrelated with future citations. Although it's surprising and counterintuitive, it's a nice thing to think about when you don't get an award and it's a nice thing to say to others when you do. I've thought it and said it myself.

It also seems to be untrue. When I tried to check the "fact" recently, I found a body of evidence that suggests that computing papers that receive best paper awards are, in fact, cited more often than papers that do not.

The source of the original "fact" seems to be a CHI 2009 study by Christoph Bartneck and Jun Hu titled "Scientometric Analysis of the CHI Proceedings." Among many other things, the paper presents a null result for a test of a difference in the distribution of citations across best paper awardees, nominees, and a random sample of non-nominees.

Although the award analysis is only a small part of Bartneck and Hu's paper, at least two papers have subsequently brought more attention, more data, and more sophisticated analyses to the question. In 2015, the question was asked by Jaques Wainer, Michael Eckmann, and Anderson Rocha in their paper "Peer-Selected 'Best Papers'-Are They Really That 'Good'?"

Wainer et al. build two datasets: one of papers from 12 computer science conferences with citation data from Scopus, and another of papers from 17 different conferences with citation data from Google Scholar. Because of concerns about parametric assumptions, Wainer et al. used a non-parametric rank-based technique to compare awardees to non-awardees. Wainer et al. summarize their results as follows:

The probability that a best paper will receive more citations than a non best paper is 0.72 (95% CI = 0.66, 0.77) for the Scopus data, and 0.78 (95% CI = 0.74, 0.81) for the Scholar data. There are no significant changes in the probabilities for different years. Also, 51% of the best papers are among the top 10% most cited papers in each conference/year, and 64% of them are among the top 20% most cited.
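
To make the rank-based comparison concrete, the "probability that a best paper will receive more citations than a non best paper" is the common-language effect size that falls out of a Mann-Whitney U test. Here is a minimal Python sketch with synthetic citation counts; it only illustrates the technique and is not the authors' code or data.

    # Illustration only: fake citation counts, not data from Wainer et al.
    import numpy as np
    from scipy.stats import mannwhitneyu

    rng = np.random.default_rng(0)
    awarded = rng.negative_binomial(5, 0.15, size=100)
    non_awarded = rng.negative_binomial(5, 0.25, size=1000)

    # With alternative="greater", the statistic is U for the first sample;
    # U / (n1 * n2) estimates P(a random awarded paper is cited more than
    # a random non-awarded paper), counting ties as 1/2.
    u, p = mannwhitneyu(awarded, non_awarded, alternative="greater")
    prob = u / (len(awarded) * len(non_awarded))
    print("P(awarded > non-awarded) = %.2f (p = %.3g)" % (prob, p))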

The question was also recently explored in a different way by Danielle H. Lee in her paper on "Predictive power of conference‐related factors on citation rates of conference papers" published in June 2018.

Lee looked at 43,000 papers from 81 conferences and built a regression model to predict citations. Taking into account a number of controls not considered in previous analyses, Lee finds that the marginal effect of receiving a best paper award on citations is positive, well-estimated, and large.

Why did Bartneck and Hu come to such different conclusions than later work?

Distribution of citations (received by 2009) of CHI papers published between 2004-2007 that were nominated for a best paper award (n=64), received one (n=12), or were part of a random sample of papers that did not (n=76).

My first thought was that perhaps CHI is different than the rest of computing. However, when I looked at the data from Bartneck and Hu's 2009 study (conveniently included as a figure in their original paper), I could see that they did find a higher mean among the award recipients compared to both nominees and non-nominees. The entire distribution of citations among award winners appears to be pushed upwards. Although Bartneck and Hu found an effect, they did not find a statistically significant effect.

Given the more recent work by Wainer et al. and Lee, I'd be willing to venture that the original null finding was a function of the fact that citations are a very noisy measure (especially over a 2-5 year post-publication period) and that the Bartneck and Hu dataset was small, with only 12 awardees out of 152 papers total. This might have caused problems because the statistical test the authors used was an omnibus test for differences in a three-group sample that was imbalanced heavily toward the two groups (nominees and non-nominees) in which there appears to be little difference. My bet is that the paper's conclusion on awards is simply an example of how a null effect is not evidence of a non-effect, especially in an underpowered dataset.

Of course, none of this means that award winning papers are better. Despite Wainer et al.'s claim that they are showing that award winning papers are "good," none of the analyses presented can disentangle the signalling value of an award from differences in underlying paper quality. The packed rooms one routinely finds at best paper sessions at conferences suggest that at least some of the additional citations received by award winners might come from the extra exposure the awards themselves provide. In the future, perhaps people can say something along these lines instead of repeating the "fact" of the non-relationship.


09 Dec 2018 8:20pm GMT