22 Jul 2018

feedPlanet Debian

Vasudev Kamath: SPAKE2 In Golang: Journey to Cryptoland begins

This post, and the series of SPAKE2-related posts to follow, is inspired by Jonathan Lange's series on SPAKE2, written when he ported it to Haskell; his implementation is also the reference I used for my implementation in Golang.

Brief Background

Before going into detail, I should explain why and how I came to implement SPAKE2 in Golang. The story starts a couple of months back, when I started contributing to *magic-wormhole.rs*, a Rust port of the original Python magic-wormhole project. You can read this LWN article to learn more about what magic-wormhole is.

While contributing, my friend Ramakrishnan Muthukrishnan suggested that I try porting magic-wormhole to Golang. I was not an expert Go programmer, but I understood the language basics and thought: why not use this project to deepen that understanding? And this is where it all started.

What is SPAKE2, and why is it used?

At this point we need to know why SPAKE2 is needed and how magic-wormhole uses it. SPAKE2 is the Simple Password Authenticated Key Exchange protocol. It allows two parties who share a weak password to derive a strong shared key, which the parties can then use to set up an encrypted and authenticated channel between them.
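The shape of the protocol is easy to see in a toy sketch. The following Python illustrates the SPAKE2 idea over a multiplicative integer group: each side blinds its Diffie-Hellman element with the password, the other side strips that blinding, and both arrive at the same shared secret. All constants here (the prime, generator, and blinding elements M and N) are made up for illustration; this is in no way a secure implementation.

```python
import hashlib
import secrets

# Toy SPAKE2 sketch over the multiplicative group of integers modulo a
# small prime. INSECURE: real implementations use Ed25519 or 1024+ bit
# integer groups; all constants below are invented for illustration.
P = 2 ** 13 - 1  # 8191, a Mersenne prime; the group is Z_P^*
G = 17           # a generator (assumed; fine for a toy demo)
M, N = 5, 7      # public blinding elements with unknown discrete logs

def blind(secret_exp, blind_elem, pw_int):
    # X* = g^x * M^pw : the password blinds the Diffie-Hellman element
    return (pow(G, secret_exp, P) * pow(blind_elem, pw_int, P)) % P

def unblind_and_derive(received, blind_elem, pw_int, secret_exp):
    # K = (Y* / N^pw)^x : strip the other side's blinding, then exponentiate
    unblinded = (received * pow(pow(blind_elem, pw_int, P), -1, P)) % P
    return pow(unblinded, secret_exp, P)

def session_key(k_shared):
    # Real SPAKE2 hashes the whole transcript; the toy hashes only K.
    return hashlib.sha256(str(k_shared).encode()).hexdigest()

pw = 1234  # the weak shared password, mapped to an integer

x = secrets.randbelow(P - 2) + 1  # side A's ephemeral secret
y = secrets.randbelow(P - 2) + 1  # side B's ephemeral secret
msg_a = blind(x, M, pw)           # A -> B
msg_b = blind(y, N, pw)           # B -> A

key_a = session_key(unblind_and_derive(msg_b, N, pw, x))
key_b = session_key(unblind_and_derive(msg_a, M, pw, y))
assert key_a == key_b  # both sides derived the same strong key
```

Both sides end up with g^(xy); an eavesdropper without the password cannot strip the blinding, and a wrong password simply yields mismatched keys.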

magic-wormhole uses SPAKE2 to negotiate a shared session key between the communicating parties, which magic-wormhole then uses to derive the different keys needed for different purposes.

So, after agreeing to implement magic-wormhole in Golang, I hit my first roadblock: there was no SPAKE2 implementation readily available in Go!

Enter Cryptography, and My Knowledge of It

Ram convinced me that it is easy to implement SPAKE2 in Go, and I agreed. But my knowledge of cryptography was limited. I knew that cryptography is

  • basically math with big numbers, relying on the fact that factoring the product of two large primes is hard for computers.
  • built on abelian groups and related concepts from number theory, which I studied during my academic years; but since I never learned a practical use case for them, that knowledge was gathering dust somewhere in my memory.

I had taken some theoretical courses on cryptography, but never thought of implementing anything myself. With this weak foundation, I set out on a new adventure.

Python SPAKE2 Implementation

Since magic-wormhole is implemented in Python, the Python SPAKE2 implementation is considered the reference for implementations in other languages. The SPAKE2 paper does not specify much about where or how the required public constants are defined, so implementers can take some liberty in defining them. The Python code uses two kinds of groups: the twisted Edwards curve group Ed25519, and multiplicative groups of integers modulo a prime, with moduli of 1024, 2048, and 3072 bits.

In the Python code, Warner himself defined Ed25519, the integer groups, and the related operations. The Rust code has only the Ed25519 group, but it is built using the curve25519-dalek library. The Haskell code also defines the group operations itself instead of depending on another library (such as cryptonite). So, as a first step, I started searching for a Go library equivalent to curve25519-dalek, since I had no clue what an elliptic curve is (forget elliptic curve groups; I didn't even know the basics).
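In hindsight, what all of these implementations define is a small group interface. A minimal sketch of what SPAKE2 needs from a group might look like the Python below; the class and method names are my own invention, not taken from any of the actual implementations.

```python
import secrets

# Minimal group interface SPAKE2 needs, sketched for the multiplicative
# integer group. Names here are my own, not from any real implementation;
# the prime and generator are toy values.
class IntegerGroup:
    """Multiplicative group of integers modulo a prime."""

    def __init__(self, prime, generator):
        self.p = prime
        self.g = generator

    def base_point(self):
        # The fixed generator element of the group.
        return self.g

    def random_scalar(self):
        # A random exponent; real code samples uniformly in the group order.
        return secrets.randbelow(self.p - 2) + 1

    def scalar_mult(self, elem, scalar):
        # elem^scalar; for an elliptic curve this would be scalar * point.
        return pow(elem, scalar, self.p)

    def add(self, a, b):
        # The group operation (modular multiplication here, point addition
        # on a curve).
        return (a * b) % self.p

    def inverse(self, elem):
        # Needed by SPAKE2 to strip the password blinding.
        return pow(elem, -1, self.p)

group = IntegerGroup(8191, 17)
a, b = group.random_scalar(), group.random_scalar()
# The property SPAKE2 relies on: g^(a+b) == g^a * g^b
lhs = group.scalar_mult(group.base_point(), a + b)
rhs = group.add(group.scalar_mult(group.base_point(), a),
                group.scalar_mult(group.base_point(), b))
assert lhs == rhs
```

An Ed25519 group would implement the same handful of operations using curve point addition and scalar multiplication, which is why the protocol code can stay the same across groups.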

First Try: Brute Force

I have this bad habit of tackling problems with brute force; sometimes it works, but most of the time it just exhausts me and takes me nowhere. So, true to habit, I started looking for an Ed25519 curve operations library (without actually knowing what those operations are or how they work). I tried to read through curve25519-dalek, but in vain; nothing entered my head. I found the ed25519 package for Go written by Adam Langley, but it eventually turned out to be a signature package. Inside it I found an internal package called edwards25519, which seemed to have some operations defined, but I could neither understand it nor figure out why it was made internal. I later even took a dig at embedding edwards25519 as part of my implementation of the Ed25519 group, but finally had to drop it in favor of my own version; that story will be part of another post in this series.

Conclusion

During all of this I was constantly in touch with Ram, and the first thing he told me was to slow down a bit and start from scratch. That was the learning point for me. In short, I can say the following:

Nothing can be done in a single day. Before you code, understand the basic concepts, and then build from there. As they say, you can't build a stable house on a weak foundation.

In the next post in this series I will write about what I learned about elliptic curves and elliptic curve groups, followed by my experiments with integer groups, and finally the lessons learned and decisions I had to make while writing gospake2.

22 Jul 2018 4:37pm GMT

Joachim Breitner: WebGL, Fragment Shader, GHCJS and reflex-dom

What a potpourri of topics... too long to read? Click here!

On the side and very slowly I am working on a little game that involves breeding spherical patterns… more on that later (maybe). I want to implement it in Haskell, but have it run in the browser, so I reached for GHCJS, the Haskell-to-Javascript compiler.

WebGL for 2D images

A crucial question was: How do I draw a generative pattern onto an HTML canvas element? My first attempt was to calculate the pixel data into a bit array and use putImageData() to push it onto the canvas, but it was prohibitively slow. I might have done something stupid along the way, and some optimization might have helped, but I figured that I should not calculate the color of each pixel myself, but leave this to those best at it: the browser and (ideally) the graphics card.

So I took this as an opportunity to learn about WebGL, in particular fragment shaders. The term shader is misleading, and should mentally be replaced with "program", because it is no longer (just) about shading. WebGL is intended to do 3D graphics, and one sends a bunch of coordinates for triangles, a vertex shader and a fragment shader to the browser. The vertex shader places the vertices, while the fragment shader colors each pixel on the visible triangles. This is a gross oversimplification, but that is fine: We only really care about the last step, and if our coordinates always just define a rectangle that fills the whole canvas, and the vertex shader does not do anything interesting, then what remains is an HTML canvas that takes a program (written in the GL shader language), which is run for each pixel and calculates the color to be shown at that pixel.
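To make that mental model concrete, here is a toy sketch (plain Python, not actual WebGL or GLSL) of what "a program run once per pixel" means: the "shader" is a pure function from normalized pixel coordinates to a color, and the "canvas" just applies it to every pixel.

```python
# Toy model of a fragment shader: a pure function from normalized pixel
# coordinates (u, v) in [0, 1] to an RGB color, applied to every pixel
# of a "canvas". Illustration only; a real fragment shader is written in
# GLSL and run massively in parallel on the GPU.

def fragment_shader(u, v):
    """Color for the pixel at normalized coordinates (u, v)."""
    # A simple generative pattern: horizontal/vertical gradients in the
    # red and green channels, plus an 8x8 checkerboard in blue.
    r = int(255 * u)
    g = int(255 * v)
    b = 255 if (int(u * 8) + int(v * 8)) % 2 == 0 else 0
    return (r, g, b)

def render(width, height, shader):
    """The 'GPU': run the shader once per pixel, row by row."""
    return [
        [shader(x / (width - 1), y / (height - 1)) for x in range(width)]
        for y in range(height)
    ]

image = render(64, 64, fragment_shader)
```

The interesting point is that the pattern is entirely determined by the one pure function, which is exactly what makes "generate the shader source from Haskell" an attractive design.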

Perfect! Just what I need. Dynamically creating a program that renders the pattern I want to show is squarely within Haskell's strengths.

A reflex-dom widget

As my game UI grows, I will at some point no longer want to deal with raw DOM access, events etc., and the abstraction that makes creating such user interfaces painless is Functional Reactive Programming (FRP). One of the main mature implementations is Ryan Trinkle's reflex-dom, and I want to use this project to get more hands-on experience with it.

Based on my description above, once I hide all the details of the WebGL canvas setup, what I really have is a widget that takes a text string (representing the fragment shader), and creates a DOM element for it. This would suggest a function with this type signature

fragmentShaderCanvas ::
    MonadWidget t m =>
    Dynamic t Text ->
    m ()

where the input text is dynamic, meaning it can change over time (and the canvas will be updated accordingly). In fact, I also want to specify attributes for the canvas (especially width and height), and if the supplied fragment shader source is invalid and does not compile, I want to get my hands on the error messages, as provided by the browser. So I ended up with this:

fragmentShaderCanvas ::
    MonadWidget t m =>
    Map Text Text ->
    Dynamic t Text ->
    m (Dynamic t (Maybe Text))

which very pleasingly hides all the complexity of setting up the WebGL context from the user. This is abstraction at excellence!

I published this widget in the hackage.haskell.org/package/reflex-dom-fragment-shader-canvas package on Hackage.

A Demo

And because reflex-dom makes it so nice, I created a little demo program; it is essentially a fragment shader playground!

On https://nomeata.github.io/reflex-dom-fragment-shader-canvas/ you will find a text area where you can edit the fragment shader code. All your changes are immediately reflected in the canvas on the right, and in the list of warnings and errors below the text area. The code for this demo is pretty short.

A few things could be improved, of course: For example, the canvas element should have its resolution automatically adjusted to its actual size on screen, but it is somewhat tricky to find out when and if a DOM element has changed size. Also, the WebGL setup should be rewritten to be more defensive, and to fail more gracefully if things go wrong.

BTW, if you need a proper shader playground, check out Shadertoy.

Development and automatic deployment

The reflex authors all use Nix as their development environment, and if you want to use reflex-dom, then using Nix is certainly the path of least resistance. But I would like to point out that it is not a necessity, and you can stay squarely in cabal land if you want:

My setup uses Travis CI to build GHCJS and the dependencies, caches them, builds the program and - if successful - uploads the result to GitHub Pages. In fact, the demo linked above is produced that way. Just push, and five minutes later the changes are available online!

I know about rumors that Herbert's excellent multi-GHC PPA repository might provide .deb packages with GHCJS prebuilt soon. Once that happens, and maybe ghcjs-base and reflex get uploaded to Hackage, then the power of reflex-based web development will be conveniently available to all Haskell developers (even those who shunned Nix so far), and I am looking forward to many cool projects coming out of that.

22 Jul 2018 2:41pm GMT

21 Jul 2018

feedPlanet Debian

Russ Allbery: Review: The Power of Habit

Review: The Power of Habit, by Charles Duhigg

Publisher: Random House
Copyright: 2012, 2014
Printing: 2014
ISBN: 0-679-60385-9
Format: Kindle
Pages: 366

One problem with reading pop psychology is that one runs into a lot of books like this one: summaries of valid psychological research that still leave one with the impression that the author was more interested in being dramatic and memorable than accurate. But without reproducing the author's research, it's hard to tell whether that fear is well-grounded or unfair, so one comes away feeling vaguely dissatisfied and grumpy.

Or at least I do. I might be weird.

As readers of my book reviews may have noticed, and which will become more apparent shortly, I'm going through another round of reading "self-help" books. This time, I'm focusing on work habits, concentration, and how to more reliably reach a flow state. The Power of Habit isn't on that topic but it's adjacent to it, so I picked it up when a co-worker recommended it.

Duhigg's project here is to explain habits, both good ones and bad ones, at a scientific level. He starts with a memorable and useful model of the habit loop: a cue triggers a routine, which results in a reward. The reward reinforcement strengthens the loop, and the brain starts internalizing the routine, allowing it to spend less cognitive energy and essentially codifying the routine like a computer program. With fully-formed habits (one's daily bathing routine, for example), the routine is run by a small, tuned part of your brain and requires very little effort, which is why we can have profound shower thoughts about something else entirely. That example immediately shows why habits are valuable and why our brain is so good at creating them: they reduce the mental energy required for routine actions so that we can spend that energy elsewhere.

The problem, of course, is that this mechanism doesn't first consult our conscious intent. It works just as well for things that we do repeatedly but may not want to automatically do, like smoking a pack of cigarettes a day. It's also exploitable; you are not the only person involved in creating your habits. Essentially every consumer product company is trying to get you to form habits around their products, often quite successfully. Duhigg covers marketing-generated habits as well as social and societal habits, the science behind how habits can be changed, and the evidence that often a large collection of apparently unrelated habits are based in a "keystone habit" that, if changed, makes changing all of the other habits far easier.

Perhaps the most useful part of this book is Duhigg's discussion of how to break the habit loop through substitution. When trying to break habits, our natural tendency is to consciously resist the link between cue and routine. This is possible, but it's very hard. It requires making an unconscious process conscious, and we have a limited amount of conscious decision-making energy available to us in a day. More effective than fighting the cues is to build a replacement habit with the same cue, but this requires careful attention to the reward stage so that the substituted habit will complete the loop and have a chance of developing enough strength to displace the original habit.

So far, so good. All of this seems consistent with other psychological research I've read (particularly the reasons why trying to break habits by willpower alone is rarely successful). But there are three things that troubled me about this book and left me reluctant to recommend it or rely on it.

The first is that a useful proxy for checking the research of a book is to look at what the author says about a topic that one already knows something about. Here, I'm being a bit unfair by picking on a footnote, but Duhigg has one anecdote about a woman with a gambling problem that has the following definitive-sounding note attached:

It may seem irrational for anyone to believe they can beat the house in a casino. However, as regular gamblers know, it is possible to consistently win, particularly at games such as blackjack. Don Johnson of Bensalem, Pennsylvania, for instance, won a reported $15.1 million at blackjack over a six-month span starting in 2010. The house always wins in the aggregate because so many gamblers bet in a manner that doesn't maximize their odds, and most people do not have enough money to see themselves through losses. A gambler can consistently win over time, though, if he or she has memorized the complicated formulas and odds that guide how each hand should be played. Most players, however, don't have the discipline or mathematical skills to beat the house.

This is just barely this side of being outright false, and is dangerously deceptive to the point of being casino propaganda. And the argument from anecdote is both intellectually bogus (a lot of people gamble, which means that not only is it possible that someone will go on that sort of winning streak through pure chance, it is almost guaranteed) and disturbingly similar to how most points are argued in this book.
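That "almost guaranteed" is easy to demonstrate with a quick simulation (my own illustration, not from the book): give a few thousand gamblers a game with a small house edge and the luckiest of them still ends up well ahead, purely by chance, even though every single one of them has a negative expected value.

```python
import random

# Simulate many gamblers playing even-money bets with a ~1% house edge,
# roughly the edge of blackjack under good rules. The edge, bet size,
# and player counts are illustrative assumptions, not from the book.
random.seed(42)  # fixed seed so the illustration is reproducible

HOUSE_EDGE = 0.01
N_GAMBLERS = 2_000
N_BETS = 500
BET = 100  # dollars per hand

def career(rng):
    """Net dollar result of one gambler's 500 even-money bets."""
    p_win = (1 - HOUSE_EDGE) / 2  # 0.495: slightly worse than a coin flip
    wins = sum(1 for _ in range(N_BETS) if rng.random() < p_win)
    return BET * (2 * wins - N_BETS)

results = [career(random) for _ in range(N_GAMBLERS)]
average = sum(results) / N_GAMBLERS
best = max(results)

# On average each gambler loses about HOUSE_EDGE * BET * N_BETS = $500,
# yet the single luckiest gambler finishes thousands of dollars ahead --
# which is exactly why "this one person won big" proves nothing.
print(f"average result: {average:.0f}, best result: {best}")
```

The aggregate loses, the outlier wins, and the outlier is the one who gets written up in a footnote.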

If one assumes an effectively infinite deck (in other words, assume each card dealt is an independent event), there is no complicated rule you can memorize to beat the house at blackjack. The best that you can do is to reduce the house edge to 1-2% depending on the exact local rules. Wikipedia has a comprehensive discussion if you want the details. Therefore, what Duhigg has to be talking about is counting cards (modifying your play based on what cards have already been dealt and therefore what cards are remaining in the deck).

However, and Duhigg should know this if he's going to make definitive statements about blackjack, US casinos except in Atlantic City (every other example in this book is from the US) can and do simply eject players who count cards. (There's a legal decision affecting Atlantic City that makes the story more complicated there.) They also use other techniques (large numbers of decks, frequent reshuffling) to make counting cards far less effective. Even if you are very good at counting cards, this is not a way to win "consistently over time" because you will be told to stop playing. Counting cards is therefore not a matter of memorizing complicated formulas and odds. It's a cat-and-mouse game against human adversaries to disguise your technique enough to not be ejected while still maintaining an edge over the house. This is rather far from Duhigg's description.

Duhigg makes another, if less egregious, error by uncritically accepting the popular interpretation of the Stanford marshmallow experiment. I'll spare you my usual rant about this because The Atlantic has now written it for me. Surprise surprise, new research shows that the original experiment was deeply flawed in its choice of subjects and that the effect drastically decreases once one controls for social and economic background.

So that's one problem: when writing on topics about which I already have some background, he makes some significant errors. The second problem is related: Duhigg's own sources in this book seem unconvinced by the conclusions he's drawing from their research.

Here, I have to give credit to Duhigg for publishing his own criticism, although you won't find it if you read only the main text of the book. Duhigg has extensive end notes (distinct from the much smaller number of footnotes that elaborate on some point) in which he provides excerpts from fact-checking replies he got from the researchers and interview subjects in this book. I read them all after finishing the rest of the book, and I thought a clear pattern emerged. After reading early drafts of portions of the book, many of Duhigg's sources replied with various forms of "well, but." They would say that the research is accurately portrayed, but Duhigg's conclusion isn't justified by the research. Or that Duhigg described part of the research but left out other parts that complicated the picture. Or that Duhigg has simplified dangerously. Or that Duhigg latched on to an ancillary part of their research or their story and ignored the elements that they thought were more central. Note after note reads as a plea to add more nuance, more complication, less certainty, and fewer sweeping conclusions.

Science is messy. Psychological research is particularly messy because humans are very good at doing what they're "supposed" to do, or changing behavior based on subtle cues from the researcher. And most psychological research of the type Duhigg is summarizing is based on very small sample sizes (20-60 people is common) drawn from very unrepresentative populations (often college students who are conveniently near the researchers and cheap to bribe to do weird things while being recorded). When those experiments are redone with larger sample sizes or more representative populations, often they can't be replicated. This is called the replication crisis.

Duhigg is not a scientist. He's a reporter. His job is to take complicated and messy stories and simplify them into entertaining, memorable, and understandable narratives for a mass audience. This is great for making difficult psychological research more approachable, but it also inherently involves amplifying tentative research into rules of human behavior and compelling statements about how humans work. Sometimes this is justified by the current state of the research. Sometimes it isn't. Are Duhigg's core points in this book justified? I don't know and, based on the notes, neither does Duhigg, but none of that uncertainty is on the pages of the main text.

The third problem is less foundational, but seriously hurt my enjoyment of The Power of Habit as a reader: Duhigg's examples are horrific. The first chapter opens with the story of a man whose brain was seriously injured by a viral infection and could no longer form new memories. Later chapters feature a surgeon operating on the wrong side of a stroke victim's brain, a woman who destroyed her life and family through gambling, and a man who murdered his wife in his sleep believing she was an intruder. I grant that these examples are memorable, and some are part of a long psychological tradition of learning about the brain from very extreme examples, but these were not the images that I wanted in my head while reading a book about the science of habits. I'm not sure this topic should require the reader to brace themselves against nightmares.

The habit loop, habit substitution, and keystone habits are useful concepts. Capitalist manipulation of your habits is something everyone should be aware of. There are parts of this book that seem worth knowing. But there's also a lot of uncritical glorification of particular companies and scientific sloppiness and dubious assertions in areas I know something about. I didn't feel like I could trust this book, or Duhigg. The pop psychology I like the best is either written by practicing scientists who (hopefully) have a feel for which conclusions are justified by research and which aren't, or admits more questioning and doubt, usually by personalizing the research and talking about what worked for the author. This is neither, and I therefore can't bring myself to recommend it.

Rating: 6 out of 10

21 Jul 2018 4:00am GMT

20 Jul 2018

feedPlanet Grep

Dries Buytaert: Exploring Cape Cod

This past weekend Vanessa and I took our much-anticipated annual weekend trip to Cape Cod. It's always a highlight for us. This time we set out to explore a new part of the Cape, since we've already extensively explored the Upper Cape.

Stage Harbor lighthouse

We found The Platinum Pebble Inn in West Harwich by way of TripAdvisor, a small luxury bed and breakfast. The owners, Mike and Stefanie Hogan, were extremely gracious hosts. Not only are they running the Inn and serving up delicious breakfasts, they would ask what we wanted to do, and then created our adventure with helpful tips for the day.

On our first day we went on a 35 km (22 miles) bike ride out to Chatham, making stops along the way for ice cream, shopping and lobster rolls.

Bike ride

While we were at the Chatham Pier Fish Market, we watched the local fishermen offload their daily catch with sea lions and seagulls hovering to get some lunch of their own. Once we arrived back at the Inn we were able to cool off in the pool and relax in the late afternoon sun.

Unloading fish at the Chatham Pier Fish Market

Saturday we were up for a hike, so the Hogans sent us to the Dune Shacks Trail in Provincetown. We were told to carry in whatever we would need as there weren't any facilities on the beach. So we stopped at an authentic French bakery in Wellfleet to get lunch to take on our hike - the baguette took me right back to being in France, and while I was tempted by the pain au chocolat and pain aux raisins, I didn't indulge. I had too much ice cream already.

After we picked up lunch, we continued up Route 6 and parked on the side of the road to begin our journey into the woods and up the first of many, intense sand dunes. The trails were unmarked but there are visible paths that pass the Dune Shacks that date back to the early 1900's. After 45 minutes we finally reached the beach and ocean.

Dune Shacks Trail in Provincetown
Dune Shacks Trail in Provincetown

We rounded out the weekend with an afternoon sail of the Nantucket Sound. It was a beautiful day and the conditions lent themselves to a very relaxing sailing experience.

Sailing
Sailing
Sailing

It was a great weekend!

20 Jul 2018 1:41pm GMT

Wouter Verhelst: PKCS#11 v2.20

By way of experiment, I've just enabled the PKCS#11 v2.20 implementation in the eID packages for Linux, but for now only in the packages in the "continuous" repository. In the past, enabling this has caused issues; there have been a few cases where Firefox would deadlock when PKCS#11 v2.20 was enabled, rather than the (very old and outdated) v2.11 version that we support by default. We believe we have identified and fixed all outstanding issues that caused such deadlocks, but it's difficult to be sure. So, if you have a Belgian electronic ID card and are willing to help me out and experiment a bit, here's something I'd like you to do:

The installed version of the eid-mw-libs or libbeidpkcs11-0 package should be v4.4.3-42-gf78d786e or higher.

One of the new features in version 2.20 of the PKCS#11 API is that it supports hotplugging of card readers; in version 2.11 of that API, this is not the case, since it predates USB (like I said, it is outdated). So, try experimenting with hotplugging your card reader a bit; it should generally work. Try leaving it installed and using your system (and webbrowser) for a while with that version of the middleware; you shouldn't have any issues doing so, but if you do I'd like to know about it.

Bug reports are welcome as issues on our github repository.

Thanks!

20 Jul 2018 10:18am GMT

19 Jul 2018

feedPlanet Grep

Dries Buytaert: How Drupal continues to evolve towards an API-first platform

It's been 12 months since my last progress report on Drupal core's API-first initiative. Over the past year, we've made a lot of important progress, so I wanted to provide another update.

Two and a half years ago, we shipped Drupal 8.0 with a built-in REST API. It marked the start of Drupal's evolution to an API-first platform. Since then, each of the five new releases of Drupal 8 introduced significant web service API improvements.

While I was an early advocate for adding web services to Drupal 8 five years ago, I'm even more certain about it today. Important market trends endorse this strategy, including integration with other technology solutions, the proliferation of new devices and digital channels, the growing adoption of JavaScript frameworks, and more.

In fact, I believe that this functionality is so crucial to the success of Drupal, that for several years now, Acquia has sponsored one or more full-time software developers to contribute to Drupal's web service APIs, in addition to funding different community contributors. Today, two Acquia developers work on Drupal web service APIs full time.

Drupal core's REST API

While Drupal 8.0 shipped with a basic REST API, the community has worked hard to improve its capabilities, robustness and test coverage. Drupal 8.5 shipped 5 months ago and included new REST API features and significant improvements. Drupal 8.6 will ship in September with a new batch of improvements.

One Drupal 8.6 improvement is the move of the API-first code to the individual modules, instead of the REST module providing it on their behalf. This might not seem like a significant change, but it is. In the long term, all Drupal modules should ship with web service APIs rather than depending on a central API module to provide their APIs - that forces them to consider the impact on REST API clients when making changes.

Another improvement we've made to the REST API in Drupal 8.6 is support for file uploads. If you want to understand how much thought and care went into REST support for file uploads, check out API-first Drupal: file uploads. It's hard work to make file uploads secure, support large files, optimize for performance, and provide a good developer experience.

JSON API

Adopting the JSON API module into core is important because JSON API is increasingly common in the JavaScript community.

We had originally planned to add JSON API to Drupal 8.3, which didn't happen. When that plan was originally conceived, we were only beginning to discover the extent to which Drupal's Routing, Entity, Field and Typed Data subsystems were insufficiently prepared for an API-first world. It's taken until the end of 2017 to prepare and solidify those foundational subsystems.

The same shortcomings that prevented the REST API to mature also manifested themselves in JSON API, GraphQL and other API-first modules. Properly solving them at the root rather than adding workarounds takes time. However, this approach will make for a stronger API-first ecosystem and increasingly faster progress!

Despite the delay, the JSON API team has been making incredible strides. In just the last six months, they have released 15 versions of their module. They have delivered improvements at a breathtaking pace, including comprehensive test coverage, better compliance with the JSON API specification, and numerous stability improvements.

The Drupal community has been eager for these improvements, and the usage of the JSON API module has grown 50% in the first half of 2018. The fact that module usage has increased while the total number of open issues has gone down is proof that the JSON API module has become stable and mature.

As excited as I am about this growth in adoption, the rapid pace of development, and the maturity of the JSON API module, we have decided not to add JSON API as an experimental module to Drupal 8.6. Instead, we plan to commit it to Drupal core early in the Drupal 8.7 development cycle and ship it as stable in Drupal 8.7.

GraphQL

For more than two years I've advocated that we consider adding GraphQL to Drupal core.

While core committers and core contributors haven't made GraphQL a priority yet, a lot of great progress has been made on the contributed GraphQL module, which has been getting closer to its first stable release. Despite not having a stable release, its adoption has grown an impressive 200% in the first six months of 2018 (though its usage is still measured in the hundreds of sites rather than thousands).

I'm also excited that the GraphQL specification has finally seen a new edition that is no longer encumbered by licensing concerns. This is great news for the Open Source community, and can only benefit GraphQL's adoption.

Admittedly, I don't know yet if the GraphQL module maintainers are on board with my recommendation to add GraphQL to core. We purposely postponed these conversations until we stabilized the REST API and added JSON API support. I'd still love to see the GraphQL module added to a future release of Drupal 8. Regardless of what we decide, GraphQL is an important component to an API-first Drupal, and I'm excited about its progress.

OAuth 2.0

A web services API update would not be complete without touching on the topic of authentication. Last year, I explained how the OAuth 2.0 module would be another logical addition to Drupal core.

Since then, the OAuth 2.0 module was revised to exclude its own OAuth 2.0 implementation, and to adopt The PHP League's OAuth 2.0 Server instead. That implementation is widely used, with over 5 million installs. Instead of having a separate Drupal-specific implementation that we have to maintain, we can leverage a de facto standard implementation maintained by others.

API-first ecosystem

While I've personally been most focused on the REST API and JSON API work, with GraphQL a close second, it's also encouraging to see that many other API-first modules are being developed.

Conclusion

Hopefully, you are as excited for the upcoming release of Drupal 8.6 as I am, and all of the web service improvements that it will bring. I am very thankful for all of the contributions that have been made in our continued efforts to make Drupal API-first, and for the incredible momentum these projects and initiatives have achieved.

Special thanks to Wim Leers (Acquia) and Gabe Sullice (Acquia) for contributions to this blog post and to Mark Winberry (Acquia) and Jeff Beeman (Acquia) for their feedback during the writing process.

19 Jul 2018 11:06am GMT

08 Nov 2011

feedfosdem - Google Blog Search

papupapu39 (papupapu39)'s status on Tuesday, 08-Nov-11 00:28 ...

papupapu39 · http://identi.ca/url/56409795 #fosdem #freeknowledge #usamabinladen · about a day ago from web.

08 Nov 2011 12:28am GMT

05 Nov 2011

feedfosdem - Google Blog Search

Write and Submit your first Linux kernel Patch | HowLinux.Tk ...

FOSDEM (Free and Open Source Development European Meeting) is a European event centered around Free and Open Source software development. It is aimed at developers and all interested in the Free and Open Source news in the world. ...

05 Nov 2011 1:19am GMT

03 Nov 2011

feedfosdem - Google Blog Search

Silicon Valley Linux Users Group – Kernel Walkthrough | Digital Tux

FOSDEM (Free and Open Source Development European Meeting) is a European event centered around Free and Open Source software development. It is aimed at developers and all interested in the Free and Open Source news in the ...

03 Nov 2011 3:45pm GMT

26 Jul 2008

feedFOSDEM - Free and Open Source Software Developers' European Meeting

Update your RSS link

If you see this message in your RSS reader, please correct your RSS link to the following URL: http://fosdem.org/rss.xml.

26 Jul 2008 5:55am GMT

25 Jul 2008

feedFOSDEM - Free and Open Source Software Developers' European Meeting

Archive of FOSDEM 2008

These pages have been archived.
For information about the latest FOSDEM edition please check this url: http://fosdem.org

25 Jul 2008 4:43pm GMT

09 Mar 2008

feedFOSDEM - Free and Open Source Software Developers' European Meeting

Slides and videos online

Two weeks after FOSDEM and we are proud to publish most of the slides and videos from this year's edition.

All of the material from the Lightning Talks has been put online. We are still missing some slides and videos from the Main Tracks but we are working hard on getting those completed too.

We would like to thank our mirrors: HEAnet (IE) and Unixheads (US) for hosting our videos, and NamurLUG for quick recording and encoding.

The videos from the Janson room were live-streamed during the event and are also online on the Linux Magazin site.

We are having some synchronisation issues with Belnet (BE) at the moment. We're working to sort these out.

09 Mar 2008 3:12pm GMT