06 Oct 2015

Planet Grep

Dries Buytaert: The coming era of data and software transparency

Algorithms are shaping what we see and think -- even what our futures hold. The order of Google's search results, the people Twitter recommends we follow, or the way Facebook filters our newsfeed can impact our perception of the world and drive our actions. But think about it: we have very little insight into how these algorithms work or what data is used. Given that algorithms guide much of our lives, how do we know that they don't have a bias, withhold information, or have bugs with negative consequences for individuals or society? This is a problem that we aren't talking about enough, and that we have to address in the next decade.

Open Sourcing software quality

In the past several weeks, Volkswagen's emissions crisis has raised new concerns around "cheating algorithms" and the overall need to validate the trustworthiness of companies. One of the many suggestions to solve this problem was to open-source the software around emissions and automobile safety testing (Dave Bollier's post about the dangers of proprietary software is particularly good). While open-sourcing alone will not fix software's accountability problems, it's certainly a good start.

As self-driving cars emerge, checks and balances on software quality will become even more important. Companies like Google and Tesla are the benchmarks of this next wave of automotive innovation, but all it will take is one safety incident to intensify the pressure in the debate over software-driven versus human-driven cars. The idea of "autonomous things" has ignited a huge discussion around regulating artificially intelligent algorithms. Elon Musk went as far as stating that artificial intelligence is our biggest existential threat and donated millions to make artificial intelligence safer.

While making important algorithms available as Open Source does not guarantee security, it can only make the software more secure, not less. As Eric S. Raymond famously stated, "given enough eyeballs, all bugs are shallow." When more people look at code, mistakes are corrected faster, and software gets stronger and more secure.

Less "Secret Sauce" please

Automobiles aside, there is possibly a larger scale, hidden controversy brewing on the web. Proprietary algorithms and data are big revenue generators for companies like Facebook and Google, whose services are used by billions of internet users around the world. With that type of reach, there is big potential for manipulation -- whether intentional or not.

There are many examples as to why. Recently Politico reported on Google's ability to influence presidential elections. Google can build bias into the results returned by its search engine, simply by tweaking its algorithm. As a result, certain candidates can display more prominently than others in search results. Research has shown that Google can shift voting preferences by 20 percent or more (up to 80 percent in certain groups), and potentially flip the margins of elections worldwide. The scary part is that none of these voters know what is happening.

And, when Facebook's 2014 "emotional contagion" mood manipulation study was exposed, people were outraged at the thought of being tested at the mercy of a secret algorithm. Researchers manipulated the news feeds of 689,003 users to see if more negative-appearing news led to an increase in negative posts (it did). Although the experiment was found to comply with the terms of service of Facebook's user agreement, there was a tremendous outcry around the ethics of manipulating people's moods with an algorithm.

In theory, providing greater transparency into algorithms using an Open Source approach could avoid a crisis. However, in practice, it's not very likely this shift will happen, since these companies profit from the use of these algorithms. A middle ground might be allowing regulatory organizations to periodically check the effects of these algorithms to determine whether they're causing society harm. It's not crazy to imagine that governments will require organizations to give others access to key parts of their data and algorithms.

Ethical early days

The explosion of software and data can either have horribly negative effects, or transformative positive effects. The key to the ethical use of algorithms is providing consumers, academics, governments and other organizations access to data and source code so they can study how and why their data is used, and why it matters. This could mean that despite the huge success and impact of Open Source and Open Data, we're still in the early days. There are few things about which I'm more convinced.

06 Oct 2015 5:04pm GMT

Joram Barrez: Multi-Tenancy with separate database schemas in Activiti

One feature request we've heard in the past is that of running the Activiti engine in a multi-tenant way where the data of a tenant is isolated from the others. Certainly in certain cloud/SaaS environments this is a must. A couple of months ago I was approached by Raphael Gielen, who is a student at […]

06 Oct 2015 3:21pm GMT

05 Oct 2015


Mattias Geniar: The Ping That Makes a Sound


After so many years, you'd think the ping command couldn't get any better, right? Well, it may be common knowledge for a lot of sysadmins out there, but I sure as hell didn't know it yet: ping can make a sound when it can or can't ping a target.

This is super useful if you're reconfiguring a server's networking stack, and want to have immediate feedback when the server starts or stops ping'ing.

You can leave this ping running in the background and focus on your main task; it'll give you a clear but subtle beep when the server starts/stops replying to ping requests.

Note: this works natively on Mac OSX; for Windows you'll need something like bping, and on Linux there's only limited audible support (ping can only make a sound when it can ping the target).

Beep when a server starts ping'ing

Situation: a server is down, you're trying to fix the networking and want an immediate heads-up when it starts to ping.

$ ping -a

The -a (lower case a) causes an audible bell whenever the target replies to a ping.

-a Audible.

Include a bell (ASCII 0x07) character in the output when any packet
is received. This option is ignored if other format options are present.

Beep when a server stops ping'ing

Note: only supported on Mac OSX as far as I'm aware.

Situation: you're reconfiguring a server that's currently online, and want to know it the moment one IP stops replying (because you shut down an interface and it didn't come back up, for instance).

$ ping -A

In this case we use the uppercase A parameter to get a beep whenever the target stops replying to our ping.

-A Audible.

Output a bell (ASCII 0x07) character when no packet is received
before the next packet is transmitted. To cater for round-trip times that
are longer than the interval between transmissions, further missing packets
cause a bell only if the maximum number of unreceived packets has increased.

Now I no longer need to keep a visual eye on the status of my pings; I can just rely on the sound coming through my headset.

Hooray for obsessive efficiency!

The post The Ping That Makes a Sound appeared first on ma.ttias.be.

Related posts:

  1. How To Use A Jumphost in your SSH Client Configurations Jumphosts are used as intermediate hops between your actual SSH...
  2. Monitor All HTTP Requests (like TCPdump) On a Linux Server with httpry Wouldn't it be really cool if you could run a...
  3. CentOS 7 NetworkManager Keeps Overwriting /etc/resolv.conf In CentOS or Red Hat Enterprise Linux (RHEL) 7, you...

05 Oct 2015 9:00pm GMT

02 Oct 2015


Dieter Plaetinck: Interview with Matt Reiferson, creator of NSQ

I'm a fan of the NSQ message processing system written in golang. I've studied the code, transplanted its diskqueue code into another project, and have used NSQ by itself. The code is well thought out, organized and written.

Inspired by the book Coders at Work and the systems live podcast, I wanted to try something I've never done before: spend an hour talking to Matt Reiferson - the main author of NSQ - about software design and Go programming patterns, and post the video online for whoever might be interested.

We talked about Matt's background, starting the NSQ project at Bitly as his first (!) Go project, (code) design patterns in NSQ and the nsqd diskqueue in particular and the new WAL (write-ahead-log) approach in terms of design and functionality.

You can watch it on YouTube.

Unfortunately, the video got cut a bit short. Basically, in the cut-off part I asked about the new Go convention that prevents importing packages that live in an internal subdirectory. Matt wants to make it very clear that certain implementation details are not supported (by the NSQ team) and may change, whereas my take was that it's annoying when I want to reuse some code I find in a project. We ultimately both agreed that while a bit clunky, it gets the job done, and is probably a bit crude because there is also no proper package management yet.

I'd like to occasionally interview other programmers in a similar way and post the videos on my site later.

02 Oct 2015 8:25am GMT

Les Jeudis du Libre: Mons, October 15: Controlling your digital camera with free software

This Thursday, 15 October 2015 at 7 p.m., the 42nd Mons session of the Jeudis du Libre de Belgique will take place.

The topic of this session: Controlling your digital camera with free software

Theme: Photography | Hardware & Embedded

Audience: everyone

Speaker: Robert Viseur (UMONS / CETIC / independent photographer)

Venue: Université de Mons, Campus Plaine de Nimy, avenue Maistriau, Grands Amphithéâtres, Auditoire Curie (see this map on the UMONS website, or the OSM map).

Participation is free and only requires registration by name, preferably in advance, or at the door. Please indicate your intention by signing up via http://jeudisdulibre.fikket.com/. As usual, this 42nd Mons session will end with a friendly drink. The event is supported by the Fédération Wallonie-Bruxelles as part of the Quinzaine Numérique, and in particular the Quinzaine Numérique @ Mons (see the full calendar of activities).

The Jeudis du Libre in Mons are also supported by our partners: CETIC, Normation, OpenSides, MeaWeb, NextLab, Phonoid and Creative Monkeys.

If you are interested in this monthly series, don't hesitate to check the agenda and subscribe to the mailing list so you automatically receive the announcements.

As a reminder, the Jeudis du Libre are meant to be spaces for exchange around free software topics. The Mons meetings take place every third Thursday of the month and are organized in the buildings of, and in collaboration with, the Mons universities and colleges involved in training computer scientists (UMONS, HEH and Condorcet), with the support of the non-profit LoLiGrUB, which is active in promoting free software.

Description: Free software keeps growing. Photographers, often loyal to proprietary software with a well-established reputation, now have interesting alternatives for processing photographs (Gimp, UFraw, Rawtherapee, Hugin, …). Beyond these popular applications, however, free software opens up other opportunities for automating image capture and thus for building artistic installations or custom camera rigs.

Several families of tools exist today. The first consists of software for capturing photographs from a computer's webcam. The second consists of software for remotely controlling cameras over USB. The third consists of alternative firmwares that extend the capabilities of the official firmware, up to and including programming the camera body itself. Combined with the hardware and software resources available (notably under GNU/Linux), these reusable tools open up many possibilities for technology enthusiasts and photographers.

Concrete examples, using gphoto2 and CHDK among others, will be presented.

02 Oct 2015 4:36am GMT

01 Oct 2015


Frank Goossens: Everyone has JavaScript, right?

So you think everyone has JavaScript? Look at this little flowchart by Stuart Langridge and kindly reconsider:



01 Oct 2015 11:33am GMT

Frank Goossens: The big what-kind-of-coffee-drinker-are-you test

At my wife's request I took the "Wat zegt uw koffie over u?" ("What does your coffee say about you?") test on De Standaard. The result should surprise no one;

coffee for me, not bad


01 Oct 2015 7:37am GMT

29 Sep 2015


Frank Goossens: Apple to start charging you for Apple Music unless …

Apparently you have to go through a less-than-obvious procedure if you don't want Apple to automatically start charging you for access to Apple Music;

Back in June, Apple Music was born. […] It was free for the first three months […] Whether you're loving the service or not, there's a good chance you may have forgotten that you entered your bank details when you signed up, ready for the paid subscription to start on 30 September. Here's how to stop the automatic monthly payments. Only if you want to of course.
(source: BBC Newsbeat)

Isn't it ironic (really) that a company that prides itself on the simplicity and usability of its products requires users to jump through hoops to disable automatic payments?


29 Sep 2015 6:21am GMT

28 Sep 2015


Mattias Geniar: The Otto Project: Meet the Successor to Vagrant


A tagline like meet the successor to Vagrant is a pretty hard one to claim, but seeing as Otto comes from the same developers as Vagrant (HashiCorp), it's probably true.

The Otto project website has a lot of buzzwords like maximize productivity, zero configuration, automatic X, automatic Y, ... I'm skeptical of anything that claims this kind of magic power, but the creators of Otto have had a pretty solid track record so far with Vagrant, Consul, ...

Otto automatically builds an infrastructure and deploys your application using industry standard tooling and best practices, so you don't have to.

There are hundreds of how-to guides to deploy applications. Unfortunately, all of these how-to guides are usually copying and pasting outdated information onto a server to barely get your application running.

Deploying an application properly with industry best practices in security, scalability, monitoring, and more requires an immense amount of domain knowledge. The solution to each of these problems usually requires mastering a new tool.

Due to this complexity, many developers completely ignore best practices and stick to the simple how-to guides.

Otto solves all these problems and automatically manages all of the various software solutions for you to have a best-in-class infrastructure. You only need to learn Otto, and Otto does the rest.
Introduction to Otto

Otto, you have my attention.

So is Vagrant dead then? Not entirely, but we'll have to switch eventually.

Vagrant is a mature, battle-hardened piece of technology. It has been in use by millions of users for many years. We don't want to reinvent the wheel, so we've taken the best parts of Vagrant and used them within Otto to manage development environments automatically for the user.


Due to the above, we're committed to continuing to improve Vagrant and releasing new versions of Vagrant for years to come. But for the everyday developer, Otto should replace Vagrant over time.
The Future of Vagrant

For developers, Otto may be the next thing to use. For DevOps folks testing Puppet/Chef/Ansible scripts locally, you'll be using Vagrant for a couple more years.

The post The Otto Project: Meet the Successor to Vagrant appeared first on ma.ttias.be.

Related posts:

  1. Running Kali Linux as a Vagrant Box (virtual machine) Here's the simplest way to start a Kali Linux virtual...
  2. Automating the Unknown While Config Management isn't new as a concept, it is...
  3. The Worst Possible DevOps Advice It's not a link bait title, bear with me as...

28 Sep 2015 7:22pm GMT

Mattias Geniar: Boot in single user mode on CentOS 7 / RHEL 7


This guide will show you how to boot into single user mode on a CentOS 7 server. You'll need single user boot to recover a corrupt file system, reset the root password, ...

First, reboot your server and when you enter the Kernel Selection menu, press e to modify the parameters to boot the kernel.

[Screenshot: CentOS 7 single user mode, step 1]

The next screen shows a confusing wall of kernel parameters. It'll look like this.

[Screenshot: CentOS 7 single user mode, step 2]

Scroll down until you find the actual kernel line. It starts with linux16 /vmlinuz-... and will span a couple of lines. You're now looking for the ro keyword in that kernel line, which would start the OS with a read-only (ro) file system.

[Screenshot: CentOS 7 single user mode, step 3]

Use your arrow keys to go to the ro line and replace it with rw init=/sysroot/bin/bash. The result should look like this. If that's the case, press ctrl+x to boot the kernel with those options.
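For instance, the edited line might look roughly like this (the kernel version and root device below are purely illustrative and will differ on your machine):

linux16 /vmlinuz-3.10.0-229.el7.x86_64 root=/dev/mapper/centos-root ro rhgb quiet

becomes

linux16 /vmlinuz-3.10.0-229.el7.x86_64 root=/dev/mapper/centos-root rw init=/sysroot/bin/bash rhgb quiet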

[Screenshot: CentOS 7 single user mode, step 4]

If everything went fine, you're now in a limited shell with access to the entire filesystem. To make things easier, you can chroot the filesystem so you can access all your known files/directories with the same paths.

[Screenshot: CentOS 7 single user mode, step 5]

After you've typed chroot /sysroot/, you'll find your familiar files in /etc, /usr, /var, ...

If you're done working in single user mode, reboot again by pressing ctrl+alt+del.

The post Boot in single user mode on CentOS 7 / RHEL 7 appeared first on ma.ttias.be.

Related posts:

  1. Enable or Disable Service At Boot on CentOS 7 This post will show you how to enable or disable...
  2. Reinstall the Linux Kernel on CentOS or RHEL One would expect that yum's reinstall command would do the...
  3. Deep Insights: The Kernel Boot Process You've got to love collaboration. Especially on documentation. The github...

28 Sep 2015 6:57pm GMT

Frank Goossens: Wordfeud, server maintenance & monetization

So I'm a Wordfeud addict (you know, Scrabble without the TM infringement) and the game has been down since this morning. Their Twitter account reads;

#Wordfeud servers are going down for maintenance around 06:00 CET. We expect 2-3 hours of downtime.

That message is 9 hours old and there's still no Wordfeud, so they must be facing major problems. Which raises the question: is Bertheussen IT really into server technology? And wouldn't they invest more in it if paying customers could simply stop paying when the service level gets too bad, instead of paying a one-time fee?


28 Sep 2015 1:16pm GMT

Dries Buytaert: Acquia raises $55 million series G

Today, we're excited to announce that Acquia has closed a $55 million financing round, bringing total investment in the company to $188.6 million. Led by new investor Centerview Capital Technology, the round includes existing investors New Enterprise Associates (NEA) and Split Rock Partners.

We are in the middle of a big technological and economic shift, driven by the web, in how large organizations and industries operate. At Acquia, we have set out to build the best platform for helping organizations run their businesses online, invent new ways of doing business, and maximize their digital impact on the world. What Acquia does is not at all easy -- or cheap -- but we've made good strides towards that vision. We have become the backbone for many of the world's most influential digital experiences and continue to grow fast. In the process, we are charting new territory with a unique business model rooted in Drupal and Open Source.

A fundraising round like this helps us scale our global operations, sales and marketing, as well as the development of our solutions for building, delivering and optimizing digital experiences. It also gives us flexibility. I'm proud of what we have accomplished so far, and I'm excited about the big opportunity ahead of us.

28 Sep 2015 12:59pm GMT

27 Sep 2015


Steven Wittens: Yak Shading

Data-Driven Geometry

MathBox primitives need to take arbitrary data, transform it on the fly, and render it as styled geometry based on their attributes. Done as much as possible on the graphics hardware.

Three.js can render points, lines, triangles, but only with a few predetermined strategies. The alternative is to write your own vertex and fragment shader and do everything from scratch. Each new use case means a new ShaderMaterial with its own properties, so-called uniforms. If the stock geometry doesn't suffice, you can make your own triangles by filling a raw BufferGeometry and assigning custom per-vertex attributes. Essentially, to leverage GPU computation with Three.js -- most engines, really -- you have to ignore most of it.
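As a rough sketch of that from-scratch route (Three.js circa 2015; the attribute aPhase and uniform uTime are invented for illustration, and depending on the exact Three.js revision custom attributes may also need to be declared on the material):

// A bare triangle with one custom per-vertex attribute and one uniform,
// rendered through a hand-written ShaderMaterial.
var geometry = new THREE.BufferGeometry();
geometry.addAttribute('position', new THREE.BufferAttribute(new Float32Array(3 * 3), 3));
geometry.addAttribute('aPhase',   new THREE.BufferAttribute(new Float32Array(3), 1));

var material = new THREE.ShaderMaterial({
  uniforms: { uTime: { type: 'f', value: 0 } },   // the so-called uniforms
  vertexShader: [
    'uniform float uTime;',
    'attribute float aPhase;   // custom per-vertex attribute',
    'void main() {',
    '  vec3 p = position + vec3(0.0, sin(uTime + aPhase), 0.0);',
    '  gl_Position = projectionMatrix * modelViewMatrix * vec4(p, 1.0);',
    '}',
  ].join('\n'),
  fragmentShader: 'void main() { gl_FragColor = vec4(1.0); }',
});

var mesh = new THREE.Mesh(geometry, material);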

Virtual Geometry

Shader computations are mainly rote transforms. For example, if you want to draw a line between two points, you'll have to make a long rectangle, made out of two triangles. But this simple idea gets complicated quickly once you add corner joins, depth scaling, 3D clipping, and so on. Doing this to an entire data set at once is what GPUs are made for, through vertex shaders which transform points.

Vertex Shader

Vertex shaders can only do 1-to-1 mappings. This isn't a problem by itself. You can use a gather approach to do N-to-1 mapping, where all the necessary data is pre-arranged into attribute arrays, with the data repeated and interleaved per vertex as necessary.

Vertex Shader Attributes

The proper tool for this is a geometry shader: a program that creates new geometry by N-to-M mapping of data, like making triangles out of points. WebGL doesn't support geometry shaders, won't any time soon, but you can emulate them with texture samplers. A texture image is just a big typed array, and you have random access unlike vertex attributes.

Yak Shading

The original geometry acts only as a template, directing the shader's real data lookups. You lose some performance this way, but it's not too bad. Any procedural sampling pattern works, drawing 1 shape or 10,000. As textures can be rendered to, not just from, this also enables transform feedback, using the result of one pass to create new geometry in another.

All geometry rendered this way is 100% static as far as Three.js is concerned. New values are uploaded directly to GPU memory just before the rendering starts. The only gotcha is handling variable size input, because reallocation is costly. Pre-allocating a larger texture is easy, but clipping off the excess geometry in an O(1) fashion on the JS side is hard. In most cases there's the workaround of dynamically generating degenerate triangles in a shader, which collapse down to invisible edges or points. This way, MathBox can accept variable sized arrays in multiple dimensions and will do its best to minimize disruption. If attribute instancing were more standard in WebGL, this wouldn't be such an issue, but as it stands, the workarounds are very necessary.
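A hedged sketch of the texture-as-geometry trick, assuming the shader runs in a Three.js ShaderMaterial (which supplies the matrices); the names dataTexture, dataSize and vertexIndex are mine, not MathBox's actual internals:

var virtualVertexShader = [
  'uniform sampler2D dataTexture;   // packed point data, one point per texel',
  'uniform vec2 dataSize;           // texture dimensions in texels',
  'attribute float vertexIndex;     // template geometry: just a running index',
  '',
  'void main() {',
  '  // Turn the 1D index into a 2D texel coordinate (+0.5 to hit texel centers)',
  '  float row = floor(vertexIndex / dataSize.x);',
  '  float col = vertexIndex - row * dataSize.x;',
  '  vec2 uv = (vec2(col, row) + 0.5) / dataSize;',
  '',
  '  // Gather the real position; excess template vertices can be collapsed',
  '  // onto a single point here to form degenerate, invisible triangles.',
  '  vec4 data = texture2D(dataTexture, uv);',
  '  gl_Position = projectionMatrix * modelViewMatrix * vec4(data.xyz, 1.0);',
  '}',
].join('\n');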

Vertex Party

If you squint very hard it looks a bit like React for live geometry. Except instead of a diffing algorithm, there's a few events, some texture uploads, a handful of draw calls and then an idle CPU. It's ideal for drawing thousands of things that look similar and follow the same rules. It can handle not just basic GL primitives like lines or triangles, but higher level shapes like 3D arrows or sprites.

My first prototype of this was my last Christmas demo. It was messy and tedious to make, especially the shaders, but it performed excellently: the final scene renders ~200,000 triangles. Despite being a layer around Three.js … around WebGL … around OpenGL … around a driver … around a GPU … performance has far exceeded my expectations. Even complex scenes run great on my Android phone, easily 10x faster than MathBox 1, in some cases more like 1000x.

Of course compared to cutting edge DirectX or OpenCL (not a typo), this is still very limited. In today's GPUs, the charade of attributes, geometries, vertices and samples has mostly been stripped away. What remains is buffers and massive operations on them, exposed raw in new APIs like AMD's Mantle and iOS's Metal. My vertex trickery acts like a polyfill, virtualizing WebGL's capabilities to bring them closer to the present. It goes a bit beyond what geometry shaders can provide, but still lacks many useful things like atomic append queues or stream compaction.

For large geometries, the setup cost can be noticeable though. Shader compilation time also grows with transform complexity, doubly so on Windows where shaders are recompiled to HLSL / Direct3D. This makes drawing ops the heaviest MathBox primitives to spawn and reallocate. You could call this the MathBox version of the dreaded 'paint' of HTML. Once warmed up though, most other properties can be animated instantly, including the data being displayed: this is the opposite of how HTML works. Hence you can mostly spawn things ahead of time, revealing and hiding objects as needed, with minimal overhead and jank at runtime.

This all relies on carefully constructed shaders which have to be wired up in all their individual permutations. This needed to be solved programmatically, which is where we go last.

27 Sep 2015 7:00am GMT

Steven Wittens: Shader­Graph 2

Functional GLSL

For MathBox 1, I already needed to generate GL shaders programmatically. So I built ShaderGraph. You gave it snippets of GLSL code, each with a function inside. It would connect them for you, matching up the inputs and outputs. It supported directed graphs of calls with splits and joins, which were compiled down into a single shader. To help build up the graph progressively, it came with a simple chainable factory API.

It worked despite being several steps short of being a real compiler and having gaps in its functionality. It also committed the cardinal sin of regex code parsing, and hence accepted only a small subset of GLSL. All in all it was a bit of a happy mess, weaving vertex and fragment shaders together in a very ad-hoc fashion. Each snippet could only appear once in a shader, as it was still just a dumb code concatenator. I needed a proper way to compose shaders.

[Interactive diagram: select a node to view its code]

Instanced Data Flow

Enter ShaderGraph 2. It's a total rewrite using Chris Dickinson's bona fide glsl-parser. It still parses snippets and connects them into a directed graph to be compiled. But a snippet is now a full GLSL program whose main() function can have open inputs and outputs. What's more, it now also links code in the proper sense of the word: linking up module entry points as callbacks.

Basically, snippets can now have inputs and outputs that are themselves functions. These connections don't obey the typical data flow of a directed graph and instead are for function calls. A callback connection provides a path along which calls are made and values are returned.

Snippets can be instanced multiple times, including their uniforms, attributes and varyings (if requested). Uniforms are bound to Three.js-style registers as you build the graph incrementally. So it's a module system, sort of, which enables functional shader building. Using callbacks as micro-interfaces feels very natural in practice, especially with bound parameters. You can decorate existing functions, e.g. turning a texture sampler into a convolution filter.
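For instance, a snippet that declares a sampler callback and decorates it into a crude three-tap blur might look like this (a sketch: the names getSample and blur, and the step size, are made up):

var blurSnippet = [
  '// Callback input: a bodiless prototype, linked in later by ShaderGraph',
  'vec4 getSample(vec2 uv);',
  '',
  '// Whatever sampler gets linked in is decorated into a 3-tap blur',
  'vec4 blur(vec2 uv) {',
  '  vec2 delta = vec2(1.0 / 512.0, 0.0);',
  '  return (getSample(uv - delta) + getSample(uv) + getSample(uv + delta)) / 3.0;',
  '}',
].join('\n');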

// Build shader graph
var shader = shadergraph.shader();

GLSL Composer

If you know GLSL, you can write ShaderGraph snippets: there is no extra syntax, you just add inputs and outputs to your main() function. You can use in/out/inout qualifiers or return a value. If there's no main function, the last defined function is exported.

vec3 callback(vec3 arg1, vec3 arg2);

To create a callback input in a snippet, you declare a function prototype in GLSL without a body. The function name and signature is used to create the outlet.

To create a callback output, you use the factory API. You can .require() a snippet directly, or bundle up a subgraph with .callback().….join(). In the latter case, the function signature includes all unconnected inputs and outputs inside. Outlets are auto-matched by name, type and order, with the semantics from v1 cleaned up.

Building basic pipes is easy: .pipe(…).pipe(…).…, passing in a snippet or factory. For forked graphs, you can .fan() (1-to-N) or .split() (N-to-N), use .next() to begin a new branch, and then .join() at the end. There's a few other operations, nothing crazy.
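A hedged sketch of those forked patterns (the snippet names are invented; only the chaining operations come from the description above):

var fx = shadergraph.shader()
  .callback()
    .pipe('getSample')     // unconnected ins/outs become the callback's signature
  .join()
  .pipe('blur')            // receives the bundled subgraph as its sampler callback
  .fan()
    .pipe('luminance')
  .next()
    .pipe('chroma')
  .join()
  .pipe('combine');

A plain pipe/require chain, for comparison: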

var v = shadergraph.shader();

// Graphs generated elsewhere
v.pipe(vertexColor(color, mask));
v.require(vertexPosition(position, material, map, 2, stpq));

v.pipe('line.position',    uniforms, defs);
v.pipe('project.position', uniforms);

By connecting pairs you create a functional data flow that compiles down to vanilla GLSL. It's not functional programming in GLSL, it just enables useful run-time assembly patterns, letting the snippets do the heavy lifting the old fashioned way.

As GPUs are massively parallel pure function applicators, the resulting mega-shaders are a great fit.

$ cat *.glsl | magic

The process still comes down to concatenating the code in a clever way, with global symbols namespaced to be unique. Function bodies are generated to call snippets in the right order, and the callbacks are linked. In the trivial case it links a callback by #defineing the two symbols to be the same. It can also impedance match compatible signatures like void main(in float, out vec2) and vec2 main(float) by inserting an intermediate call.

precision highp float;
precision highp int;
uniform mat4 modelMatrix;
uniform mat4 modelViewMatrix;
uniform mat4 projectionMatrix;
uniform mat4 viewMatrix;
uniform mat3 normalMatrix;
uniform vec3 cameraPosition;
#define _sn_191_getPosition _pg_103_
#define _sn_190_getPosition _pg_102_
#define _sn_189_getSample _pg_100_
#define _pg_99_ _sn_185_warpVertex
#define _pg_103_ _sn_190_getMeshPosition
#define _pg_100_ _sn_188_getTransitionSDFMask
#define _pg_101_ _sn_189_maskLevel
vec2 _sn_180_truncateVec(vec4 v) { return v.xy; }
uniform vec2 _sn_181_dataResolution;
uniform vec2 _sn_181_dataPointer;

vec2 _sn_181_map2DData(vec2 xy) {
  return fract((xy + _sn_181_dataPointer) * _sn_181_dataResolution);

uniform sampler2D _sn_182_dataTexture;

vec4 _sn_182_sample2D(vec2 uv) {
  return texture2D(_sn_182_dataTexture, uv);

vec4 _sn_183_swizzle(vec4 xyzw) {
  return vec4(xyzw.x, xyzw.w, 0.0, 0.0);
uniform float _sn_184_polarBend;
uniform float _sn_184_polarFocus;
uniform float _sn_184_polarAspect;
uniform float _sn_184_polarHelix;

uniform mat4 _sn_184_viewMatrix;

vec4 _sn_184_getPolarPosition(vec4 position, inout vec4 stpq) {
  if (_sn_184_polarBend > 0.0) {

    if (_sn_184_polarBend < 0.001) {
      vec2 pb = position.xy * _sn_184_polarBend;
      float ppbbx = pb.x * pb.x;
      return _sn_184_viewMatrix * vec4(
        position.x * (1.0 - _sn_184_polarBend + (pb.y * _sn_184_polarAspect)),
        position.y * (1.0 - .5 * ppbbx) - (.5 * ppbbx) * _sn_184_polarFocus / _sn_184_polarAspect,
        position.z + position.x * _sn_184_polarHelix * _sn_184_polarBend,
    else {
      vec2 xy = position.xy * vec2(_sn_184_polarBend, _sn_184_polarAspect);
      float radius = _sn_184_polarFocus + xy.y;
      return _sn_184_viewMatrix * vec4(
        sin(xy.x) * radius,
        (cos(xy.x) * radius - _sn_184_polarFocus) / _sn_184_polarAspect,
        position.z + position.x * _sn_184_polarHelix * _sn_184_polarBend,
  else {
    return _sn_184_viewMatrix * vec4(position.xyz, 1.0);
uniform float _sn_185_time;
uniform float _sn_185_intensity;

vec4 _sn_185_warpVertex(vec4 xyzw, inout vec4 stpq) {
  xyzw +=   0.2 * _sn_185_intensity * (sin(xyzw.yzwx * 1.91 + _sn_185_time + sin(xyzw.wxyz * 1.74 + _sn_185_time)));
  xyzw +=   0.1 * _sn_185_intensity * (sin(xyzw.yzwx * 4.03 + _sn_185_time + sin(xyzw.wxyz * 2.74 + _sn_185_time)));
  xyzw +=  0.05 * _sn_185_intensity * (sin(xyzw.yzwx * 8.39 + _sn_185_time + sin(xyzw.wxyz * 4.18 + _sn_185_time)));
  xyzw += 0.025 * _sn_185_intensity * (sin(xyzw.yzwx * 15.1 + _sn_185_time + sin(xyzw.wxyz * 9.18 + _sn_185_time)));

  return xyzw;

vec4 _sn_186_getViewPosition(vec4 position, inout vec4 stpq) {
  return (viewMatrix * vec4(position.xyz, 1.0));

vec3 _sn_187_getRootPosition(vec4 position, in vec4 stpq) {
  return position.xyz;
vec3 _pg_102_(vec4 _io_510_v, in vec4 _io_519_stpq) {
  vec2 _io_509_return;
  vec2 _io_511_return;
  vec4 _io_513_return;
  vec4 _io_515_return;
  vec4 _io_517_return;
  vec4 _io_520_stpq;
  vec4 _io_527_return;
  vec4 _io_528_stpq;
  vec4 _io_529_return;
  vec4 _io_532_stpq;
  vec3 _io_533_return;

  _io_509_return = _sn_180_truncateVec(_io_510_v);
  _io_511_return = _sn_181_map2DData(_io_509_return);
  _io_513_return = _sn_182_sample2D(_io_511_return);
  _io_515_return = _sn_183_swizzle(_io_513_return);
  _io_520_stpq = _io_519_stpq;
  _io_517_return = _sn_184_getPolarPosition(_io_515_return, _io_520_stpq);
  _io_528_stpq = _io_520_stpq;
  _io_527_return = _pg_99_(_io_517_return, _io_528_stpq);
  _io_532_stpq = _io_528_stpq;
  _io_529_return = _sn_186_getViewPosition(_io_527_return, _io_532_stpq);
  _io_533_return = _sn_187_getRootPosition(_io_529_return, _io_532_stpq);
  return _io_533_return;
uniform vec4 _sn_190_geometryResolution;

varying vec4 vSTPQ;
varying float vU;
varying vec2 vUV;
varying vec3 vUVW;
varying vec4 vUVWO;

vec3 _sn_190_getMeshPosition(vec4 xyzw, float canonical) {
  vec4 stpq = xyzw * _sn_190_geometryResolution;
  vec3 xyz = _sn_190_getPosition(xyzw, stpq);

  if (canonical > 0.5) {
    #ifdef POSITION_STPQ
    vSTPQ = stpq;
    #ifdef POSITION_U
    vU = stpq.x;
    #ifdef POSITION_UV
    vUV = stpq.xy;
    #ifdef POSITION_UVW
    vUVW = stpq.xyz;
    #ifdef POSITION_UVWO
    vUVWO = stpq;
  return xyz;

uniform float _sn_188_transitionEnter;
uniform float _sn_188_transitionExit;
uniform vec4  _sn_188_transitionScale;
uniform vec4  _sn_188_transitionBias;
uniform float _sn_188_transitionSkew;
uniform float _sn_188_transitionActive;

float _sn_188_getTransitionSDFMask(vec4 stpq) {
  if (_sn_188_transitionActive < 0.5) return 1.0;

  float enter   = _sn_188_transitionEnter;
  float exit    = _sn_188_transitionExit;
  float skew    = _sn_188_transitionSkew;
  vec4  scale   = _sn_188_transitionScale;
  vec4  bias    = _sn_188_transitionBias;

  float factor  = 1.0 + skew;
  float offset  = dot(vec4(1.0), stpq * scale + bias);

  vec2 d = vec2(enter, exit) * factor + vec2(-offset, offset - skew);
  if (exit  == 1.0) return d.x;
  if (enter == 1.0) return d.y;
  return min(d.x, d.y);
uniform float _sn_191_worldUnit;
uniform float _sn_191_lineWidth;
uniform float _sn_191_lineDepth;
uniform float _sn_191_focusDepth;

uniform vec4 _sn_191_geometryClip;
attribute vec2 line;
attribute vec4 position4;

uniform float _sn_191_lineProximity;
varying float vClipProximity;

varying float vClipStrokeWidth;
varying float vClipStrokeIndex;
varying vec3  vClipStrokeEven;
varying vec3  vClipStrokeOdd;
varying vec3  vClipStrokePosition;

#ifdef LINE_CLIP
uniform float _sn_191_clipRange;
uniform vec2  _sn_191_clipStyle;
uniform float _sn_191_clipSpace;

attribute vec2 strip;

varying vec2 vClipEnds;

void _sn_191_clipEnds(vec4 xyzw, vec3 center, vec3 pos) {

  vec4 xyzwE = vec4(strip.y, xyzw.yzw);
  vec3 end   = _sn_191_getPosition(xyzwE, 0.0);

  vec4 xyzwS = vec4(strip.x, xyzw.yzw);
  vec3 start = _sn_191_getPosition(xyzwS, 0.0);

  vec3 diff = end - start;
  float l = length(diff) * _sn_191_clipSpace;

  float arrowSize = 1.25 * _sn_191_clipRange * _sn_191_lineWidth * _sn_191_worldUnit;

  vClipEnds = vec2(1.0);

  if (_sn_191_clipStyle.y > 0.0) {
    float depth = _sn_191_focusDepth;
    if (_sn_191_lineDepth < 1.0) {
      float z = max(0.00001, -end.z);
      depth = mix(z, _sn_191_focusDepth, _sn_191_lineDepth);
    float size = arrowSize * depth;

    float mini = clamp(1.0 - l / size * .333, 0.0, 1.0);
    float scale = 1.0 - mini * mini * mini; 
    float invrange = 1.0 / (size * scale);
    diff = normalize(end - center);
    float d = dot(end - pos, diff);
    vClipEnds.x = d * invrange - 1.0;

  if (_sn_191_clipStyle.x > 0.0) {
    float depth = _sn_191_focusDepth;
    if (_sn_191_lineDepth < 1.0) {
      float z = max(0.00001, -start.z);
      depth = mix(z, _sn_191_focusDepth, _sn_191_lineDepth);
    float size = arrowSize * depth;

    float mini = clamp(1.0 - l / size * .333, 0.0, 1.0);
    float scale = 1.0 - mini * mini * mini; 
    float invrange = 1.0 / (size * scale);
    diff = normalize(center - start);
    float d = dot(pos - start, diff);
    vClipEnds.y = d * invrange - 1.0;


const float _sn_191_epsilon = 1e-5;
void _sn_191_fixCenter(vec3 left, inout vec3 center, vec3 right) {
  if (center.z >= 0.0) {
    if (left.z < 0.0) {
      float d = (center.z - _sn_191_epsilon) / (center.z - left.z);
      center = mix(center, left, d);
    else if (right.z < 0.0) {
      float d = (center.z - _sn_191_epsilon) / (center.z - right.z);
      center = mix(center, right, d);

void _sn_191_getLineGeometry(vec4 xyzw, float edge, out vec3 left, out vec3 center, out vec3 right) {
  vec4 delta = vec4(1.0, 0.0, 0.0, 0.0);

  center =                 _sn_191_getPosition(xyzw, 1.0);
  left   = (edge > -0.5) ? _sn_191_getPosition(xyzw - delta, 0.0) : center;
  right  = (edge < 0.5)  ? _sn_191_getPosition(xyzw + delta, 0.0) : center;

vec3 _sn_191_getLineJoin(float edge, bool odd, vec3 left, vec3 center, vec3 right, float width) {
  vec2 join = vec2(1.0, 0.0);

  _sn_191_fixCenter(left, center, right);

  vec4 a = vec4(left.xy, right.xy);
  vec4 b = a / vec4(left.zz, right.zz);

  vec2 l = b.xy;
  vec2 r = b.zw;
  vec2 c = center.xy / center.z;

  vec4 d = vec4(l, c) - vec4(c, r);
  float l1 = dot(d.xy, d.xy);
  float l2 = dot(d.zw, d.zw);

  if (l1 + l2 > 0.0) {
    if (edge > 0.5 || l2 == 0.0) {
      vec2 nl = normalize(d.xy);
      vec2 tl = vec2(nl.y, -nl.x);

      vClipProximity = 1.0;

      vClipStrokeEven = vClipStrokeOdd = normalize(left - center);
      join = tl;
    else if (edge < -0.5 || l1 == 0.0) {
      vec2 nr = normalize(d.zw);
      vec2 tr = vec2(nr.y, -nr.x);

      vClipProximity = 1.0;

      vClipStrokeEven = vClipStrokeOdd = normalize(center - right);
      join = tr;
    else {
      float lmin2 = min(l1, l2) / (width * width);

      float lr     = l1 / l2;
      float rl     = l2 / l1;
      float ratio  = max(lr, rl);
      float thresh = _sn_191_lineProximity + 1.0;
      vClipProximity = (ratio > thresh * thresh) ? 1.0 : 0.0;
      vec2 nl = normalize(d.xy);
      vec2 nr = normalize(d.zw);

      vec2 tl = vec2(nl.y, -nl.x);
      vec2 tr = vec2(nr.y, -nr.x);

      vec2 tc = normalize(mix(tl, tr, l1/(l1+l2)));
      vec2 tc = normalize(tl + tr);
      float cosA   = dot(nl, tc);
      float sinA   = max(0.1, abs(dot(tl, tc)));
      float factor = cosA / sinA;
      float scale  = sqrt(1.0 + min(lmin2, factor * factor));

      vec3 stroke1 = normalize(left - center);
      vec3 stroke2 = normalize(center - right);

      if (odd) {
        vClipStrokeEven = stroke1;
        vClipStrokeOdd  = stroke2;
      else {
        vClipStrokeEven = stroke2;
        vClipStrokeOdd  = stroke1;
      join = tc * scale;
    return vec3(join, 0.0);
  else {
    return vec3(0.0);


vec3 _sn_191_getLinePosition() {
  vec3 left, center, right, join;

  float edge = line.x;
  float offset = line.y;

  vec4 p = min(_sn_191_geometryClip, position4);
  edge += max(0.0, position4.x - _sn_191_geometryClip.x);

  _sn_191_getLineGeometry(p, edge, left, center, right);

  vClipStrokePosition = center;
  vClipStrokeIndex = p.x;
  bool odd = mod(p.x, 2.0) >= 1.0;
  bool odd = true;

  float width = _sn_191_lineWidth * 0.5;

  float depth = _sn_191_focusDepth;
  if (_sn_191_lineDepth < 1.0) {
    float z = max(0.00001, -center.z);
    depth = mix(z, _sn_191_focusDepth, _sn_191_lineDepth);
  width *= depth;

  width *= _sn_191_worldUnit;

  join = _sn_191_getLineJoin(edge, odd, left, center, right, width);

  vClipStrokeWidth = width;
  vec3 pos = center + join * offset * width;

#ifdef LINE_CLIP
  _sn_191_clipEnds(p, center, pos);

  return pos;

uniform vec4 _sn_189_geometryResolution;
uniform vec4 _sn_189_geometryClip;
varying float vMask;

void _sn_189_maskLevel() {
  vec4 p = min(_sn_189_geometryClip, position4);
  vMask = _sn_189_getSample(p * _sn_189_geometryResolution);

uniform float _sn_192_styleZBias;
uniform float _sn_192_styleZIndex;

void _sn_192_setPosition(vec3 position) {
  vec4 pos = projectionMatrix * vec4(position, 1.0);

  float bias  = (1.0 - _sn_192_styleZBias / 32768.0);
  pos.z *= bias;
  if (_sn_192_styleZIndex > 0.0) {
    float z = pos.z / pos.w;
    pos.z = ((z + 1.0) / (_sn_192_styleZIndex + 1.0) - 1.0) * pos.w;
  gl_Position = pos;
void main() {
  vec3 _io_546_return;

  _io_546_return = _sn_191_getLinePosition();

It still does guarded regex manipulation of code too, but those manipulations are now derived from a proper syntax tree. GLSL doesn't have strings and its scope is simple, so this is unusually safe. I'm sure you can still trip it up somehow, but it's worth it for speed. I'm seeing assembly times of ~10-30ms cold, 2-4ms warm, but it depends entirely on the particular shaders.

The assembly process is now properly recursive. Unassembled shaders can be used in factory form, standing in for snippets. Completed graphs form stand-alone programs with no open inputs or outputs. The result can be turned straight into a Three.js ShaderMaterial, but there is no strict Three dependency. It's just a dictionary with code and a list of uniforms, attributes and varyings. Unlike before, building a combined vertex/fragment program is now merely syntactic sugar for a pair of separate graphs.

As it's run-time, you can slot in user-defined or code-generated GLSL just the same. Shaders are fetched by name or passed as inline code, mixed freely as needed. You supply the dictionary or lookup method. You could bundle your GLSL into JS with a build step or include embedded <script> tags.

This is the fragment shader that implements the partial differential equation for this ripple effect (getFramesSample). It samples from a volumetric N×N×2 array, feeding back into itself.

Paging Dr. Hickey

ShaderGraph 2 drives the entirety of MathBox 2. Its shaders are specialized for particular types and dimensions, generating procedural data, clipping geometry, resampling transformed data on the fly, …. The composability comes out naturally: I pass a partially built factory around between the interested parties. This way I build graphs for position, color, normal, mask and more. These are injected as callbacks into a final shader. Shader factories enable ad-hoc contracts, sandwiched between the inner and outer retained layers of Three.js and MathBox, but disappearing entirely in the end result.

Of course, all of this is meta-programming of GLSL, done through a stateful JS lasagna and a ghetto compiler, instead of an idiomatic language. I know this, it's an inner platform effect bathing luxuriously in Turing tar like a rhino in mud. I didn't really see a way around it, given the constraints at play.

While the factory API is designed for making graphs on the spot and then tossing them, you could keep graphs around. There's a full data model underneath. You can always skip the factory entirely.

Plenty of caveats of course. There is no built-in preprocessor, so you can't #define or #ifdef uniforms or attributes and have it make sense. But then the point of ShaderGraph is to formalize exactly that sort of ad-hoc fiddling. Preprocessor directives will just pass through. glsl-parser has gaps too, and it is also exceedingly picky with reserved variable names, so watch out for that.

I did sometimes feel the need for more powerful metaprogramming, but you can work around it. It is easy to dynamically make GLSL one-liner snippets and feed them in. String manipulation of code is always still an option, you just don't need to do it at the macro-level anymore.

ShaderGraph 2 has been in active use now for months, it does the job I need it to very well. In a perfect world, this would be solved at the GPU driver level. Until SPIR-V or WebVulkan gets here, imma stick to my regexes. Don't try this at home, kids.

For docs and more, see the Git repository.

27 Sep 2015 7:00am GMT

Steven Wittens: MathBox²

PowerPoint Must Die

"I think a lot of mathematics is really about how you understand things in your head. It's people that did mathematics, we're not just general purpose machines, we're people. We see things, we feel things, we think of things. A lot of what I have done in my mathematical career has had to do with finding new ways to build models, to see things, do computations. Really get a feel for stuff.

It may seem unimportant, but when I started out people drew pictures of 3-manifolds one way and I started drawing them a different way. People drew pictures of surfaces one way and I started drawing them a different way. There's something significant about how the representation in your head profoundly changes how you think.

It's very hard to do a brain dump. Very hard to do that. But I'm still going to try to do something to give a feel for 3-manifolds. Words are one thing, we can talk about geometric structures. There are many precise mathematical words that could be used, but they don't automatically convey a feeling for it. I probably can't convey a feeling for it either, but I want to try."

- William Thurston, The Mystery of 3-Manifolds (Video)

How do you convince web developers -- heck, people in general -- to care about math? This was the challenge underlying Making Things With Maths, a talk I gave three years ago. I didn't know either, I just knew why I liked this stuff: demoscene, games, simulation, physics, VR, … It had little to do with what passed for mathematics in my own engineering education. There we were served only eyesore PowerPoints or handwritten overhead transparencies, with simplified graphs, abstract flowcharts and rote formulas, available on black and white photocopies.

Smart people who were supposed to teach us about technology seemed unable to teach us with technology. Fixing this felt like a huge challenge where I'd have to start from scratch. This is why the focus was entirely on showing rather than telling, and why MathBox 1 was born. It's how this stuff looks and feels in my head, and how I got my degree: by translating formulas into mental pictures, which I could replay and reason about on demand.

PowerPoint Syndrome

Initially I used MathBox like an embedded image or video: compact diagrams, each a point or two in a presentation. My style quickly shifted though. I kept on finding ways to transform from one visualization to another. Not for show, but to reveal the similarities and relationships underneath. MathBox encouraged me to animate things correctly, leveraging the actual models themselves, instead of doing a visual morph from A to B. Each animation became a continuous stream of valid examples, a quality both captivating and revealing.

How to Fold a Julia Fractal

For instance, How to Fold a Julia Fractal is filled with animations of complex exponentials, right from the get go. This way I avoid the scare that $e^{i\pi}$ is a meaningful expression; symbology and tau-tology never have a chance to obscure geometrical workings. Instead a web page that casually demonstrates conformal mapping and complex differential equations got 340,000 visits. Despite spotty web browser support and excluding all mobile phones for years.

Elsevier $42 per PDF paywall

Meanwhile academics voluntarily published their writings behind a $42 per PDF paywall, the colossal idiots.

The next talk, Making WebGL Dance, contained elaborate long takes worthy of an Alfonso Cuarón film, with only 3 separate shots for the bulk of a 30 minute talk. The lesson seemed obvious: the slides shouldn't have graphics in them, rather, the graphics should have slides in them. The diagnosis of PowerPoint syndrome is then the constant trashing of context from one slide to the next. A traditional blackboard doesn't have this problem: you build up diagrams slowly, by hand, across a large surface, erasing selectively and only when you run out of space.

It's not just about permanence and progression though, it's also about leveraging our natural understanding of shape, scale, color and motion. Think of how a toddler learns to interact with the world: poke, grab, chew, spit, smash. Which evolves into run, jump, fall, get back up again. Humans are naturals at taking multiple cases of "If I do this, that will happen" and turning it into a consistent, functional model of how things work. We learn language by bootstrapping random jibberish into situational meaning, converging on a shared protocol.

That said, I find the usual descriptions of how people experience language and thought foreign. Instead, when Temple Grandin speaks about visual thinking, I nod vigorously. Thought to me is analog concepts and sensory memories, remixed with visual and other simulations. It builds off the quantities and qualities present in spatial and temporal notions, which appear built-in to us.

Speech and writing is then a program designed to reconstruct particular thoughts in a compatible brain. There are a multitude of evolving languages, they can be used elegantly, bluntly, incomprehensibly, but the desired output remains the same. In my talks, armed with weapons-grade C2-continuous animations, it is easy to transcode my film reel into words, because the slides run themselves. The string of concepts already hangs in the air, I only add the missing grammar that links them up. This is a puzzle our brains are so good at solving, we usually do it without thinking.

Language is the ability of thoughts to compute their own source code.

(It's not proof, I just supply pudding.)

powerpoint remote

Tip: Powerpoint remotes are 4-key USB keyboards with PageUp/PageDown, F5 and . keys.

Comes with dongle.

mathbox presentation slide sketches

I sketch rough thumbnails, then start animating until I hit a dead end. Then start another one. Titles and overlays always come last.

Manifold Dreams

I don't say all this to up my Rain Man cred, but to lay to rest the recurring question of where my work comes from. I translate the pictures in my head to HD, in order to learn from and refine the view. As I did with quaternions: I struggled to grok the hypersphere, it wouldn't fit together right. So I wrote the code to trace out geodesics in color and fly around in it, and suddenly the twisting made sense. Hence my entire tutorial was built to replicate the same discovery process I went through myself.

For visualizing the 4D hypersphere, quaternions are a natural fit.
They reveal their underlying cyclic symmetry under 4D stereographic projection.

There was one big problem: scenes now consisted of diagrams of diagrams, which meant working around MathBox more than with it. Performance issues arose as complexity grew. Above all there was a total lack of composability in the components. None of this could be fixed without ripping out significant pieces, so doing it incrementally seemed futile. I started from scratch and set off to reinvent all the wheels.

$$ \text{MathBox}^2 = \int_1^2 \text{code}(v) dv $$

MathBox 2 was inevitably going to suffer second-system syndrome, parts would be overengineered. Rather than fight it, I embraced it and effectively wrote a strange vector GPU driver in CoffeeScript. (Such is life, this is a blueprint meant to be simplified and made obsolete over time, not expanded upon.) It's a freight train straight to the heart of a graphics card, combining low-level and high-level in a way that feels novel 🐴 when you use it, squeezing 🐴 through a very small opening.

What was tedious before, now falls out naturally. If I format the scene above as XML/JSX, it becomes:

  <!-- Place the camera -->
  <camera />

  <!-- Change clock speed -->
  <clock>

    <!-- 4D Stereographic projection -->
    <stereographic4>

      <!-- Custom 4D rotation shader -->
      <shader />

      <!-- Move vertices -->
      <vertex>

        <!-- Sample an area -->
        <!-- Draw a set of lines -->
        <area />
        <line />

        <!-- Sample an area -->
        <!-- Draw a set of lines -->
        <area />
        <line />

        <!-- Sample an area -->
        <!-- Draw a set of lines -->
        <area />
        <line />

      </vertex>
    </stereographic4>
  </clock>

In order to make these pieces behave, a bunch of additional attributes are applied, most of which are strings or values, some of which are functions/code, either JavaScript or GLSL:

<root id="1" scale={300}>
  <camera id="2" proxy={true} position={[0, 0, 3]} />
  <clock id="3" speed={1/4}>
    <stereographic4 id="4" bend={1}>
      <shader id="5" code="
uniform float cos1;
uniform float sin1;
uniform float cos2;
uniform float sin2;
uniform float cos3;
uniform float sin3;
uniform float cos4;
uniform float sin4;

vec4 getRotate4D(vec4 xyzw, inout vec4 stpq) {
  xyzw.xy = xyzw.xy * mat2(cos1, sin1, -sin1, cos1);
  xyzw.zw = xyzw.zw * mat2(cos2, sin2, -sin2, cos2);
  xyzw.xz = xyzw.xz * mat2(cos3, sin3, -sin3, cos3);
  xyzw.yw = xyzw.yw * mat2(cos4, sin4, -sin4, cos4);

  return xyzw;
}"
        cos1=>{(t) => Math.cos(t * .111)} sin1=>{(t) => Math.sin(t * .111)}
        cos2=>{(t) => Math.cos(t * .151 + 1)} sin2=>{(t) => Math.sin(t * .151 + 1)}
        cos3=>{(t) => Math.cos(t * .071 + Math.sin(t * .081))} sin3=>{(t) => Math.sin(t * .071 + Math.sin(t * .081))}
        cos4=>{(t) => Math.cos(t * .053 + Math.sin(t * .066) + 1)} sin4=>{(t) => Math.sin(t * .053 + Math.sin(t * .066) + 1)} />
      <vertex id="6">
        <area id="7" rangeX={[-π/2, π/2]} rangeY={[0, τ]} width={129} height={65} expr={(emit, θ, ϕ, i, j) => {
        q1.set(0, 0, Math.sin(θ), Math.cos(θ));
        q2.set(0, Math.sin(ϕ), 0, Math.cos(ϕ));
        emit(q1.x, q1.y, q1.z, q1.w);
      }} live={false} channels={4} />
        <line id="8" color="#3090FF" />
        <area id="9" rangeX={[-π/2, π/2]} rangeY={[0, τ]} width={129} height={65} expr={(emit, θ, ϕ, i, j) => {
        q1.set(0, Math.sin(θ), 0, Math.cos(θ));
        q2.set(Math.sin(ϕ), 0, 0, Math.cos(ϕ));
        emit(q1.x, q1.y, q1.z, q1.w);
      }} live={false} channels={4} />
        <line id="10" color="#20A000" />
        <area id="11" rangeX={[-π/2, π/2]} rangeY={[0, τ]} width={129} height={65} expr={(emit, θ, ϕ, i, j) => {
        q1.set(Math.sin(θ), 0, 0, Math.cos(θ));
        q2.set(0, 0, Math.sin(ϕ), Math.cos(ϕ));
        emit(q1.x, q1.y, q1.z, q1.w);
      }} live={false} channels={4} />
        <line id="12" color="#DF2000" />

Phew. That's how you make a 4D diagram with Hopf fibration as far as the eye can see. Except it's not actually JSX, that's just me and my pretty-printer pretending.

Geometry Streaming

The key is the data itself. It's an array of points mostly, but how that data is laid out and interpreted determines how useful it can be.

Most basic primitives come in fixed size chunks. Particles are single points, lines have two points, triangles have three points. Polygons and polylines have N points. So it made sense to have a tuple of N points be the basic logical unit. You can think in logical pieces of geometry, rather than raw points or individual triangles, unlike GL.

Each primitive maps over data in a standard way. Feed an array of points to a line, you get a polyline. Feed a matrix of points to a surface and you get a grid mesh. Simple. But feed a voxel to a vector, and you get a 3D vector field. The general idea is that drawing 1 of something should be as easy as drawing 100×100×100.

This is particularly useful for custom data expressions, which stream in live or procedural data. They now receive an emit(x, y, z, w) function, for emitting a 4-vector like XYZW or RGBA. This is little more than an inlineable call to fill a floatArray[i++] = x, quite a lot faster than returning an array or object.

    expr: function (emit, x, i, t) {
      var y = Math.sin(x + t);
      emit(x,  y);
      emit(x, -y);
    },
    length:   64,
    items:    2,
    channels: 2,
    color: 0x3090FF,
    width: 3,
    start: true,

Emitting 64 2D vectors on an interval, 2 points each.

More importantly it lets you emit N points in one iteration, which makes the JS expressions themselves feel like geometry shaders. The result feeds into one or more styled drawing ops. The number of emit calls has to be constant, but you can always knock out or mask the excess geometry.

emit = switch channels
  when 1 then (x) ->
    array[i++] = x

  when 2 then (x, y) ->
    array[i++] = x
    array[i++] = y

  when 3 then (x, y, z) ->
    array[i++] = x
    array[i++] = y
    array[i++] = z

  when 4 then (x, y, z, w) ->
    array[i++] = x
    array[i++] = y
    array[i++] = z
    array[i++] = w

Both the expression and emitter will be inlined into the stream's iteration loop.

consume = switch channels
  when 1 then (emit) ->
    emit array[i++]

  when 2 then (emit) ->
    emit array[i++], array[i++]

  when 3 then (emit) ->
    emit array[i++], array[i++], array[i++]

  when 4 then (emit) ->
    emit array[i++], array[i++], array[i++], array[i++]

Closures of Hanoi


GPUs can operate on 4×1 vectors and 4×4 matrices, so working with 4D values is natural. Values can also be referenced by 4D indices. With one dimension reserved for the tuples, that leaves three dimensions, XYZ. Hence MathBox arrays are 3+1D: width, height and depth, plus a tuple dimension called items. It does what it says on the tin, creating 1D W, 2D W×H and 3D W×H×D arrays of tuples. Each tuple is made of N vectors of up to 4 channels each.

Thanks to cyclic buffers and partial updates, history also comes baked in. You can use a spare dimension as a free time axis, retaining samples on the go. You can .set('history', N) to keep a rolling log of the last N states of a whole array, updated indefinitely.
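For instance, a hedged sketch: only the history attribute comes from the text above, the surrounding calls are assumptions.

// Keep a rolling history of a live 1D signal; the extra dimension the
// history occupies can then be consumed by a 2D drawing op like surface.
var signal = view.interval({
  length: 256, channels: 2,
  expr: function (emit, x, i, t) { emit(x, Math.sin(x * 3 + t)); },
});
signal.set('history', 100);   // retain the last 100 frames as a time axis
view.surface({ color: '#3090FF', opacity: 0.5 });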

All of this is modular: a data source is something that can be sampled by a 4D pointer from GLSL. Underneath, arrays end up packed into a regular 2D float texture, with "items × width" horizontally and "height × depth" vertically. Each 'pixel' holds a 1/2/3/4D point.

Mapping a 4D 'pointer' to the real 2D UV coordinates is just arithmetic, and so are operators like transpose and repeat. You just swap the XY indices and tell everyone downstream that it's now this big instead. They can't tell the difference.
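In plain JS the arithmetic reads something like this. It's my reading of the layout described above, not MathBox's actual source, and the helper names are made up.

// Map a 4D index (x, y, z, item) into the packed 2D texture described above:
// items × width across, height × depth down.
function indexToUV(x, y, z, item, dims) {
  var u = item + x * dims.items;   // column: which item of which x
  var v = y + z * dims.height;     // row: which y of which z slice
  return [u / (dims.items * dims.width), v / (dims.height * dims.depth)];
}

// A transpose operator is just index swapping plus new reported dimensions:
function transposeXY(source) {
  return {
    dims: { width: source.dims.height, height: source.dims.width,
            depth: source.dims.depth, items: source.dims.items },
    sample: function (x, y, z, item) { return source.sample(y, x, z, item); },
  };
}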

You can create giant procedural arrays this way, including across rectangular texture size limits, as none of them actually exist except as transient values deep inside a GPU core. Until you materialize them by rendering to a texture using the memo primitive. Add in operators like interpolation and convolution and it's a pretty neat real-time finishing kit for data.

Too many WebGL contexts

Continued in Part 2.

27 Sep 2015 7:00am GMT

Steven Wittens: A DOM for Robots

Modelling Live Data

I want to render live 3D graphics based on a declarative data model. That means a choice of shapes and transforms, as well as data sources and formats. I also want to combine them and make live changes. Which sounds kind of DOMmy.

Three.js Editor

3D engines don't have Document Object Models though, they have scene graphs and render trees: minimal data structures optimized for rendering output. In Three.js, each tree node is a JS object with properties and children like itself. Composition only exists in a limited form, with a parent's matrix and visibility combining with that of its children. There is no fancy data binding: the renderer loops over the visible tree leaves every frame, passing in values directly to GL calls. Any geometry is uploaded once to GPU memory and cached. If you put in new parameters or data, it will be used to produce the next frame automatically, aside from a needsUpdate bit here and there for performance reasons.
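For example, the "needsUpdate bit" pattern in plain Three.js looks roughly like this; a generic sketch assuming a BufferGeometry-backed mesh and a time value t, not MathBox code.

// Mutate cached vertex data in place, then flag the attribute so the
// renderer re-uploads it before the next frame.
var position = mesh.geometry.attributes.position;
var array = position.array;
for (var i = 0; i < array.length; i += 3) {
  array[i + 1] = Math.sin(array[i] + t);   // wiggle Y as a function of X
}
position.needsUpdate = true;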

So Three.js is a thin retained mode layer on top of an immediate mode API. It makes it trivial to draw the same thing over and over again in various configurations. That won't do: I want to draw dynamic things with the same ease. I need a richer model, which means wrapping another retained mode layer around it. That could mean observables, data binding, tree diffing, immutable data, and all the other fun stuff nobody can agree on.

However, I mostly feed data in, and many parameters end up as shader properties. These are passed to Three as a dictionary of { type: '…', value: x } objects, each holding a single parameter. Any code that holds a reference to the dictionary will see the same value, so it acts as a register: you can share it, transparently binding one value to N shaders. This way a single .set('color', 'blue') call on the fringes can instantly affect data structures deep inside the WebGLRenderer, without actually cascading through.
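A plain Three.js sketch of that register behavior; this is not MathBox code, and vertexSource/fragmentSource are assumed to exist elsewhere.

// One shared uniform dictionary bound to two shader materials: updating the
// single .value is immediately visible to both on the next render.
var uniforms = {
  diffuse: { type: 'c', value: new THREE.Color('blue') },
};

var materialA = new THREE.ShaderMaterial({
  uniforms: uniforms,
  vertexShader: vertexSource,
  fragmentShader: fragmentSource,
});
var materialB = new THREE.ShaderMaterial({
  uniforms: uniforms,            // same object, not a clone
  vertexShader: vertexSource,
  fragmentShader: fragmentSource,
});

// One assignment on the fringes, no cascade required:
uniforms.diffuse.value.set('red');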

MathBox Three Scene Object

I applied this to build a view tree which retains this property, storing all attributes as shareable registers. The Three.js scene graph is reduced to a single layer of THREE.Mesh objects, flattening the hierarchy. Rather than clumsy CSS3D divs which encode matrices as strings, there are binary arrays, GLSL shaders, and highly optimizable JS lambdas.

As long as you don't go overboard with the numbers, it runs fine even on mobile.

<root id="1" scale={600} focus={3}>
  <camera id="2" proxy={true} position={[0, 0, 3]} />
  <shader id="3" code="
uniform float time;
uniform float intensity;

vec4 warpVertex(vec4 xyzw, inout vec4 stpq) {
  xyzw +=   0.2 * intensity * (sin(xyzw.yzwx * 1.91 + time + sin(xyzw.wxyz * 1.74 + time)));
  xyzw +=   0.1 * intensity * (sin(xyzw.yzwx * 4.03 + time + sin(xyzw.wxyz * 2.74 + time)));
  xyzw +=  0.05 * intensity * (sin(xyzw.yzwx * 8.39 + time + sin(xyzw.wxyz * 4.18 + time)));
  xyzw += 0.025 * intensity * (sin(xyzw.yzwx * 15.1 + time + sin(xyzw.wxyz * 9.18 + time)));

  return xyzw;
}
" time=>{(t) => t / 4} intensity=>{(t) => {
        t = t / 4;
        intensity = .5 + .5 * Math.cos(t / 3);
        intensity = 1.0 - Math.pow(intensity, 4);
        return intensity * 2.5;
      }} />
  <reveal id="4" stagger={[10, 0, 0, 0]} enter=>{(t) => 1.0 - Math.pow(1.0 - Math.min(1,  (1 + pingpong(t))*2), 2)} exit=>{(t) => 1.0 - Math.pow(1.0 - Math.min(1,  (1 - pingpong(t))*2), 2)}>
    <vertex id="5" pass="view">
      <polar id="6" bend={1/4} range={[[-π, π], [0, 1], [-1, 1]]} scale={[2, 1, 1]}>
        <transform id="7" position={[0, 1/2, 0]}>
          <axis id="8" detail={512} />
          <scale id="9" divide={10} unit={π} base={2} />
          <ticks id="10" width={3} classes=["foo", "bar"] />
          <scale id="11" divide={5} unit={π} base={2} />
          <format id="12" expr={(x) => {
        return x ? (x / π).toPrecision(2) + 'π' : 0
      }} />
          <label id="13" depth={1/2} zIndex={1} />
        <axis id="14" axis={2} detail={128} crossed={true} />
        <transform id="15" position={[π/2, 0, 0]}>
          <axis id="16" axis={2} detail={128} crossed={true} />
        <transform id="17" position={[-π/2, 0, 0]}>
          <axis id="18" axis={2} detail={128} crossed={true} />
        <grid id="19" divideX={40} detailX={512} divideY={20} detailY={128} width={1} opacity={1/2} unitX={π} baseX={2} zBias={-5} />
        <interval id="20" length={512} expr={(emit, x, i, t) => {
        emit(x, .5 + .25 * Math.sin(x + t) + .25 * Math.sin(x * 1.91 + t * 1.81));
      }} channels={2} />
        <line id="21" width={5} />
        <play id="22" pace={10} loop={true} to={3} script=[[{color: "rgb(48, 144, 255)"}], [{color: "rgb(100, 180, 60)"}], [{color: "rgb(240, 20, 40)"}], [{color: "rgb(48, 144, 255)"}]] />

Note: The JSX is a lie; you define nodes in pure JS.
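For the curious, the pure JS form looks roughly like this. The chainable method names are my best guess from the node types above and the attribute values are lifted straight from the dump, so treat it as a sketch rather than the actual API.

var π = Math.PI;

mathbox.set({ scale: 600, focus: 3 });
mathbox.camera({ proxy: true, position: [0, 0, 3] });

var view = mathbox.polar({
  bend: 1/4,
  range: [[-π, π], [0, 1], [-1, 1]],
  scale: [2, 1, 1],
});

view.axis({ detail: 512 });
view.grid({ divideX: 40, divideY: 20, opacity: 1/2 });

view.interval({
  length: 512,
  channels: 2,
  expr: function (emit, x, i, t) {
    emit(x, .5 + .25 * Math.sin(x + t) + .25 * Math.sin(x * 1.91 + t * 1.81));
  },
});
view.line({ width: 5 });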

Keep it Simple

From afar there's a tree of nodes, similar to SVG tags. This is the MathBox library of vector primitives. The basic shapes are all there: points, lines, faces, vectors, surfaces, etc. These nodes are placed inside a shallow hierarchy of views and transforms.

However none of the shapes draw anything by themselves. They only know how to draw data supplied by a linked source. Data can be an array (static or live), a procedural source, custom JS / GLSL code, etc. This is further augmented by data operators which can be sandwiched between source and shape, forming automatic pipelines between siblings.

The current set of components looks like this:


  • Base: Group, Inherit, Root, Unit
  • Camera: Camera
  • Draw: Axis, Face, Grid, Line, Point, Strip, Surface, Ticks, Vector
  • Data: Area, Array, Interval, Matrix, Scale, Volume, Voxel
  • Operator: Grow, Join, Lerp, Memo, Resample, Repeat, Slice, Split, Spread, Swizzle, Transpose
  • Overlay: DOM, HTML
  • Present: Move, Play, Present, Reveal, Slide, Step
  • RTT: Compose, RTT
  • Shader: Shader
  • Text: Format, Label, Text, Retext
  • Time: Clock, Now
  • Transform: Fragment, Layer, Transform, Transform4, Vertex
  • View: Cartesian, Cartesian4, Polar, Spherical, Stereographic, Stereographic4, View
To make you feel at home, nodes have an id and classes, and you can use CSS selectors to identify them. Nodes link up with preceding siblings and parents by default, but you can select any node in the tree. This allows for arbitrary graphs, including feedback loops. However all of this is optional: you can also pass in direct node objects or MathBox's own jQuery-like selections. What it doesn't have is a notion of detached document fragments: nodes are immediately inserted on creation.

A node's attributes can be .get() and .set(), though there is also a read-only .props dictionary for fashionable reasons. The values are strongly typed as Three.js colors, vectors, matrices, …, but accept e.g. CSS colors and ordinary arrays too. The values are normalized for immediate use; the original values are preserved on the side for printing and serialization.
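A small example of that attribute model, assuming a line node created as sketched earlier:

var line = view.line({ width: 5 });

line.set('color', '#3090FF');   // CSS string in, normalized to a THREE.Color
line.get('width');              // read a single attribute back
line.props.width;               // or use the read-only props dictionary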

MathBox Node API

What's unique is the emphasis on time. First, properties can be directly bound to time-dependent expressions, on creation or afterwards. Second, clocks are primitives on their own. This allows for nested timelines, on-demand bullet time, fast forwards and more. It even supports limited time travel, evaluating an expression several frames in the past. This can be used to ensure consistent 60 fps data logging through janky updates, useful for all sorts of things. It's exposed publicly as .bind(key, expr) and .evaluate(key, time) per node. It's also dogfood for declarative animation tracks. The primitives clock/now provide timing, while step and play handle keyframes on tracks.
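A hedged sketch of those time bindings, using the .bind / .evaluate calls named above; the attribute and easing shape are lifted from the reveal example earlier, and currentTime is assumed to come from your own clock.

var reveal = view.reveal({ stagger: [10, 0, 0, 0] });

// Bind 'enter' to a time-dependent expression after creation:
reveal.bind('enter', function (t) {
  return 1.0 - Math.pow(1.0 - Math.min(1, t / 2), 2);
});

// Evaluate the bound expression at an arbitrary clock time,
// e.g. a few frames in the past at 60 fps:
var past = reveal.evaluate('enter', currentTime - 3 / 60);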

This is definitely a DOM, but it has only basic features in common with the HTML DOM and does much less. Most of the magic comes from the components themselves. There's no cascade of styles to inherit. Children compose with a parent; they do not inherit from it, and only care about their own attributes. The namespace is clean, with no weird combo styles à la CSS. As much as possible, attributes are unique, orthogonal knobs you can turn freely.


On the inside I separate the generic data model from the type-specific View Controller attached to it. The controller's job is to create and manage Three.js objects to display the node (if any). Because a data source and a visible shape have very little in common, the nodes and their controllers are blank slates built and organized around named traits. Each trait is a data mix-in, with associated attributes and helpers for common behavior. Primitives with the same traits can be expected to work the same, as their public facing models are identical.

Controllers can traverse the graph to find each other by matching traits, listening for events and making calls in response. This way only specific events will cascade through cause and effect, often skipping large parts of the hierarchy. The only way to do a "global style recalculation" would be to send a forced change event to every single controller, and there's never a reason to do so.

The controller lifecycle is deliberately kept simple: make(), made(), change(…), unmake(), unmade(). When a model changes, its controller either updates in place, or rebuilds itself, doing an unmake/make cycle. The change handler is invoked on creation as well, to encourage stateless updates. It affords live editing of anything, without having to micro-optimize every possible change scenario. Controllers can also watch bound selectors, retargeting if their matched set changes. This lets primitives link up with elements that have yet to be inserted.
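This is not MathBox source, but a skeleton of that contract to make it concrete; the helpers it calls and the node shape are hypothetical.

var LineController = {
  make: function () {
    // allocate the Three.js objects that display this node
    this.mesh = makeLineMesh(this.node.props);        // hypothetical helper
  },
  made: function () {
    // inserted; safe to resolve selectors and talk to siblings here
  },
  change: function (changed) {
    // also invoked once on creation, so updates stay stateless
    if (changed.color) this.mesh.material.color.set(this.node.props.color);
    if (changed.detail) { this.unmake(); this.make(); }   // full rebuild
  },
  unmake: function () {
    this.mesh.geometry.dispose();
    this.mesh.material.dispose();
  },
  unmade: function () {
    // removed from the scene
  },
};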

Unlike HTML, the DOM is not forced to contain a render tree as well. Only some of the leaf nodes have styles and create renderables. Siblings and parents are called upon to help, but the effects don't have to be strictly hierarchical. For example, a visual effect can wrap a single leaf but still be applied after all its parents, as transformations are collected and composed in passes.

It'll Do

The result is not so much a document model as it is a computational model inside a presentational model. You can feed it finalized data and draw it directly… or you can build new models within it and reproject them live. Memoization enables feedback and meta-visualization. The line between data viz and demo scene is rarely this blurry.

Here, the notion of a computed style has little meaning. Any value will end up being transformed and processed in arbitrary ways down the pipe. As I've tried to explain before, the kinds of things people do with getComputedStyle() and getBoundingClientRect() are better achieved by having an extensible layout model, one that affords custom constraints and composition on an equal footing. To do otherwise is to admit defeat and embrace a leaky abstraction by design.

The shallow hierarchy with composition between siblings is particularly appealing to me, even if I realize it introduces non-traditional semantics more reminiscent of a command line. It acts as both a jQuery-style chainable API and a minimal document model. If it offends your sensibilities, you could always defuse the magic by explicitly wiring up every relationship. In case of confusion, .inspect() will log syntax-highlighted JSX, while .debug() will draw the underlying shader graphs.

I've defined a good set of basic primitives and iterated on them a few times. But how to implement it, when WebGL doesn't even fully cover OpenGL ES 2?

27 Sep 2015 7:00am GMT