28 Nov 2015
Les Jeudis du Libre: Mons, 17 December – Rudder: the certainty of a robust infrastructure for your critical applications
On Thursday 17 December 2015 at 7 p.m., the 44th Mons session of Belgium's Jeudis du Libre will take place.
The topic of this session: Rudder: the certainty of a robust infrastructure for your critical applications
Theme: Infrastructure | System administration | Automation
Audience: sysadmins / companies / developers / students
Speaker: Nicolas Charles (Normation)
Venue: HEPH Condorcet, Chemin du Champ de Mars, 15 - 7000 Mons - Auditorium 2 (G01), located on the ground floor (see this map on the OpenStreetMap website; NOTE: the entrance is barely visible from the main road, it sits in the corner formed by a very large car park).
Attendance is free and only requires registering by name (notably for security reasons), preferably in advance, or at the door. Please indicate your intention by signing up via the page http://jeudisdulibre.fikket.com/. The session will be followed by a friendly drink.
As a reminder, the Jeudis du Libre are meant to be forums for exchange around Free Software topics. The Mons meetings take place every third Thursday of the month, and are organised on the premises of, and in collaboration with, the Mons universities and colleges involved in training computer scientists (UMONS, HEH and Condorcet), with the support of the non-profit LoLiGrUB, active in promoting free software.
Description: Nicolas Charles founded, together with two other co-founders, the company Normation, co-organiser of the DevOps Days Paris and above all the publisher of Rudder, an open source configuration management solution. Launched in 2010, Rudder is used today in critical production environments, from the financial servers of Caisse d'Epargne to Eutelsat's satellites, as well as in other fields such as the pharmaceutical and automotive industries.
Rudder's goal is to guarantee a solid infrastructure enabling fast, frequent and compliant changes at the application level, thereby reducing projects' time to market and services' downtime.
Key features - Rudder in three points:
Production ready: Rudder is an open source, DevOps-oriented configuration management tool dedicated to production. It stands apart from pure deployment tools such as Ansible, schedulers such as Rundeck, and previous-generation configuration management tools (Puppet, Chef, CFEngine) by going beyond simply automating deployment and configuration commands. Rudder proactively verifies and maintains the target state of the IT system to keep it compliant: the rules put in place are checked continuously and, in case of an unexpected change, can either raise alerts or correct themselves automatically.
Accessible to everyone: Rudder also distinguishes itself from its predecessors and competitors through excellent accessibility: quick and easy to install, it is also very intuitive to use thanks to its graphical management interface and its library of ready-to-use rules. You no longer need to know how to code to manage your infrastructure automatically, which opens the door to all levels of expertise, including outside the technical department, so that developers or the security team, for example, can simply check a machine's compliance level or its (hardware or software) inventory.
Universal: Thanks to its lightweight agent written in C and compilable on any OS, Rudder can manage physical servers as well as cloud instances, desktops and connected devices.
Rudder users rely on it mainly in three scenarios:
Initial deployment on a provisioned system (installing and configuring all of a machine's software components in one click)
Configuration management (service configuration, application of security rules, system settings, etc.)
Compliance checking (internal security policy, PCI-DSS, ISO 27001, ANSSI's PSSI, etc.)
28 Nov 2015 4:24pm GMT
27 Nov 2015
One thing that is exciting to me is how much we appear to have gotten right in Drupal 8. The other day, for example, I stumbled upon a recent article from the LinkedIn engineering team describing how they completely changed how their homepage is built. Their primary engineering objective was to deliver the fastest page load time possible, and one of the crucial ingredients to achieve that was Facebook's BigPipe.
When a very high-profile, very high-traffic, highly personalized site like LinkedIn uses the same technique as Drupal 8, that solidifies my belief in Drupal 8.
LinkedIn supports both server-side and client-side rendering. While Drupal 8 does server-side rendering, we're still missing explicit support for client-side rendering. The advantage of client-side rendering versus server-side rendering is debatable. I've touched upon it in my blog post on progressive decoupling, but I'll address the topic of client-side rendering in a future blog post.
However, there is also something LinkedIn could learn from Drupal! Every component of a LinkedIn page that should be delivered via BigPipe requires BigPipe-specific code, which is error-prone and requires all engineers to be familiar with BigPipe. Drupal 8, on the other hand, has a level of abstraction that allows BigPipe to work without the need for BigPipe-specific code. Thanks to Drupal's higher-level API, Drupal module developers don't have to understand BigPipe: Drupal 8 knows which page components are poorly cacheable or not cacheable at all, and which page components are renderable in isolation, and uses that information to automatically optimize the delivery of page components using BigPipe.
It is exciting to see Drupal support the advanced techniques that were previously only within reach of the top 50 most visited sites of the world! Drupal's BigPipe support will benefit websites small and large.
27 Nov 2015 3:27pm GMT
26 Nov 2015
The Cactus Channel are a Melbourne-based dark instrumental soul band.
Add Chet Faker and you get a Melbourne-based dark soul band, and if you wonder how that sounds, you can listen to this here little YouTube video:
Watch this video on YouTube.
Possibly related twitterless twaddle:
26 Nov 2015 4:51pm GMT
Like every year since I joined Percona, I also plan to speak at the next Percona Live Conference & Expo in Santa Clara during spring 2016.
Once again I want to share with the users, DBAs and developers my experience related to MySQL High Availability and especially with Galera.
This year, Percona implemented a Community Vote to rate the talks you would like to see in the schedule. So if you want to attend one or more of my talks, please vote for them.
Of course, I invite you to rate other people's talks too; that will help the organisers and the Conference Committee prepare the schedule.
These are the direct links to my talks and tutorial:
26 Nov 2015 11:08am GMT
24 Nov 2015
Visualise what's trending in your build process
I'm happy to inform you that Buildtime Trend v0.3 is released. Those of you using Buildtime Trend as a Service have already had a preview of all the new features:
- introduction of a worker queue to make processing build job logs more scalable
- dashboard chart data can be filtered on build properties
- several new dashboard charts and layout improvements
- enable Keen.io query caching to improve chart loading speed
- the dashboard accepts URL parameters to set the refresh rate and the default settings for time interval and filter properties
- a statistics dashboard is added to monitor usage of Buildtime Trend as a Service
Do you want to enable Buildtime Trend for the build process of your project on Travis CI? It is easy to set up.
Buildtime Trend as a Service is currently available for free for Open Source projects, thanks to the kind people of Keen.io.
24 Nov 2015 8:52pm GMT
[…] we need to stop. It's time to let icon fonts pass on to Hack Heaven, where they can frolic with table-based layouts, Bullet-Proof Rounded Corners and Scalable Inman Flash Replacements.
Read why (accessibility & reliability) and what to use instead (SVG) on the Cloud Four blog.
Possibly related twitterless twaddle:
24 Nov 2015 1:07pm GMT
23 Nov 2015
If you've been following the DevOps Weekly newsletter, DevOps-like conferences, or if you're just really interested in technology, you've probably heard unikernels mentioned a few times. In the last few months, their popularity has increased considerably.
But, what are these 'unikernels' really? And, is it something for me?
I struggled with that question myself: both with defining a unikernel and with answering who they are for.
What are unikernels
The single source of truth is Wikipedia with a cryptic explanation, but let's start there.
Unikernels are specialised, single address space machine images constructed by using library operating systems. A developer selects, from a modular stack, the minimal set of libraries which correspond to the OS constructs required for their application to run.
These libraries are then compiled with the application and configuration code to build sealed, fixed-purpose images (unikernels) which run directly on a hypervisor or hardware without an intervening OS such as Linux or Windows.
All clear, right?
Well, if you're like me, that may not tell you much. So here's my explanation of unikernels.
Let's step back a little first and follow this example. Let's say you're a developer writing a PHP application. When you run your PHP (or Ruby, or Node, or Perl, or ...) application, you're essentially running:
- Your language interpreter: PHP, Perl, Ruby, ...
- Which calls system level APIs of your operating system
- Some of these API calls require higher privilege levels, forcing context switches for your application (user space vs. kernel space)
- Which is all running on an operating system like CentOS, Debian, Ubuntu, ...
- Which is probably a Virtual Machine on VMware, Xen, KVM, ...
- Which is run by its own virtualisation operating system (ESXi, Xen Hypervisor) ...
- Which in turn is running on hardware
- Which is bootstrapped by a BIOS or UEFI
Honestly, if you're thinking about all the levels an application is built upon, it's a miracle things even work.
But they do. And they work pretty well and with reasonable performance. But you have to admit, there are a lot of layers between the hardware that's supposed to be powering your application and the application itself.
That's what unikernels as a concept try to solve: remove the bloat that separates hardware from application. Have "just enough" of the Operating System to run your code, nothing more.
There's a great paper that sums it up nicely:
The idea [of a unikernel] is that you look at cloud guests just like you would look at single-application hardware.
The Rise and Fall of the Operating System
A unikernel tries to remove some of the complexities that modern operating systems bring. Because they are "general purpose" operating systems (like just about any Linux or Windows distribution), they also come with drivers, packages, services, ... that may not apply to your application, but are generally considered OK to have on every OS install.
Even core modules in the Linux kernel don't apply to every installation. Things like USB drivers are useless in a virtualised "cloud" environment, but are still included in the kernel.
Compared to containers and virtualisation, the excellent road to unikernels presentation pictures it like this:
(Source: road to unikernels)
Unikernels have a couple of advantages over general purpose operating systems like Linux:
- Improved security: only the core of the OS is implemented, no video or USB drivers that aren't needed and could be a source of intrusion.
- Very small footprint: imagine being able to remove 95% of the kernel size, simply because your application doesn't need it.
- Specialised implementations: you know your application and you can tweak and run your kernel exactly the way you want it.
- Quick enough to be "just in time": a unikernel can be summoned live (similar to live-spawning Docker instances), with boot times of less than 1 second.
This very nature makes unikernels an excellent candidate for microservices.
Removing layers of complexity with unikernels
If what you're after is an application that runs with as little overhead as possible, you may want to consider writing it as a unikernel.
To do so, you use a library operating system. A library OS gives you the tools to create your own unikernel. The most noticeable ones are MirageOS (which actually coined the term "unikernel") and Rump Kernels. They are both essentially a set of standardised drivers and libraries so you don't have to reinvent things like a TCP stack, a persistent storage layer, ...
Unikernels are specialized OS kernels that are written in a high-level language and act as individual software components. A full application (or appliance) consists of a set of running unikernels working together as a distributed system.
MirageOS is based on the OCaml language and emits unikernels that run on the Xen hypervisor.
queue.acm.org: Unikernels: Rise of the Virtual Library Operating System
The most popular languages for writing a unikernel today aren't new ones: with the exception of Go and Rust, they've been around for more than 15 years.
In order to make the OS and the application run as smoothly as possible, these unikernel libraries need a kernel footprint that is as small as possible.
Today, that's possible because of virtualisation. Because a hypervisor like Xen or VMware can do the work of translating the different hardware models into a defined set of virtualised hardware, a unikernel can be optimised for just that specific set of virtual hardware.
Unikernels leverage the advantages of virtualisation to create an operating system that's as specialised and optimised as possible.
The result of an application written in OCaml with the MirageOS set of libraries to form a "unikernel" can be summarised like this:
The compiler can then output a full stand-alone kernel instead of just a Unix executable. These unikernels are single-purpose libOS VMs that perform only the task defined in their application source and configuration files, and they depend on the hypervisor to provide resource multiplexing and isolation.
queue.acm.org: Unikernels: Rise of the Virtual Library Operating System
The result is that you run a unikernel, a small but dedicated operating system, to run (parts of) your application. If an update to your application or configuration is needed, you compile a new version of your source code to a new unikernel and you deploy that new version. A new security release? Re-compile and deploy.
This makes coordination and orchestration of deploys harder, but with the benefit of running a more efficient application.
This essentially creates the concept of immutable servers: an application server no longer stores state and can be thrown away and rebuilt at your convenience.
One approach may be to start running unikernels in docker containers, but aren't we adding another layer of complexity that we should try to avoid? On the other hand, Docker adds ease of use and deployments to the mix that may make the trade-off worthwhile.
Who should run unikernels?
To be perfectly honest, the answer to me isn't exactly clear yet. I think it's fair to say that if you're currently deploying web applications built on WordPress, unikernels may be a bridge too far.
On the other hand, the benefits of unikernels are evident, but they require a completely different mindset for managing your infrastructure, a different skillset for creating these kinds of applications and kernels, and a very deep understanding of a concept that is mostly foreign to us now: immutable infrastructure.
Maybe in 2, 5 or 10 years we'll deploy unikernels like they're the new normal. Right now, I think they're for a very select niche of users looking for highly specialised and secure applications. For the most common use cases, a Virtual Machine (or, if you're already on the bandwagon, Docker containers) is probably what you'll be focussing on.
More reading material on Unikernels
If you're interested in the subject, here are some other links I can recommend you spend your time on:
- Unikernels: Rise of the Virtual Library Operating System
- The Rise and Fall of the Operating System (pdf)
- Presentation: The Road to Unikernels
- After Docker: Unikernels and Immutable Infrastructure
- Unikernels, meet Docker!
23 Nov 2015 11:26pm GMT
22 Nov 2015
The Linux Professional Institute, the BSD Certification Group and The Document Foundation will offer exam sessions at FOSDEM 2016. Interested candidates can now register for exams with the respective organisations. Further details are available on the certification page.
22 Nov 2015 3:00pm GMT
20 Nov 2015
I really loved reading Git from the bottom up when I was learning Git, which starts by showing how all the pieces fit together. Starting with the basics and gradually working towards the big picture is a great way to understand any complex piece of technology.
Recently I've been working with Kubernetes, a fantastic cluster manager. Like Git it is tremendously powerful, but the learning curve can be quite steep.
But there is hope. Kamal Marhubi has written a great series of articles that take the same approach: start from the basic building blocks and build up from there.
- What even is a kubelet?
- Kubernetes from the ground up: the API server
- Kubernetes from the ground up: the scheduler
20 Nov 2015 8:31pm GMT
Building Drupal 8 with all of you has been a wild ride. I thought it would be fun to take a little end-of-week look back at some of our community's biggest milestones through Twitter. If you can think of other important Tweets, please share them in the comments, and I'll update the post.
Feeling nostalgic? See every single version of Drupal running!
- Cheppers (@cheppers) November 19, 2015
Here is how we opened the development branch for Drupal 8: live at Drupalcon!
The secretsauce of #drupal isn't code or features or market share, important thought they are. The secret sauce is community.
- Sean Yo (@seanyo) March 10, 2011
- Jeff Geerling (@geerlingguy) March 10, 2011
Drupal 8's first beta showed the power of community
Drupal 8.0.0 beta 1 released! https://t.co/FwdmRYaZUx Ahh the power of COMMUNITY driven software! :-)
- Doug Vann (@dougvann) October 1, 2014
- Gábor Hojtsy (@gaborhojtsy) October 1, 2014
We had issues ... but the queue steadily declined
- xjm (@xjmdrupal) September 19, 2014
Drupal 8.0.x-rc1 release window is today. Good sign of real stability is major issue count going down for 6+ weeks. pic.twitter.com/5VnHGmL9zb
- catch (@catch56) October 7, 2015
We held sprints around the world: here are just a few
- xjm (@xjmdrupal) July 5, 2015
Working on D8 Criticals at the Ghent DA critical sprint, this is how the "My issues" page looks for me right now! pic.twitter.com/y5SnavVtND
- Sascha Grossenbacher (@berdir) December 13, 2014
- Cameron Eagans (@cweagans) March 23, 2012
And we created many game-changing features
- Wim Leers (@wimleers) April 8, 2015
And.... there we go! http://t.co/ed6XtMIs MOTHER BLEEPING VIEWS IN MOTHER BLEEPING CORE!
- webchick (@webchick) October 22, 2012
- Alex Pott (@alexpott) February 15, 2014
With Content + Config Translation in core D8 core is more translatable than D7 with all of contrib. #drupal
- Tobias Stöckler (@tstoeckler) November 18, 2013
Amazing to see Drupal 8's multilingual capabilities explained on the multilingual release page (for example Farsi): pic.twitter.com/9owVE3xABo
- Gábor Hojtsy (@gaborhojtsy) November 19, 2015
The founder of PHP said: Drupal 8 + PHP7 = a lot of happy people
- Rasmus Lerdorf (@rasmus) April 21, 2015
We reached the first release candidate and celebrated ... a little
- Whitney Hess (@whitneyhess) October 7, 2015
- Manuel Garcia (@drupalero) October 7, 2015
Kudos to the 3000+ contributors and to the entire Drupal community that helped make this happen. https://t.co/FtATRtSmCU
- Leslie Glynn (@leslieglynn) October 7, 2015
And, just yesterday, we painted the world blue and celebrated Drupal 8 ... a lot!
- Drupal (@drupal) November 10, 2015
- Drupal (@drupal) November 19, 2015
- Taco Potze˙ (@tacopotze) November 19, 2015
- Duo (@DuoConsulting) November 19, 2015
- Shakeel Tariq (@shakeeltariq) November 19, 2015
- Agustin Rojas Silva (@Aguztinrs) November 19, 2015
- HornCologne (@HornCologne) November 19, 2015
- webchick (@webchick) November 19, 2015
- Paul Johnson (@pdjohnson) November 19, 2015
- Dries Buytaert (@Dries) November 18, 2015
- Peter Decuyper (@sgrame) November 23, 2015
20 Nov 2015 3:24pm GMT
19 Nov 2015
Rather than explaining what it does, see for yourself:
(That's with 2 slow blocks that take 3 s to render. Only one is cacheable. Hence the page load takes ~6 s with cold caches, ~3 s with warm caches.)
Fastest Drupal yet!
- Fast anonymous user page loads: Page Cache - entire page is cached.
- Fast authenticated user page loads: BigPipe - majority of page including main content is cached (thanks to Dynamic Page Cache) and sent first, the rest is rendered later and streamed.
Go and enjoy the fastest Drupal yet!
P.S.: none of this would have been possible without my employer Acquia, who sponsored both my time and Fabian's to make BigPipe a reality.
We were able to release it today because the code was ready: it was developed over the course of several months in a Drupal core issue and "just" moved into a module, with every commit matching a comment in the issue, to make it easier to understand how the code base got to this point. ↩
And please report any issues you encounter at d.o/project/issues/big_pipe - depending on how well BigPipe works in the real world during Drupal 8.0.x, we should be able to get it into Drupal 8.1.x core! ↩
19 Nov 2015 2:55pm GMT
We just released Drupal 8.0.0! Today really marks the beginning of a new era for Drupal. Over the course of almost five years, we've brought the work of more than 3,000 contributors together to make something that is more flexible, more innovative, easier to use, and more scalable.
Drupal 8 has been a big transformation for our community. This particular reboot has taken one-third of Drupal's lifespan to complete. In the process we've learned that reinvention doesn't come easily or quickly. There are huge market forces happening around us, and we can't exactly look away. Mobile is moving our society to near-universal, global internet access. Most companies have begun to transform themselves digitally, leaving established business models and old business processes in the dust. Digital experience builders are turning to platforms that give them greater flexibility, better usability, better integrations, and faster innovation. The pace of change in the digital world has become dizzying. If we were to ignore these market forces, Drupal would be caught flat-footed and quickly become irrelevant.
But we didn't. I'm proud to see that we've responded to these market forces with Drupal 8, and delivered a robust, solid product that can be used to build next-generation websites, web applications and digital experiences. We've implemented a more modern development framework, reimagined the usability and authoring experience, and made technical improvements that will help us build for the multilingual, mobile and highly personalized experiences of the future. From how we model content and get content in and out of the system, to how we build and assemble experiences on various devices, to how we scale that to millions and millions of pageviews -- it all got much better with Drupal 8.
I'm personally incredibly proud of this release. Drupal 8 is the result of years of hard work and innovation by thousands of people, with lots of attention to detail at every level. Congratulations to everyone who stepped up to contribute; this was only possible thanks to your persistence and tireless hard work. It took a lot of learning, our best thinking and our best people to create Drupal 8, and I'm very, very proud of what we have accomplished together.
For 15 years, I have believed that Open Source offers significant advantages to proprietary solutions through superior innovation. Today, I believe that more than ever. Drupal 8 is another key milestone in helping us win and doing what is best for an open web. Of course, our job is not done but now is the time to have fun and celebrate this monumental milestone. Tonight, we'll be hosting more than 200 parties around the world! (It's also my 37th birthday today and the release of Drupal 8 along with all those parties is pretty much the best present ever!)
19 Nov 2015 2:54pm GMT
Later today, Drupal 8 will be released! At this time, good docs are of course crucial.
As the maintainer and de facto co-maintainer of several Drupal 8 core modules and subsystems, I spent the last several days making sure that the documentation is up-to-date for:
- the Text Editor module
- the CKEditor module
- the Quick Edit module
- the Filter module
- the Cache system
- the Render system (specifically the render caching part)
- the Asset Library system
The corresponding drupal.org handbook pages have either received minor updates, received complete overhauls, or were written from scratch.
P.S.: if you find anything unclear on those pages, ping me in #drupal-contribute - I want to make sure these docs are as clear and helpful as possible.
19 Nov 2015 11:04am GMT
Timmy Thomas asking the right question;
Watch this video on YouTube.
Possibly related twitterless twaddle:
19 Nov 2015 6:35am GMT
18 Nov 2015
A couple of weeks ago a Chief Digital Officer (CDO) of one of the largest mobile telecommunications companies in the world asked me how a large organization such as hers should think about organizing itself to maintain control over costs and risks while still giving their global organization the freedom to innovate.
When it comes to managing their websites and the digital customer experience, they have over 50 different platforms managed by local teams in over 50 countries around the world. Her goal is to improve operational efficiency, improve brand consistency, and set governance by standardizing on a central platform. The challenge is that they have no global IT organization that can force the different teams to re-platform.
When asked if I had any insights from my work with other large global organizations, it occurred to me the ideal model she is seeking is very aligned to how an Open Source project like Drupal is managed (a subject I have more than a passing interest in).
Teams in different countries around the world often demand full control and decision-making authority over their own web properties and reject centralization. How then might someone in a large organization get the rest of the organization to rally behind a single platform and encourage individual teams and departments to innovate and share their innovations within the organization?
In a large Open Source project such as Drupal, contributions to the project can come from anywhere. On the one extreme there are corporate sponsors who cover the cost of full-time contributors, and on the other extreme there are individuals making substantial contributions from dorm rooms, basements, and cabins in the woods. Open Source's contribution models are incredible at coordinating, accepting, evaluating, and tracking the contributions from a community of contributors distributed around the world. Can that model be applied in the enterprise so contributions can come from every team or individual in the organization?
Reams have been written on how to incubate innovation, how to source it from the wisdom of the crowd, ignite it in the proverbial garage, or buy it from some entrepreneurial upstart. For large organizations like the mobile telecommunications company this CDO works at, innovation is about building, like Open Source, communities of practice where a culture of test-and-learn is encouraged, and sharing -- the essence of Open Source -- is rewarded. Consider the library of modules available to extend Drupal: there can be several contributed solutions for a particular need -- say embedding a carousel of images or adding commerce capability to a site -- all developed independently by different developers, but all available to the community to test, evaluate and implement. It may seem redundant (some would argue inefficient) to have multiple options available for the same task, but the fact that there are multiple solutions means more choices for people building experiences. It's inconceivable for a proprietary software company to fund five different teams to develop five different modules for the same task. They develop one and that is what their customers get. In a global innovation network, teams have the freedom to experiment and share their solutions with their peers -- but only if there is a structure and culture in place that rewards sharing them through a single platform.
Centers of Excellence (CoEs) are familiar models to share expertise and build alignment around a digital strategy in a decentralized, global enterprise. Some form multiple CoEs around shared utility functions such as advanced data analytics, search engine optimization, social media monitoring, and content management. CoEs have also grown to include Communities of Practice (CoP) where various "communities" of people doing similar things for different products or functions in multiple departments or locations, coalesce to share insights and techniques. In companies I've worked with that have standardized on Drupal, I've seen internal Drupal Camps and hackathons pop up much as they do within the Drupal community at-large.
My advice to her? Loosen control without losing control.
That may sound like a "have-your-cake-and-eat-it-too" cliche, but the Open Source model grew around models of crowd-sourced collaboration, constant and transparent communications, meritocracies, and a governance model that provides the platform and structure to keep the community pointed at a common goal. What would my guidance be for getting started?
- Start with a small pilot. Build that pilot around a team that includes the different functions of local country teams and bring them together into one working model where they can evangelize their peers and become the nucleus of a future CoE "community". Usually, one or more champions will arise from that.
- Establish a collaboration model where innovations can be shared back to the rest of the organization, and where each innovation can be analyzed and discussed. This is the essence of Drupal's model with Drupal.org acting as the clearing house for contributions coming in from everywhere in the world.
Drupal and Open Source were created to address a need, and from their small beginnings grew something large and powerful. It is a model any business can replicate within their organization. So take a page out of the Open Source playbook: innovate, collaborate and share. Governance and innovation can coexist, but for that to happen, you have to give up a measure of control and start to think outside the box.
18 Nov 2015 1:17pm GMT
In case you missed the following tweet last week: Welcome #Activiti on https://t.co/xJZy2DixHH! cc @starbuxman @jbarrez pic.twitter.com/kDpx0kRB72 - Stéphane Nicoll (@snicoll) November 12, 2015. That's right! Activiti is now on start.spring.io! This is a huge deal - the Spring Initializr is the place where the journey for many Spring Boot projects starts, so being on […]
18 Nov 2015 1:07pm GMT