24 May 2016

Planet Grep

Frank Goossens: Autoptimize Power-Up sneak peek; Critical CSS

So although I am taking things rather slowly, I am in fact still working on Power-Ups for Autoptimize, focusing on the one most people have been asking for: critical CSS. The Critical CSS Power-Up will allow you to add "above the fold" CSS for specific pages or types of pages.

The first screenshot shows the main screen (as a tab in Autoptimize), listing the pages for which Critical CSS is to be applied:

The second screenshot shows the "edit" modal (which is almost identical when adding new rules), where you can choose which type of rule to create (based on a URL or on a WordPress Conditional Tag), enter the actual URL string or Conditional Tag, and paste the critical CSS into a textarea:

The next step will be to contact people who already expressed interest in beta-testing Power-Ups and get their feedback to improve things, hopefully making "Autoptimize Critical CSS" available somewhere in Q3 2016 (but no promises, of course).

24 May 2016 4:49pm GMT

Jeroen De Dauw: I T.A.K.E. 2016

Last week I attended the I T.A.K.E. unconference in Bucharest. This unconference is about software development, and has tracks such as code quality, DevOps, craftsmanship, microservices and leadership. In this post I share my overall impressions as well as the notes I took during the unconference.

Conference impression

This was my first attendance of I T.A.K.E., and I had not researched in much detail what the setup would look like, so I did not really know what to expect. What surprised me is that most of the unconference is actually pretty much a regular conference. For the majority of the two days, there were several tracks in parallel, with talks on various topics. The unconference part is limited to two hours each day, during which there is an open space.

Overall I enjoyed the conference and learned some interesting new things. Some talks were a bit underwhelming quality-wise, with speakers not properly using the microphone, code on slides in such quantities that no one could read it, and speakers looking at their slides the whole time rather than connecting with the audience. The parts I enjoyed most were the open space, conversations during coffee breaks, and a little pair programming. I liked I T.A.K.E. more than the recent CraftConf, though less than SoCraTes, which perhaps is a high standard to set.

Keynote: Scaling Agile

Day one started with a keynote by James Shore (who you might know from Let's Code: Test-Driven JavaScript) on how to apply agile methods when growing beyond a single team.

The first half of the talk focused on how to divide work amongst developers, be it between multiple teams, or within a team using "lanes". The main point that was made is that one wants to minimize dependencies between groups of developers (so people don't get blocked by things outside of their control), and therefore the split should happen along feature boundaries, not within features themselves. This of course builds on the premise that the whole team picks up a story, and not some subset or even individuals.

A point that caught my interest is that while collective ownership of code within teams is desired, sharing responsibility between teams is more problematic. The reason for this is that supposedly people will not clean up after themselves enough, as it's not their code, and will rather resort to finger-pointing at the other team(s). As James eloquently put it:

"Human nature is to form tribes and throw poo at each other." @jamesshore #itakeunconf

- Alastair Smith (@alastairs) May 19, 2016

My TL;DR for this talk is basically: low coupling, high cohesion 🙂

@jamesshore with some useful insight into scaling agile teams! #itakeunconf pic.twitter.com/4ApVdraBGZ

- Adrian Oprea (@opreaadrian) May 19, 2016

Mutation Testing to the rescue of your Tests

During this talk, one of the first things the speaker said is that the only goal of tests is to make sure there are no bugs in production. This very much goes against my point of view, as I think the primary value is that they allow refactoring with confidence, without which code quality suffers greatly. Additionally, tests provide plenty of other advantages, such as documenting what the system does, and forcing you to pay a minimal amount of attention to certain aspects of software design.

The speaker continued by asking who uses test coverage, and had a quote from Uncle Bob on needing 100% test coverage. After another few minutes of build-up to the inevitable denunciation of chasing test coverage as a good idea, I left to go find a more interesting talk.

Afterwards, during one of the coffee breaks, I talked with some people who had joined the talk 10 minutes or so after it started and had actually found it interesting. Apparently the speaker got to the actual topic of the talk: mutation testing, and presented it as a superior metric. I did not know about mutation testing before, and recommend you have a look at the Wikipedia page about it if you do not either. It automates an approximation of what you do manually when trying to determine which tests are valuable to write. As with code coverage, one should not focus on the metric itself though, and merely use it as the tool that it is.
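
To make the idea concrete, here is a minimal, hypothetical sketch (in Python, since the talk's language wasn't mentioned): a mutation testing tool makes small changes ("mutants") to the production code and reruns the tests; a mutant that no test kills points at a weak spot in the suite, regardless of line coverage.

# Production code.
def is_adult(age):
    return age >= 18

# A mutation tool might turn '>=' into '>' and rerun the test suite.
def is_adult_mutant(age):
    return age > 18

def test_is_adult():
    # 100% line coverage, yet this does NOT kill the mutant above:
    # the original and the mutant agree on both 30 and 5.
    assert is_adult(30) is True
    assert is_adult(5) is False

def test_is_adult_boundary():
    # Only a boundary check kills the '>' mutant, showing which test really adds value.
    assert is_adult(18) is True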

Raising The Bar

A talk on Software Craftsmanship that made me add The Coding Dojo Handbook to my to-read list.

Metrics For Good Developers

Open Space

The Open Space is a two-hour slot which puts the "un" in unconference. It starts with a marketplace, where people propose sessions on topics of their interest. These sessions are typically highly interactive, in the form of self-organized discussions.

Open space agenda is ready to be setup @itakeunconf #itakeunconf #openspace pic.twitter.com/8LfE7WpMf1

- Vlad Salagean (@vlad_salagean) May 19, 2016

Open space ready to take off! @itakeunconf #itakeunconf Open agenda is ready @claudia_rosu pic.twitter.com/yegKxNMpWr

- Vlad Salagean (@vlad_salagean) May 19, 2016

Open Space: Leadership

This session started by people writing down things they associate with good leadership, and then discussing those points.

Awesome input on leadership skills at the #itakeunconf open space. Thanks everyone! pic.twitter.com/xdmRPBLJ8U

- Lady Retweetsalot (@Singsalad) May 19, 2016

Two books were mentioned, the first being The Five Dysfunctions of a Team.

The second book was Leadership and the One Minute Manager: Increasing Effectiveness Through Situational Leadership.

Open Space: Maintenance work: bad and good

This session was about finding reasons to dislike doing maintenance work, and then finding out how to look at it more positively. My input here was that a lot of the negative things, such as having to deal with crufty legacy code, can also be positive, in that they provide technical challenges absent in greenfield projects, and that you can refactor a mess into something nice.

I did not stay in this session until the very end, and unfortunately cannot find any pictures of the whiteboard.

Open Space: Coaching dojo

I had misheard what this was about and thought the topic was "Coding Dojo". Instead we did a coaching exercise focused on asking open ended questions.

Are your Mocks Mocking at You?

This session was spread over two time slots, and I only attended the first part, as during the second one I had some pair programming scheduled. One of the first things covered in this talk was an explanation of the different types of Test Doubles, much like in my recent post 5 ways to write better mocks. The speakers also covered the differences between inside-out and outside-in TDD, and ended (the first time slot) with JavaScript peculiarities.

Never Develop Alone: always with a partner

In this talk, the speaker, who has been doing full-time pair programming for several years, outlined the primary benefits provided by, and challenges encountered during, pair programming.

pair programming is not a "go faster" strategy, it is a "waste less" strategy (which often results in going faster)

- Kent Beck (@KentBeck) February 12, 2015

Benefits: more focus / less distractions, more confidence, rapid feedback, knowledge sharing, fun, helps on-boarding, continuous improvement, less blaming.

Challenges: synchronization / communication, keyboard hogging

Do:

Live coding: Easier To Change Code

In this session the presenter walked us through some typical legacy code, and then demonstrated how one can start refactoring (relatively) safely. The code made me think of the Gilded Rose kata, though it was more elaborate/interesting. The presenter started by adding a safety net in the form of golden master tests and then proceeded with incremental refactoring.

Is management dead?
(Image: WMDE management)

Uncle Abraham certainly is most of the time! (Though when he is not, he approves of the below list.)

(Image: delegation levels)

Visualizing codebases

This talk was about how to extract and visualize metrics from codebases. I was hoping it would include various code quality related metrics, but alas, the talk only included file level details and simple line counts.

24 May 2016 1:09pm GMT

22 May 2016

Philip Van Hoof: Geef vorm (Give shape)

We are good. We show it by combining our respect for privacy with security. Knowledge is indispensable for that. I argue for investing in technical people who master both.

Our government should not pour all its millions into fighting computer intrusion; it should also invest in better software.

Belgian companies sometimes make software. They should be encouraged, and steered, to do the right thing.

I would like to see our centre for cybersecurity encourage companies to build good, and therefore secure, software. We should also invest in repression. But we should invest just as much in high quality.

We sometimes think that, ah, we are too small. But that is not true. If we decide that here, in Belgium, software has to be good, that creates a market that will adapt itself to what we want. The key is to be steadfast.

When we say that a - b is welcome here, or not, we give shape to technology.

I expect nothing less from my country. Give shape.

22 May 2016 5:17pm GMT

21 May 2016

Dieter Adriaenssens: Some guidelines for writing better and safer code

Recently, I came across some code of a web application that, on brief inspection, was vulnerable to XSS and SQL injection attacks: the SQL queries and the HTML output were not properly escaped, and the input variables were not sanitized. After a bit more reviewing I made a list of measures and notified the developer, who quickly fixed the issues.

I was a bit surprised to come across code that was this insecure, yet took the author only a few hours to drastically improve with a few simple changes. I started wondering why the code wasn't of better quality in the first place. Did the developer not know about vulnerabilities like SQL injection and how to prevent them? Was it time pressure that kept him from writing safer code?
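
The post doesn't show the offending code, but to make the fix concrete, here is a minimal, hypothetical sketch (in Python; the original application's language isn't named) of the two changes that address this class of bug: a parameterized query instead of string concatenation, and escaping data before it ends up in HTML.

import html
import sqlite3

def user_bio_html(conn: sqlite3.Connection, name: str) -> str:
    # Bad: "SELECT bio FROM users WHERE name = '" + name + "'" lets crafted
    # input rewrite the query (SQL injection).
    # Good: a parameterized query keeps the input as data, never as SQL.
    row = conn.execute("SELECT bio FROM users WHERE name = ?", (name,)).fetchone()
    bio = row[0] if row else ""
    # Good: escape before embedding in HTML, so a stored <script> payload
    # is rendered as text instead of being executed (XSS).
    return "<p>" + html.escape(bio) + "</p>"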

Anyway, there are a few guidelines to write better and safer code.

Educate yourself

As a developer you should familiarize yourself with possible vulnerabilities and how to avoid them. There are plenty of books and online tutorials covering this. A good starting point is the Top 25 Most Dangerous Software Errors list. Reading security-related blogs and going to conferences (or watching talks online) is useful as well.

Use frameworks and libraries

Just about every language has a web application framework (Drupal, Symfony (PHP), Spring (Java), Django (Python), ...) with tools and libraries for creating forms, sanitizing input variables, properly escaping HTML output, handling cookies, checking authorization, managing users and privileges, database-object abstraction (so you don't have to write your own SQL queries), and much more.
Those frameworks and libraries are used by a lot of applications and developers, so they are tested much more thoroughly than code you write yourself, and bugs are found more quickly.

It is also important to regularly update the libraries and frameworks you use, so you get the latest bug and vulnerability fixes.

Code review

Two pairs of eyes see more than one. Have your code reviewed by a coworker and use automated tools to check your code for vulnerabilities. Most IDEs have code checking tools, or you can run them in a Continuous Integration (CI) environment like Jenkins, Travis CI, Circle CI, ... to check your code during every build.
Plenty of online code checking services exist that can check your code every time you push to your version control system.
There is no silver bullet here, but a combination of manual code review and automated checks will help you spot vulnerabilities sooner.

Test your code

Code reviewing tools can't spot every bug, so testing your code is important as well. You will need automated unit tests, integration tests, ... so you can test your code during every build in your CI environment.
Writing good tests is an art and takes time, but more tests mean fewer bugs remaining in your code.

Coding style

While not directly a measure against vulnerabilities, using a coding style that is common for the programming language you are using makes your code more readable for you, the reviewer and future maintainers of your code. Better readability makes it easier to spot bugs, maintain the code and avoid introducing new ones.


I guess there are many more ways to improve code quality and reduce vulnerabilities. Feel free to leave a comment with your ideas.


21 May 2016 3:01pm GMT

20 May 2016

Frank Goossens: Music from Our Tube; Jameszoo ft. Arthur Verocai doing weird electro-jazz

But don't you worry, there's no obnoxious 4/4 beat to be heard. And how electro can a violin get?

(Embedded YouTube video)

20 May 2016 7:21pm GMT

19 May 2016

Philip Van Hoof: QML coding conventions checker that uses QML parser’s own abstract syntax tree

My colleague Henk Van Der Laak made an interesting tool that checks your code against the QML coding conventions. It uses the abstract syntax tree of Qt 5.6's internal QML parser and a visitor design.

It has a command line, but being developers ourselves we want an API too of course. Then we can integrate it in our development environments without having to use popen!

So this is how to use that API:

// Parse the code
QQmlJS::Engine engine;
QQmlJS::Lexer lexer(&engine);
QQmlJS::Parser parser(&engine);

QFileInfo info(a_filename);
bool isJavaScript = info.suffix().toLower() == QLatin1String("js");
lexer.setCode(code,  1, !isJavaScript);
bool success = isJavaScript ? parser.parseProgram() : parser.parse();
if (success) {
    // Check the code
    QQmlJS::AST::UiProgram *program = parser.ast();
    CheckingVisitor checkingVisitor(a_filename);
    program->accept(&checkingVisitor);
    foreach (const QString &warning, checkingVisitor.getWarnings()) {
        qWarning() << qPrintable(warning);
    }
}

19 May 2016 1:31pm GMT

18 May 2016

Dries Buytaert: Megan Sanicki to become Executive Director at the Drupal Association

This is a time of transition for the Drupal Association. As you might have read on the Drupal Association blog, Holly Ross, our Executive Director, is moving on. Megan Sanicki, who has been with the Drupal Association for almost 6 years, and was working alongside Holly as the Drupal Association's COO, will take over Holly's role as the Executive Director.

Open source stewardship is not easy, but in the 3 years Holly was leading the Drupal Association, she led with passion, determination and transparency. She operationalized the Drupal Association and built a team that truly embraces its mission to serve the community, growing that team by over 50% over the three years of her tenure. She established a relationship with the community that wasn't there before, allowing the Drupal Association to help in new ways, like supporting the Drupal 8 launch, providing test infrastructure, implementing the Drupal contribution credit system, and more. Holly also matured our DrupalCon, expanding its reach to more users with conferences in Latin America and India. She also executed the Drupal 8 Accelerate Fund, which allowed direct funding of key contributors to help lead Drupal 8 to a successful release.

Holly did a lot for Drupal. She touched all of us in the Drupal community. She helped us become better and work closer together. It is sad to see her leave, but I'm confident she'll find success in future endeavors. Thanks, Holly!

Megan, the Drupal Association staff and the Board of Directors are committed to supporting the Drupal project. In this time of transition, we are focused on the work that the Drupal Association must do, and on how to do that in a sustainable way so we can support the project for many years to come.

18 May 2016 9:03pm GMT

Frank Goossens: Quick KeyCDN’s Cache Enabler test

Cache Enabler - WordPress Cache is a new page caching kid on the WordPress plugin block, by the Switzerland-based KeyCDN. It's based in part on Cachify (which has a strong user base in Germany) but seems less complex/flexible. What makes it unique, though, is that it allows one to serve pages with WebP images instead of JPEGs to browsers that support WebP (Safari, MS IE/Edge and Firefox currently do not). To be able to do that, you'll need to also install Optimus, an image optimization plugin that plugs into a freemium service by KeyCDN (you'll need a premium account to convert to WebP though).

I did some tests with Cache Enabler and it works great together with Autoptimize out of the box, especially since the latest release (1.1.0), which also hooks into AO's autoptimize_action_cachepurged action to clear Cache Enabler's cache when AO's cache gets purged (to avoid having cached pages that refer to deleted autoptimized CSS/JS files).
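
For what it's worth, hooking into that action from another (page cache) plugin is a one-liner; below is a minimal, hypothetical WordPress sketch -- only the autoptimize_action_cachepurged hook name comes from this post, the function names are placeholders:

<?php
// Hypothetical example: flush a page cache whenever Autoptimize purges its cache,
// so no cached page keeps referencing deleted autoptimized CSS/JS files.
add_action( 'autoptimize_action_cachepurged', 'example_flush_page_cache_on_ao_purge' );

function example_flush_page_cache_on_ao_purge() {
    if ( function_exists( 'example_page_cache_clear_all' ) ) {
        example_page_cache_clear_all(); // placeholder for the page cache's own flush routine
    }
}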

Just not sure I agree with this text on the plugin's settings page:

Avoid […] concatenation of your assets to benefit from parallelism of HTTP/2.

because, based on previous tests by people smarter than me, concatenation of assets can still make (a lot of) sense, even on HTTP/2 :-)

18 May 2016 11:19am GMT

Philip Van Hoof: Item isChild of another Item in QML

Damned, QML is inconsistent! Things have a content, data or children. And apparently they can all mean the same thing. So how do we know if something is a child of something else?

After a failed stackoverflow search I gave up on copy-paste coding and invented the damn thing myself.

function isChild( a_child, a_parent ) {
        if ( a_parent === null ) {
                return false
        }

        var tmp = ( a_parent.hasOwnProperty("content") ? a_parent.content
                : ( a_parent.hasOwnProperty("children") ? a_parent.children : a_parent.data ) )

        if ( tmp === null || tmp === undefined ) {
                return false
        }

        for (var i = 0; i < tmp.length; ++i) {

                if ( tmp[i] === a_child ) {
                        return true
                } else {
                        if ( isChild ( a_child, tmp[i] ) ) {
                                return true
                        }
                }
        }
        return false
}

18 May 2016 7:30am GMT

17 May 2016

Mattias Geniar: The async Puppet pattern

I'm pretty sure this isn't tied to Puppet and is probably widely used by everyone else, but it only occurred to me recently what the structural benefits of this pattern are.

Async Puppet: stop fixing things in one Puppet run

This has always been a bit of a debated topic, both for me internally as well as in the Puppet community at large: should a Puppet run be 100% complete after the first run?

I'm starting to back away from that idea, having spent countless hours optimising my Puppet code for the "one-puppet-run-to-rule-them-all" scenario. It's much easier to gradually build your Puppet logic in steps, each step activating once the previous one has reached its final state.

Where I mostly see this pattern shine is the ability to automatically add monitoring from within your Puppet code. There's support for Nagios out of the box, and I contributed to the zabbixapi ruby gem to facilitate managing Zabbix hosts and templates from within Puppet.

Monitoring should only be added to a server when there's something to monitor. And there's only something to monitor once Puppet has done its thing and caused state on the server to be as expected.

Custom facts for async behaviour

So here's a pattern I particularly like. There are many alternatives to this one, but it's simple, straightforward and super easy to understand -- even for beginning Puppeteers.

  1. A first Puppet run starts and installs Apache with all its vhosts
  2. The second Puppet run starts and gets a fact called "apache_vhost_count", a simple integer that counts the number of vhosts configured
  3. When that fact is a positive integer (aka: there are vhosts configured), monitoring is added

This pattern takes 2 Puppet runs to be completely done: the first gets everything up-and-running, the second detects that there are things up-and-running and adds the monitoring.

Monitoring wrappers around existing Puppet modules

You've probably done this: you get a cool module from Forge (Apache, MySQL, Redis, ...), you implement it and want to add your monitoring to it. But how? It's not cool to hack away in the modules themselves; those come via r10k or puppet-librarian.

Here's my take on it:

  1. Create a new module, call it "monitoring"
  2. Add custom facts in there, called has_mysql, has_apache, ... for all the services you want
  3. If you want to go further, create facts like apache_vhost_count, mysql_databases_count, ... to count the specific instances of each service, to determine whether it's being used or not.
  4. Use those facts to determine whether to add monitoring or not:
    if $::has_apache and ($::apache_vhost_count > 0) {
      @@zabbix_template_link { "zbx_application_apache_${::fqdn}":
        ensure   => present,
        template => 'Application - PHP-FPM',
        host     => $::fqdn,
        require  => Zabbix_host [ $::fqdn ],
      }
    }
        
    

Is this perfect? Far from it. But it's pragmatic and it gets the job done.

The facts are easy to write and understand, too.

Facter.add(:apache_vhost_count) do
  confine :kernel => :linux
  setcode do
    if File.exists? "/etc/httpd/conf.d/"
      Facter::Util::Resolution.exec('ls -l /etc/httpd/conf.d | grep \'vhost-\' | wc -l')
    else
      nil
    end
  end
end

It's mostly bash (which most sysadmins understand) -- and very little Ruby (which few sysadmins understand).

The biggest benefit I see to it is that whoever implements the modules and creates the server manifests doesn't have to toggle a parameter called enable_monitoring (been there, done that) to decide whether or not that particular service should be monitored. Puppet can now figure that out on its own.

Detecting Puppet-managed services

Because some services are installed because of dependencies, the custom facts need to be clever enough to understand when they're being managed by Puppet. For instance, when you install the package "httpd-tools" because it contains the useful htpasswd tool, most package managers will automatically install the "httpd" (Apache) package, too.

Having that package present shouldn't trigger your custom facts to automatically enable monitoring, it should probably only do that when it's being managed by Puppet.

A very simple workaround (up for debate whether it's a good one) is to have each Puppet module write a simple marker file to /etc/puppet-managed.

$ ls /etc/puppet-managed
apache mysql php postfix ...
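
How each module drops its marker is up to you; a minimal sketch in Puppet (resource names and content are illustrative, not taken from the post) could look like this inside, say, the module or profile that manages Apache:

# Hypothetical marker, declared by the module/profile that manages Apache.
file { '/etc/puppet-managed/apache':
  ensure  => file,
  content => "managed by puppet\n",
  require => File['/etc/puppet-managed'],
}

The File['/etc/puppet-managed'] directory resource itself would live in one shared module (the "monitoring" module is an obvious candidate) so it isn't declared more than once.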

Now you can extend your custom facts with the presence of that file to determine if A) a service is Puppet managed and B) if monitoring should be added.

Facter.add(:has_apache) do
  confine :kernel => :linux
  setcode do
    if File.exists? "/sbin/httpd"
      if File.exists? "/etc/puppet-managed/apache"
        # Apache installed and Puppet managed
        true
      else
        # Apache is installed, but isn't Puppet managed
        nil
      end
    else
      # Apache isn't installed
      nil
    end
  end
end

(example explicitly split up in order to add comments)

You may also be tempted to use the defined() function (see the manual) to check whether Apache has been defined in your Puppet code and then add monitoring. However, that is dependent on the order in which resources are evaluated.

Your code may look like this:

if defined(Service['httpd']) {
   # Apache is managed by Puppet, add monitoring ? 
}

Puppet's manual explains the big caveat though:

Puppet depends on the configuration's evaluation order when checking whether a resource is declared.

In other words: if your monitoring code is evaluated before your Apache code, that defined() will always return false.

Working with facter circumvents this.

Again, this pattern isn't perfect, but it allows for a clean separation of logic and -- if your team grows -- an easier way to split responsibilities, with the monitoring team and the implementation team each having their own modules and their own responsibilities.

Related posts:

  1. Setting custom puppet facts from within your Vagrantfile You may want to set custom puppet facts in your...
  2. Puppet: Error: Could not retrieve catalog from remote server: Error 400 on SERVER: stack level too deep on node something.pp As a Puppet user, you can run into the following...
  3. Puppet: Error 400 on SERVER ArgumentError: malformed format string - %S at … Here's an error to screw with your debugging skills. ~$...

17 May 2016 8:07pm GMT

Frank Goossens: Goosebumps from Our Tube: Syreeta & Stevie Wonder Leaving Home in 1972

Syreeta (Wright) was once married to Stevie Wonder and in 1972 they recorded this magnificent cover of the Beatles' "She's leaving home".

(Embedded YouTube video)

17 May 2016 8:42am GMT

16 May 2016

Dries Buytaert: Cross-channel user experiences with Drupal

Last year around this time, I wrote that The Big Reverse of Web would force a major re-architecture of the web to bring the right information, to the right person, at the right time, in the right context. I believe that conversational interfaces like Amazon Echo are further proof that the big reverse is happening.

New user experience and distribution platforms only come along every 5-10 years, and when they do, they cause massive shifts in the web's underlying technology. The last big one was mobile, and the web industry adapted. Conversational interfaces could be the next user experience and distribution platform - just look at Amazon Echo (aka Alexa), Facebook's messenger or Microsoft's Conversation-as-a-Platform.

Today, hardly anyone questions whether to build a mobile-optimized website. A decade from now, we might be saying the same thing about optimizing digital experiences for voice or chat commands. The convenience of a customer experience will be a critical key differentiator. As a result, no one will think twice about optimizing their websites for multiple interaction patterns, including conversational interfaces like voice and chat. Anyone will be able to deliver a continuous user experience across multiple channels, devices and interaction patterns. In some of these cross-channel experiences, users will never even look at a website. Conversational interfaces let users disintermediate the website by asking anything and getting instant, often personalized, results.

To prototype this future, my team at Acquia built a fully functional demo based on Drupal 8 and recorded a video of it. In the demo video below, we show a sample supermarket chain called Gourmet Market. Gourmet Market wants their customers to not only shop online using their website, but also use Echo or push notifications to do business with them.

We built an Alexa integration module to connect Alexa to the Gourmet Market site and to answer questions about sale items. For example, you can speak the command: "Alexa, ask Gourmet Market what fruits are on sale today". From there, Alexa would make a call to the Gourmet Market website, find what is on sale in the specified category and pull only the information needed to answer your request.

On the website's side, a store manager can tag certain items as "on sale", and Alexa's voice responses will automatically and instantly reflect those changes. The marketing manager needs no expertise in programming -- Alexa composes its response by talking to Drupal 8 using web service APIs.

The demo video also shows how a site could deliver smart notifications. If you ask for an item that is not on sale, the Gourmet Market site can automatically notify you via text once the store manager tags it as "On Sale".

From a technical point of view, we've had to teach Drupal how to respond to a voice command, otherwise known as a "Skill", coming into Alexa. Alexa Skills are fairly straightforward to create. First, you specify a list of "Intents", which are basically the commands you want users to run in a way very similar to Drupal's routes. From there, you specify a list of "Utterances", or sentences you want Echo to react to that map to the Intents. In the example of Gourmet Market above, the Intents would have a command called GetSaleItems. Once the command is executed, your Drupal site will receive a webhook callback on /alexa/callback with a payload of the command and any arguments. The Alexa module for Drupal 8 will validate that the request really came from Alexa, and fire a Drupal Event that allows any Drupal module to respond.
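
For readers who haven't built a Skill before: in the 2016-era Alexa Skills Kit, the Intents are declared as a small JSON "intent schema" and the Utterances as plain text lines that map onto them. A rough, hypothetical sketch for the example above (the Category slot and its custom type are illustrative, not taken from the demo):

{
  "intents": [
    {
      "intent": "GetSaleItems",
      "slots": [
        { "name": "Category", "type": "LIST_OF_CATEGORIES" }
      ]
    }
  ]
}

GetSaleItems what {Category} is on sale today
GetSaleItems what {Category} are on sale today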

It's exciting to think about how new user experiences and distribution platforms will change the way we build the web in the future. As I referenced in my DrupalCon New Orleans keynote, the Drupal community needs to put some thought into how to design and build multichannel customer experiences. Voice assistants, chatbots and notifications are just one part of the greater equation. If you have any further thoughts on this topic, please share them in the comments.

16 May 2016 2:55pm GMT

Mattias Geniar: Redis: OOM command not allowed when used memory > ‘maxmemory’

If you're using Redis, you may find your application logs starting to show the following error message:

$ tail -f error.log
OOM command not allowed when used memory > 'maxmemory'

This can happen every time a WRITE operation is sent to Redis to store new data.

What does it mean?

The OOM command not allowed when used memory > 'maxmemory' error means that Redis was configured with a memory limit and that particular limit was reached. In other words: its memory is full, it can't store any new data.

You can see the memory values by using the redis CLI tool.

$ redis-cli -p 6903

127.0.0.1:6903> info memory
# Memory
used_memory:3221293632
used_memory_human:3.00G
used_memory_rss:3244535808
used_memory_peak:3222595224

If you run a Redis instance with a password on it, change the redis-cli command to this:

$ redis-cli -p 6903 -a your_secret_pass

The info memory command remains the same.

The example above shows a Redis instance configured to run with a maximum of 3GB of memory and consuming all of it (=used_memory counter).

Fixing the OOM command problem

There are 3 potential fixes.

1. Increase Redis memory

Probably the easiest to do, but it has its limits. Find the Redis config (usually somewhere in /etc/redis/*) and increase the memory limit.

 $ vim /etc/redis/6903.conf
maxmemory 3gb

Somewhere in that config file, you'll find the maxmemory parameter. Modify it to your needs and restart the Redis instance afterwards.
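
If a restart is not convenient, the limit can also be changed at runtime with CONFIG SET (the 4gb value below is just an example); a runtime change is not written back to the config file, so keep the two in sync:

$ redis-cli -p 6903 CONFIG SET maxmemory 4gb
OK
$ redis-cli -p 6903 CONFIG GET maxmemory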

2. Change the cache invalidation settings

Redis is throwing the error because it can't store new items in memory. By default, the "cache invalidation" setting is set pretty conservatively, to volatile-lru. This means it'll remove a key with an expire set using an LRU algorithm.

This can cause items to be kept in the queue even when new items try to be stored. In other words, if your Redis instance is full, it won't just throw away the oldest items (like a Memcached would).

You can change this to a couple of alternatives:

# MAXMEMORY POLICY: how Redis will select what to remove when maxmemory
# is reached? You can select among five behavior:
#
# volatile-lru -> remove the key with an expire set using an LRU algorithm
# allkeys-lru -> remove any key accordingly to the LRU algorithm
# volatile-random -> remove a random key with an expire set
# allkeys-random -> remove a random key, any key
# volatile-ttl -> remove the key with the nearest expire time (minor TTL)
# noeviction -> don't expire at all, just return an error on write operations
#
# Note: with all the kind of policies, Redis will return an error on write
#       operations, when there are not suitable keys for eviction.
#
#       At the date of writing this commands are: set setnx setex append
#       incr decr rpush lpush rpushx lpushx linsert lset rpoplpush sadd
#       sinter sinterstore sunion sunionstore sdiff sdiffstore zadd zincrby
#       zunionstore zinterstore hset hsetnx hmset hincrby incrby decrby
#       getset mset msetnx exec sort

In that very same Redis config file (somewhere in /etc/redis/*), there's also an option called maxmemory-policy.

The default is:

$ grep maxmemory-policy /etc/redis/*
maxmemory-policy volatile-lru

If you don't really care about the data in memory, you can change it to something more aggressive, like allkeys-lru.

$ vim /etc/redis/6903.conf
maxmemory-policy allkeys-lru

Afterwards, restart your Redis again.

Keep in mind though that this can mean Redis removes items from its memory that haven't been persisted to disk just yet. This is configured with the save parameter, so make sure you look at those values too to determine a correct "max memory" policy. Here are the defaults:

#   In the example below the behaviour will be to save:
#   after 900 sec (15 min) if at least 1 key changed
#   after 300 sec (5 min) if at least 10 keys changed
#   after 60 sec if at least 10000 keys changed
#
#   Note: you can disable saving at all commenting all the save lines.

save 900 1
save 300 10
save 60 10000

With the above in mind, setting a different maxmemory-policy could mean data loss in your Redis instance!

3. Store less data in Redis

I know, stupid 'solution', right? But ask yourself this: is everything you're storing in Redis really needed? Or are you using Redis as a caching solution and just storing too much data in it?

If your SQL queries return 10 columns but realistically you only need 3 of those on a regular basis, just store those 3 values -- not all 10.

Related posts:

  1. supervisor job: spawnerr: can't find command 'something' I love supervisor, an easy to use process controller on...
  2. PHP's Memcached sessions: Failed to write session data (memcached) for Magento A couple of days ago I was debugging a particularly...
  3. Reload Varnish VCL without losing cache data You can reload the Varnish VCL configuration without actually restarting...

16 May 2016 6:48am GMT

14 May 2016

Dries Buytaert: The people of New Orleans

I love street photography. Walking and shooting. Walking, talking and shooting. Slightly pushing me out of my comfort zone looking for that one great photo.

(Photos: street life, sunrise)

Street photography is all fun and games until someone pulls out a handgun. The anarchy sign in the background makes these shots complete.

(Photos: gun)

For more photos, check out the entire album.

14 May 2016 4:15pm GMT

Mattias Geniar: The day Google Chrome disables HTTP/2 for nearly everyone: May 31st, 2016

If you've been reading this blog for a while (or have been reading my rants on Twitter), you'll probably know this was coming already. If you haven't, here's the short version.

The Chromium project (whose end result is the Chrome browser) is switching the negotiation protocol by which it decides whether to use HTTP/1.1 or the newer HTTP/2 on May 31st, 2016 (originally May 15th, 2016).

Update: this change is coming to Chrome 51 (thanks Eric), originally scheduled for May 15th, but could be a few days later. The newly updated development calendar puts this on May 31st.

That in and of itself isn't a really big deal, but the consequences unfortunately are. Previously (as in: before May 31st, 2016), a protocol named NPN was used -- Next Protocol Negotiation. This wasn't a very efficient protocol, but it got the job done.

There's a newer negotiation protocol in town called ALPN -- Application-Layer Protocol Negotiation. This is a more efficient version with more future-oriented features. It's a good decision to switch from NPN to ALPN, there are far more benefits than there are downsides.

However, on the server side -- the side which runs the webservers that in turn run HTTP/2 -- there's a rather practical issue: to support ALPN, you need at least OpenSSL 1.0.2.

So what? You're a sysadmin, upgrade your shit already!

I know. It sounds easy, right? Well, it isn't. Just for comparison, here's the current (May 2016) state of OpenSSL on Linux.

Operating System     OpenSSL version
CentOS 5             0.9.8e
CentOS 6             1.0.1e
CentOS 7             1.0.1e
Ubuntu 14.04 LTS     1.0.1f
Ubuntu 16.04 LTS     1.0.2g
Debian 7 (Wheezy)    1.0.1e
Debian 8 (Jessie)    1.0.1k

As you can tell from the list, there's a problem: out of the box, only the latest Ubuntu 16.04 LTS (out for less than a month) supports OpenSSL 1.0.2.

Upgrading OpenSSL packages isn't a trivial task, either. Since just about every other service links against the OpenSSL libraries, they too should be re-packaged (and tested!) to work against the latest OpenSSL release.

On the other hand, it's just a matter of time before distributions have to upgrade as support for OpenSSL 1.0.1 ends soon.

Support for version 1.0.1 will cease on 2016-12-31. No further releases of 1.0.1 will be made after that date. Security fixes only will be applied to 1.0.1 until then.

OpenSSL Release Strategy

To give you an idea of the scope of such an operation, on a typical LAMP server (the one powering the blogpost you're now reading), the following services all make use of the OpenSSL libraries.

$ lsof | grep libssl | awk '{print $1}' | sort | uniq
anvil
fail2ban
gdbus
gmain
httpd
postfix
mysqld
NetworkManager
nginx
php-fpm
puppet
sshd
sudo
tuned
zabbix_agent

A proper OpenSSL upgrade would cause all of those packages to be rebuilt too. That's a hassle, to say the least. And truth be told, it probably isn't just repackaging, but potentially changing the code of each application to be compatible with the newer or changed APIs in OpenSSL 1.0.2.

Right now, the simplest (if not exactly the most proper) way to run HTTP/2 on a modern server (that isn't Ubuntu 16.04 LTS) would be to run a Docker container based on Ubuntu 16.04 and run your webserver inside of it.

I don't blame Google for switching protocols and evolving the web, but I'm sad to see that as a result of it, a very large portion of Google Chrome users will have to live without HTTP/2, once again.

Before the switch, a Google Chrome user would see this in the network inspector:

(Screenshot: network inspector showing HTTP/2 as the protocol)

After May 31st, it'll be old-skool HTTP/1.1.

(Screenshot: network inspector showing HTTP/1.1 as the protocol)

It used to be that enabling HTTP/2 in Nginx was a very simple operation, but in order to support Chrome it'll be a bit more complicated from now on.
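
For reference, "very simple" boils down to one extra keyword on the listen directive, as in the sketch below (placeholder hostname and certificate paths) -- provided the Nginx binary (1.9.5 or later) is built against OpenSSL 1.0.2 so ALPN actually works:

server {
    listen 443 ssl http2;                            # 'http2' is the only HTTP/2-specific bit
    server_name example.com;                         # placeholder

    ssl_certificate     /etc/ssl/example.com.crt;    # placeholder paths
    ssl_certificate_key /etc/ssl/example.com.key;
}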

This change also didn't come out of the blue: Chrome had disabled NPN back in 2015 but quickly undid that change when the impact became clear. We have known since the end of 2015 that this change was coming -- we were given 6 months to get support for ALPN going, but given the current state of OpenSSL packages that was too little time.

If you want to keep track of the state of Red Hat (Fedora, RHEL & CentOS) upgrades, here's some further reading: RFE: Need OpenSSL 1.0.2.

As I'm mostly a CentOS user, I'm unaware of the state of Debian or Ubuntu OpenSSL packages at this time.

Related posts:

  1. Chrome drops NPN support for HTTP/2, ALPN only Update 23/11/2015: Chrome reverted the change, NPN is allowed again!...
  2. View the HTTP/SPDY/HTTP2 Protocol in Google Chrome A cool little improvement just landed in Chrome Canary (the...
  3. Enable HTTP/2 support in Chrome I actually thought HTTP/2 was enabled by default in Chrome,...

14 May 2016 12:28pm GMT

13 May 2016

Philip Van Hoof: Composition and aggregation to choose memory types in Qt

As we all know, Qt has types like QPointer and QSharedPointer, and we know about its object trees. So when do we use what?

Let's first go back to school, and remember the difference between composition and aggregation. Most of you probably remember drawings like this?

It taught us when to use composition, and when to use aggregation:

The model in this picture tells us, for example, that a car's passenger must have ten fingers.

But what does this have to do with QPointer, QSharedPointer and Qt's object trees?

The first situation is a shared composition. Both Owner1 and Owner2 can't survive without Shared (composition, filled diamonds). For this situation you would typically use a QSharedPointer<Shared> in both Owner1 and Owner2:

If there is no other owner, then it's probably better to just use Qt's object trees and setParent() instead. Note that for example QML's GC is not very well aware of QSharedPointer, but does seem to understand Qt's object trees.

The second situation is shared users. User1 and User2 can stay alive when Shared goes away (aggregation, empty diamonds). In this situation you would typically use a QPointer<Shared> at User1 and at User2, because you want to be aware when Shared goes away: QPointer<Shared>'s isNull() will become true after that has happened.

The third situation is a mixed one. In this case you could use a QSharedPointer<Shared> or a parented raw QObject pointer (using setParent()) at Owner, but a QPointer<Shared> at User. When Owner goes away and its destructor (due to the parenting) deletes Shared, User can detect that using the previously mentioned isNull() check.
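
A minimal sketch of how the two pointer types behave (the Shared class and the scenario are made up for illustration):

#include <QDebug>
#include <QObject>
#include <QPointer>
#include <QSharedPointer>

class Shared : public QObject { };  // illustrative class

int main()
{
    // Composition: Shared stays alive as long as at least one owner holds it.
    QSharedPointer<Shared> owner1(new Shared);
    QSharedPointer<Shared> owner2 = owner1;   // second owner of the same instance

    // Aggregation: a user merely observes and must cope with Shared going away.
    QPointer<Shared> user(owner1.data());

    owner1.clear();               // one owner gone, Shared still alive
    qDebug() << user.isNull();    // false

    owner2.clear();               // last owner gone, Shared gets deleted
    qDebug() << user.isNull();    // true: the user can detect this

    return 0;
}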

Finally, if you have a typical object tree, then just use QObject's parent/child infrastructure for it.

13 May 2016 4:53pm GMT