17 Aug 2018

feedPlanet Grep

Dries Buytaert: Acquia recognized among Inc 5000 fastest growing companies

Acquia was once again included in the Inc 5000 list of fast-growing private U.S. companies. It's a nice milestone for us, because it is the seventh year in a row that Acquia has been included. We first appeared on the list in 2012, when Acquia was ranked the eighth fastest growing private company in the United States. It's easy to grow fast when you get started, but as you grow, it becomes increasingly challenging to sustain high growth rates. While there may be 4,700 companies ahead of us, we have kept a solid track record of growth ever since our debut seven years ago. I continue to be proud of the entire Acquia team, who are relentless in making these achievements possible. Kapow!

17 Aug 2018 8:09pm GMT

16 Aug 2018

feedPlanet Grep

Xavier Mertens: Detecting SSH Username Enumeration

A very quick post about a new thread that was started yesterday on the OSS-Security mailing list. It's about a vulnerability affecting almost ALL SSH server versions. Quoting the initial message:

It affects all operating systems, all OpenSSH versions (we went back as far as OpenSSH 2.3.0, released in November 2000)

It is possible to enumerate usernames on a server that offers SSH services publicly. Of course, it did not take long for a proof-of-concept to be posted. I just tested it and it works like a charm:

$ ./ssh-check-username.py victim.domain.com test
[*] Invalid username
$ ./ssh-check-username.py victim.domain.com xavier
[+] Valid username

This is very nice/evil (depending on the side you're working on). For Red Teams, it's nice to enumerate usernames and focus on the weakest ones ("guest", "support", "test", etc). There are plenty of username lists available online to brute-force the server.

From a Blue Team point of view, how do you detect whether a host is targeted by this attack? Search for this type of event:

Aug 16 21:42:10 victim sshd[10680]: fatal: ssh_packet_get_string: incomplete message [preauth]

Note that the offending IP address is not listed in the error message. It's time to keep an eye on your log files and block suspicious IP addresses that make too many SSH attempts (correlate with your firewall logs).
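
Since the error message itself lacks the source address, you have to pivot on the sshd PID. As a rough illustration (a sketch, assuming a syslog-style /var/log/auth.log; PID reuse may cause occasional false matches), you could list candidate offenders like this:

$ # Find the PIDs of the fatal preauth messages, then list the IP addresses
$ # appearing in other log lines written by the same sshd PIDs.
$ grep 'fatal: ssh_packet_get_string: incomplete message \[preauth\]' /var/log/auth.log | \
    sed 's/.*sshd\[\([0-9]\+\)\].*/\1/' | \
    while read pid; do \
      grep "sshd\[$pid\]" /var/log/auth.log | grep -oE '([0-9]{1,3}\.){3}[0-9]{1,3}'; \
    done | sort | uniq -c | sort -rn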

[The post Detecting SSH Username Enumeration has been first published on /dev/random]

16 Aug 2018 8:02pm GMT

Xavier Mertens: [SANS ISC] Truncating Payloads and Anonymizing PCAP files

I published the following diary on isc.sans.org: "Truncating Payloads and Anonymizing PCAP files":

Sometimes, you may need to provide PCAP files to third-party organizations like a vendor support team to investigate a problem with your network. I was looking for a small tool to anonymize network traffic but also to restrict data to packet headers (and drop the payload). Google pointed me to a tool called 'TCPurify'… [Read more]

[The post [SANS ISC] Truncating Payloads and Anonymizing PCAP files has been first published on /dev/random]

16 Aug 2018 11:43am GMT

14 Aug 2018

feedPlanet Grep

Jeroen De Dauw: Clean Architecture + Bounded Contexts

In this follow-up to Implementing the Clean Architecture I introduce you to a combination of The Clean Architecture and the strategic DDD pattern known as Bounded Contexts.

At Wikimedia Deutschland we use this combination of The Clean Architecture and Bounded Contexts for our fundraising applications. In this post I describe the structure we have and the architectural rules we follow in the abstract. For the story on how we got to this point and a more concrete description, see my post Bounded Contexts in the Wikimedia Fundraising Software. In that post and at the end of this one I link you to a real-world codebase that follows the abstract rules described in this post.

If you are not yet familiar with The Clean Architecture, please first read Implementing the Clean Architecture.

Clean Architecture + Bounded Contexts

Diagram by Jeroen De Dauw, Charlie Kritschmar, Jan Dittrich and Hanna Petruschat

Diagram depicting Clean Architecture + Bounded Contexts

In the top layer of the diagram we have applications. These can be web applications, they can be console applications, they can be monoliths, they can be microservices, etc. Each application has presentation code which in bigger applications tends to reside in a decoupled presentation layer using patterns such as presenters. All applications also somehow construct the dependency graph they need, perhaps using a Dependency Injection Container or set of factories. Often this involves reading configuration from somewhere. The applications contain ALL framework binding, hence they are the place where you will find the Controllers if you are using a typical web framework.

Since the applications are in the top layer, and dependencies can only go down, no code outside of the applications is allowed to depend on code in the applications. That means there is 0 binding to mechanisms such as frameworks and presentation code outside of the applications.

In the second layer we have the Bounded Contexts, ideally one Bounded Context per subdomain. At the core of each BC we have the Domain Model and Domain Services, containing the business logic of the subdomain. Dependencies can only point inwards, so the Domain Model, which is at the center, cannot depend on anything further out. Around the Domain Model are the Domain Services. These include interfaces for persistence services such as Repositories. The UseCases form the final ring. They can use both the Domain Model and the Domain Services. They also form a boundary around the two, meaning that no code outside of the Bounded Context is allowed to talk to the Domain Model or Domain Services.

The Bounded Contexts include their own Persistence Layer. The Persistence Layer can use a relational database, files on the file system, a remote web API, a combination of these, etc. It has implementations of domain services such as Repositories which are used by the UseCases. These implementations are the only thing that is allowed to talk to and know about the low-level aspects of the Persistence Layer. The only things that can use these service implementations are other Domain Services and the UseCases.

The UseCases, including their Request Models and Response Models, form the public interface of the Bounded Context. This means that there is 0 binding to the persistence mechanisms outside of the Bounded Context. It also means that the code responsible for the domain logic cannot be directly accessed elsewhere, such as in the presentation layer of an application.
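
To make this structure concrete, here is a minimal PHP sketch of such a Bounded Context. All names are hypothetical and simplified; they do not come from the real fundraising codebase.

<?php

// Domain Model (innermost ring): depends on nothing further out.
class Donation {
    private $id;
    private $cancelled = false;

    public function __construct( int $id ) {
        $this->id = $id;
    }

    public function cancel(): void {
        $this->cancelled = true;
    }
}

// Domain Service interface: the UseCase only sees this abstraction.
// The implementation lives in the BC's private Persistence Layer.
interface DonationRepository {
    public function getDonationById( int $id ): ?Donation;
    public function storeDonation( Donation $donation ): void;
}

// Request and Response Models: plain values crossing the BC boundary.
class CancelDonationRequest {
    private $donationId;

    public function __construct( int $donationId ) {
        $this->donationId = $donationId;
    }

    public function getDonationId(): int {
        return $this->donationId;
    }
}

class CancelDonationResponse {
    private $success;

    public function __construct( bool $success ) {
        $this->success = $success;
    }

    public function cancellationWasSuccessful(): bool {
        return $this->success;
    }
}

// The UseCase: together with its Request and Response Models, this is
// the only entry point into the Bounded Context.
class CancelDonationUseCase {
    private $repository;

    public function __construct( DonationRepository $repository ) {
        $this->repository = $repository;
    }

    public function cancelDonation( CancelDonationRequest $request ): CancelDonationResponse {
        $donation = $this->repository->getDonationById( $request->getDonationId() );

        if ( $donation === null ) {
            return new CancelDonationResponse( false );
        }

        $donation->cancel();
        $this->repository->storeDonation( $donation );

        return new CancelDonationResponse( true );
    }
}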

The applications and Bounded Contexts contain all the domain specific code. This code can make use of libraries and of course the runtime (ie PHP) itself.

As examples of Bounded Contexts following this approach, see the Donation Context and Membership Context. For an application following this architecture, see the FundraisingFrontend, which uses both the Donation Context and Membership Context. Both these contexts are also used by another application, the code of which is sadly not currently public. You can also read the stories of how we rewrote the FundraisingFrontend to use the Clean Architecture and how we refactored towards Bounded Contexts.

Further reading

If you are not yet familiar with Bounded Contexts or how to design them well, I recommend reading Domain-Driven Design Distilled.

14 Aug 2018 9:42pm GMT

Jeroen De Dauw: Bounded Contexts in the Wikimedia Fundraising Software

In this follow-up to rewriting the Wikimedia Deutschland fundraising I tell the story of how we reorganized our codebases along the lines of the DDD strategic pattern Bounded Contexts.

In 2016 the FUN team at Wikimedia Deutschland rewrote the Wikimedia Deutschland fundraising application. This new codebase uses The Clean Architecture and near the end of the rewrite got reorganized partially towards Bounded Contexts. After adding many new features to the application in 2017, we reorganized further towards Bounded Contexts, this time also including our other fundraising applications. In this post I explain the questions we had and which decisions we ended up making. I also link to the relevant code so you have real world examples of using both The Clean Architecture and Bounded Contexts.

This post is a good primer to my Clean Architecture + Bounded Contexts post, which describes the structure and architecture rules we now have in detail.

Our initial rewrite

Back in 2014, we had two codebases, each in their own git repository and not using code from the other. The first one being a user facing PHP web-app that allows people to make donations and apply for memberships, called FundraisingFrontend. The "frontend" here stands for "user facing". This is the application that we rewrote in 2016. The second codebase mainly contains the Fundraising Operations Center, a PHP web-app used by the fundraising staff to moderate and analyze donations and membership applications. This second codebase also contains some scripts to do exports of the data for communication with third-party systems.

Both the FundraisingFrontend and Fundraising Operations Center (FOC) used the same MySQL database. They each accessed this database in their own way, using PDO, Doctrine or SQL directly. In 2015 we created a Fundraising Store component based on Doctrine, to be used by both applications. Because we rewrote the FundraisingFrontend, all data access code there now uses this component. We have been gradually migrating the FOC codebase away from raw SQL and PDO in its logic towards also using this component, a process that is still ongoing.

Hence, as we started our rewrite of the FundraisingFrontend in 2016, we had 3 sets of code: FundraisingFrontend, FOC and Fundraising Store. After our rewrite the picture still looked largely the same. What we had done was turn FundraisingFrontend from a big ball of mud into a well designed application with proper architecture and separation of subdomains using Bounded Contexts. So while there were multiple components within the FundraisingFrontend, it was still all in one codebase / git repository. (Except for several small PHP libraries, but they are not relevant here.)

Need for reorganization

During 2017 we did a bunch of work on the FOC application and the associated export script, all the while doing gradual refactoring towards sanity. While refactoring we found ourselves implementing things such as a DonationRepository, things that already existed in the Bounded Contexts part of the FundraisingFrontend codebase. We realized that at least the subset of the FOC application that does moderation is part of the same subdomains that we created those Bounded Contexts for.

The inability to use these Bounded Contexts in the FOC app was forcing us to do double work by creating a second implementation of perfectly good code we already had. This also forced us to pay a lot of extra attention to avoid inconsistencies. For instance, we ran into issues with the FOC app persisting Donations in a state deemed invalid by the donations Bounded Context.

To address these issues we decided to share the Bounded Contexts that we created during the 2016 rewrite with all relevant applications.

Sharing our Bounded Contexts

We considered several approaches to sharing our Bounded Contexts between our applications. The first approach we considered was having a dedicated git repository per BC. To do this we would need to answer what exactly would go into such a repository and what would stay behind. We were concerned that for many changes we'd need to touch both the BC git repo and the application git repo, which requires more work and coordination than being able to make a change in a single repository. This led us to consider options such as putting all BCs together into a single repo, to minimize this cost, or simply putting all code (BCs and applications) into one huge repository.

We ended up going with one repo per BC, though we started with a single BC to see how well this approach would work before committing to it. With this approach we still faced the question of what exactly should go into the BC repo. Should that be just the UseCases and their dependencies, or also the presentation layer code that uses the UseCases? We decided to leave the presentation layer code in the application repositories, to avoid extra (heavy) dependencies in the BC repo and because the UseCases provide a nice interface to the BC. Following this approach it is easy to tell whether code belongs in the BC repo: if it binds to the domain model, it belongs in the BC repo.

These are the Bounded Context git repositories:

The BCs still use the FundraisingStore component in their data access services. Since this is not visible from outside of the BC, we can easily refactor towards removing the FundraisingStore and having the data access mechanism of a BC be truly private to it (as it is supposed to be for BCs).

The new BC repos allow us to continue gradual refactoring of the FOC and swap out legacy code with UseCases from the BCs. We can do so without duplicating what we already did and, thanks to the proper encapsulation, also without running into consistency issues.

Clean Architecture + Bounded Contexts

During our initial rewrite we created a diagram to represent the flavor of The Clean Architecture that we were using. For an updated version that depicts the structure of The Clean Architecture + Bounded Contexts and describes the rules of this architecture in detail, see my blog post The Clean Architecture + Bounded Contexts.

Diagram depicting Clean Architecture + Bounded Contexts

14 Aug 2018 9:34pm GMT

10 Aug 2018

feedPlanet Grep

Frank Goossens: How to LYTE-n up your WooCommerce video’s

So you have a WooCommerce shop that uses YouTube videos to showcase your products, but those same videos are slowing your site down, as YouTube embeds typically do? WP YouTube Lyte can go a long way towards fixing that issue, replacing the "fat embedded YouTube player" with a LYTE alternative.

LYTE will automatically detect and replace YouTube links (oEmbeds) and iFrames in your content (blog posts, pages, product descriptions) but is not active on content that does not hook into WordPress' the_content filter (e.g. category descriptions or short product descriptions). To have LYTE active on those as well, just hook the respective filters up to the lyte_parse function and you're good to go:

if (function_exists('lyte_parse')) {
	// Run LYTE's parser over WooCommerce short product descriptions
	// and over category descriptions as well.
	add_filter('woocommerce_short_description','lyte_parse');
	add_filter('category_description','lyte_parse');
}

And a LYTE video, in case you're wondering, looks like this (in this case beautiful harmonies by David Crosby & Venice, filmed way back in 1999 on Dutch TV):

YouTube Video
Watch this video on YouTube.

10 Aug 2018 1:25pm GMT

09 Aug 2018

feedPlanet Grep

FOSDEM organizers: Call for participation

We now invite proposals for main track presentations, developer rooms, stands and lightning talks. FOSDEM offers open source and free software developers a place to meet, share ideas and collaborate. Renowned for being highly developer-oriented, the event brings together some 8000+ geeks from all over the world. The nineteenth edition will take place on Saturday 2nd and Sunday 3rd February 2019 at the usual location: ULB Campus Solbosch in Brussels. We will record and stream all main tracks, devrooms and lightning talks live. The recordings will be published under the same licence as all FOSDEM content (CC-BY). If, exceptionally,…

09 Aug 2018 3:00pm GMT

Lionel Dricot: Ulule campaign: the adventures of Aristide, the cosmonaut rabbit

Aristide's problem is that he spends a little too much time on the Internet. And spending time on the Internet gives you ideas. Ideas like going to space, exploring the planets and taking one small step for a rabbit but one giant leap for rabbitkind.

It's settled: Aristide will not become a sales rep in the family carrot-export business. He will be a cosmonaut!

To get there, he needs your help with our crowdfunding campaign.

Born in the imagination of yours truly and brought to life by the graphic talent of Vinch, the adventures of Aristide were conceived as a children's book aimed at adults: dense text, rich vocabulary, absurd humour and double meanings.

But don't we infantilize children a little too much? They too are capable of getting passionate about a longer story, of appreciating the colourful naivety of a space conquest unlike any other. Aristide is therefore a children's book for adults for children. With a very topical question running through it: should you believe everything you read on the Internet? Sometimes yes, sometimes no, and sometimes it gives you ideas…

Although this project has required an enormous amount of work and effort, we chose the self-publishing route in order to produce a true children's book (but for adults for children, are you still following?) of the Internet age. Rather than optimizing costs, we aim above all to produce a book of high quality in every aspect (printing, recycled paper). The final choice of printer has not been made yet, by the way, so if you have any leads, let us know!

In short, I could talk about this project for hours but, today, we mostly need your support, both financially and in spreading the project on social networks (especially the offline ones: friends, family, parents at school and so on).

On behalf of Aristide, a big thank you in advance!

Did you enjoy this read? Support the author on Tipeee, Patreon, Paypal, Liberapay or in millibitcoins 34pp7LupBF7rkz797ovgBTbqcLevuze7LF. Even a symbolic donation makes all the difference! Let's meet up afterwards on Facebook, Twitter or Mastodon.

This text is published under the CC-By BE licence.

09 Aug 2018 2:04pm GMT

07 Aug 2018

feedPlanet Grep

Philip Van Hoof: Doing It Right examples on autotools, qmake, cmake and meson

About

I finished my earlier work on build environment examples, illustrating how to do versioning of shared object files right with autotools, qmake, cmake and meson. You can find it here.

The DIR examples show, for various build environments, how to create a good project structure that builds libraries that are versioned with libtool or have versioning equivalent to what libtool would deliver, that have a pkg-config file, and that have a so-called API version in the library's name.

What is right?

Information on this can be found in the autotools mythbuster docs, the libtool docs on versioning and FreeBSD's chapter on shared libraries. I tried to ensure that what is written here works with all of the build environments in the examples.

libpackage-4.3.so.2.1.0, what is what?

You'll notice that a library called 'package' will in your LIBDIR often be called something like libpackage-4.3.so.2.1.0

We call the 4.3 part the APIVERSION, and the 2.1.0 part the VERSION (the ABI version).

I will explain these examples using semantic versioning as APIVERSION and either libtool's current:revision:age or a semantic versioning alternative as field for VERSION (like in FreeBSD and for build environments where compatibility with libtool's -version-info feature ain't a requirement).

Note that with libtool's -version-info feature, the values that you fill in for current, revision and age will not necessarily be identical to what ends up as the suffix of the soname in LIBDIR. For libtool, the formula for the filename's suffix is "(current - age).age.revision". This means that for soname libpackage-APIVERSION.so.2.1.0, you would need current=3, revision=0 and age=1.

The VERSION part

In case you want compatibility with or use libtool's -version-info feature, the document libtool/version.html on autotools.io states:

The rules of thumb, when dealing with these values are:

  • Increase the current value whenever an interface has been added, removed or changed.
  • Always increase the revision value.
  • Increase the age value only if the changes made to the ABI are backward compatible.

The updating-version-info part of the libtool docs states:

  1. Start with version information of '0:0:0' for each libtool library.
  2. Update the version information only immediately before a public release of your software. More frequent updates are unnecessary, and only guarantee that the current interface number gets larger faster.
  3. If the library source code has changed at all since the last update, then increment revision ('c:r:a' becomes 'c:r+1:a').
  4. If any interfaces have been added, removed, or changed since the last update, increment current, and set revision to 0.
  5. If any interfaces have been added since the last public release, then increment age.
  6. If any interfaces have been removed or changed since the last public release, then set age to 0.
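
As a worked example of these rules (with a made-up library and release history), the version info would evolve like this, with the filename suffix following the "(current - age).age.revision" formula:

# Hypothetical release history for a libtool library 'libfoo':
# first release:                 0:0:0  ->  libfoo.so.0.0.0
# bug fix only:                  0:1:0  ->  libfoo.so.0.0.1  (rule 3)
# new interface added:           1:0:1  ->  libfoo.so.0.1.0  (rules 3, 4, 5)
# interface removed or changed:  2:0:0  ->  libfoo.so.2.0.0  (rules 3, 4, 6)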

When you don't care about compatibility with libtool's -version-info feature, then you can take the following simplified rules for VERSION:

  • SOVERSION = Major version
  • Major version: increase it if you break ABI compatibility
  • Minor version: increase it if you add ABI compatible features
  • Patch version: increase it for bug fix releases.

Examples of where these simplified rules are or can be applicable are build environments like cmake, meson and qmake. When you use autotools you will be using libtool, and then they ain't applicable.

The APIVERSION part

For the API version I will use the rules from semver.org. You can also use the semver rules for your package's version:

Given a version number MAJOR.MINOR.PATCH, increment the:

  1. MAJOR version when you make incompatible API changes,
  2. MINOR version when you add functionality in a backwards-compatible manner, and
  3. PATCH version when you make backwards-compatible bug fixes.

When you have an API, that API can change over time. You typically want to version those API changes so that the users of your library can adapt to newer versions of the API while, at the same time, other users keep using older versions of your API. For this we can follow section 4.3, called "multiple libraries versions", of the autotools mythbuster documentation. It states:

In this situation, the best option is to append part of the library's version information to the library's name, which is exemplified by Glib's libglib-2.0.so.0 soname. To do so, the declaration in the Makefile.am has to be like this:

lib_LTLIBRARIES = libtest-1.0.la

libtest_1_0_la_LDFLAGS = -version-info 0:0:0

The pkg-config file

Many people use many build environments (autotools, qmake, cmake, meson, you name it). Nowadays almost all of those build environments support pkg-config out of the box. Both for generating the file as for consuming the file for getting information about dependencies.

I consider it a necessity to ship a useful and correct pkg-config .pc file. The filename should be /usr/lib/pkgconfig/package-APIVERSION.pc for soname libpackage-APIVERSION.so.VERSION. In our example that means /usr/lib/pkgconfig/package-4.3.pc. We'd use the command pkg-config package-4.3 --cflags --libs, for example.

An example is GLib's pkg-config file, located at /usr/lib/pkgconfig/glib-2.0.pc
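
For illustration, a minimal package-4.3.pc could look roughly like this (a sketch; the prefix and description are made up):

prefix=/usr
libdir=${prefix}/lib
includedir=${prefix}/include

Name: package
Description: A hypothetical example library
Version: 4.3.0
Libs: -L${libdir} -lpackage-4.3
Cflags: -I${includedir}/package-4.3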

The include path

I consider it a necessity to ship API headers in a location that differs per API version (like for example GLib's, at /usr/include/glib-2.0). This means that your API version number must be part of the include path.

For example, using the earlier mentioned API version 4.3: /usr/include/package-4.3 for /usr/lib/libpackage-4.3.so(.2.1.0), having /usr/lib/pkgconfig/package-4.3.pc

What will the linker typically link with?

The linker will, for -lpackage-4.3, typically link with /usr/lib/libpackage-4.3.so.2, or in general with libpackage-APIVERSION.so.(current - age). Note that the part that is calculated as (current - age) in this example is often, for example in cmake and meson, referred to as the SOVERSION. With SOVERSION, the soname template in LIBDIR is libpackage-APIVERSION.so.SOVERSION.

What is wrong?

Not doing any versioning

Without versioning you can't make any API or ABI changes that won't break all your users' code in a way that could be manageable for them. If you do decide not to do any versioning, then at least don't put anything behind the .so part of your so's filename. That way, at least you won't break things in spectacular ways.

Coming up with your own versioning scheme

Thinking you know better than the rest of the world will, in spectacular ways, make everything you do break with what the entire rest of the world does. You shouldn't congratulate yourself on that. The only thing that can be said about it is that it probably makes little sense, and that others will probably start ignoring your work. Your mileage may vary. Keep in mind that without a correct SOVERSION, certain things will simply not work correctly.

In case of libtool: using your package's (semver) release numbering for current, revision, age

This is similarly wrong to 'Coming up with your own versioning scheme'.

The Libtool documentation on updating version info is clear about this:

Never try to set the interface numbers so that they correspond to the release number of your package. This is an abuse that only fosters misunderstanding of the purpose of library versions.

This basically means that once you are using libtool, also use libtool's versioning rules.

Refusing or forgetting to increase the current and/or SOVERSION on breaking ABI changes

The current part of the VERSION (current, revision and age) minus age, in other words the SOVERSION, is the most significant field. The current and age are usually involved in forming the so-called SOVERSION, which in turn is used by the linker to know which ABI version to link with. That makes it … damn important.

Some people think 'all this is just too complicated for me', 'I will just refuse to do anything and always release using the same version numbers'. That goes spectacularly wrong whenever you make ABI-incompatible changes. It's similarly wrong to 'Coming up with your own versioning scheme'.

That way, after your shared library gets updated, all programs that link with it can easily crash, corrupt data, or work only intermittently.

By updating the current and age, or the SOVERSION, you basically signal to the people who manage packages, and to their tooling, that programs linking with your shared library need to be rebuilt. You actually want that the moment you have made breaking ABI changes in a newer version of it.

When you don't want to care about libtool's -version-info feature, there is also a simpler set of rules to follow. Those rules are, for VERSION:

  • SOVERSION = Major version (with these simplified set of rules, no subtracting of current with age is needed)
  • Major version: increase it if you break ABI compatibility
  • Minor version: increase it if you add ABI compatible features
  • Patch version: increase it for bug fix releases.

What isn't wrong?

Not using libtool (but nonetheless doing ABI versioning right)

GNU libtool was made to make certain things easier. Nowadays many popular build environments also make things easier. Meanwhile, GNU libtool has been around for a long time, and its versioning rules, commonly known as the current:revision:age field passed as parameter to -version-info, got widely adopted.

What GNU libtool did was, however, not really a standard. It is one interpretation of how to do it. And a rather complicated one, at that.

Please let it be crystal clear that not using libtool does not mean that you can do ABI versioning wrong. Because very often people seem to think that they can, and think they'll still get out safely while doing ABI versioning completely wrong. This is not the case.

Not having an APIVERSION at all

It isn't wrong not to have an APIVERSION in the soname. It does, however, mean that you promise never to break the API. Because the moment you break the API, you disallow your users to stay on the old API for a little longer. They might have programs that use the old API as well as programs that use the new one. Now what?

When you have an APIVERSION then you can allow the introduction of a new version of the API while simultaneously the old API remains available on a user's system.

Using a different naming-scheme for APIVERSION

I used the MAJOR.MINOR version numbers from semver to form the APIVERSION. I did this because only the MAJOR and the MINOR are technically involved in API changes (unless you are doing semantic versioning wrong, in which case see 'Coming up with your own versioning scheme').

Some projects only use MAJOR. An example is Qt, which puts the MAJOR number behind the Qt part, as in libQt5Core.so.VERSION (so that's "Qt" + MAJOR + Module). The GLib world, however, uses "g" + Module + "-" + MAJOR + ".0", as they have releases like 2.2, 2.3, 2.4 that are all called libglib-2.0.so.VERSION. I guess they figured that maybe someday in their 2.x series, they could use that MINOR field?

DBus seems to be using a similar thing to GLib, but then without the MINOR suffix: libdbus-1.so.VERSION. For their GLib integration they also use it as libdbus-glib-1.so.VERSION.

Who is right, who is wrong? It doesn't matter too much for your APIVERSION naming scheme, as long as there is a way to differentiate the API in a) the include path, b) the pkg-config filename and c) the library that will be linked with (the -l parameter during linking/compiling). Maybe someday a standard will be defined? Let's hope so.

Differences in interpretation per platform

FreeBSD

The Shared Libraries section of FreeBSD's Chapter 5, Source Tree Guidelines and Policies, states:

The three principles of shared library building are:

  1. Start from 1.0
  2. If there is a change that is backwards compatible, bump minor number (note that ELF systems ignore the minor number)
  3. If there is an incompatible change, bump major number

For instance, added functions and bugfixes result in the minor version number being bumped, while deleted functions, changed function call syntax, etc. will force the major version number to change.

I think that when using libtool on FreeBSD (when you use autotools), the platform will provide a variant of libtool's scripts that converts the earlier mentioned current, revision and age rules to FreeBSD's. The same goes for the VERSION variable in cmake and qmake. Meaning that with those three build environments, you can just use the rules for GNU libtool's -version-info.

I could be wrong about this, but I did find mailing list e-mails from around 2011 stating that this SNAFU is dealt with. Besides, the *BSD porters otherwise know what to do, and you could of course always ask them about it.

Note that FreeBSD's rules are, or seem to be, compatible with the rules for VERSION when you don't want to care about libtool's -version-info compatibility. However, when you are porting a libtoolized project, you of course don't want newer releases to break against releases that have already happened.

Modern Linux distributions

Nowadays you sometimes see things like /usr/lib/$ARCH/libpackage-APIVERSION.so linking to /lib/$ARCH/libpackage-APIVERSION.so.VERSION. I have no idea how this mechanism works. I suppose this is being done by packagers of various Linux distributions? I also don't know if there is a standard for this.

I will update the examples and this document the moment I know more and/or if upstream developers need to worry about it. I think that using GNUInstallDirs in cmake, for example, makes everything go right. I have not found much for this in qmake, meson seems to be doing this by default and in autotools you always use platform variables for such paths.

As usual, I hope standards will be made, and that the build environment and packaging communities come to their senses and stop leaving this in the hands of developers. I am especially thinking of qmake, which seems to say very little about having to use standardized installation paths (it does not even have a proper way to define a prefix).

Questions that I can imagine already exist

Why is there a difference between APIVERSION and VERSION?

The API version is the version of your programmable interfaces. This means the version of your header files (if your programming language has such header files), the version of your pkg-config file, and the version of your documentation. The API is what software developers need to utilize your library.

The ABI version can definitely be different, and it is what compiled, installable programs need to utilize your library.

An API breaks when a program that consumes libpackage-4.3.so.2 no longer compiles without changes to that program. The API is broken the moment any possible use of the package's API no longer compiles. Yes, any use. It means that a libpackage-5.0.so.0 should be started.

An ABI breaks when, without recompiling the program, replacing a libpackage-4.3.so.2.1.0 with a libpackage-4.3.so.2.2.0 or a libpackage-4.3.so.2.1.1 (or later) as libpackage-4.3.so.2 no longer works at runtime. For example because it crashes, or because the results are wrong (in any way). It implies that libpackage-4.3.so.2 shouldn't be overwritten; instead a libpackage-4.3.so.3 should be started.

For example, when you change a function parameter in C from an integer to a floating point type (or the other way around), then that's an ABI change but not necessarily an API change.
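
A contrived C sketch of that situation (a hypothetical function, not taken from the examples):

/* libpackage-4.3.so.2.1.0 shipped this: */
double package_scale( double factor );

/* The next release changes the parameter type. Most callers still
 * compile unchanged because C converts the argument implicitly, so the
 * API largely survives. Already-compiled programs, however, now pass
 * the argument in the wrong way at the machine level: the ABI is
 * broken, and a libpackage-4.3.so.3 should be started. */
double package_scale( long factor );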

What is this SOVERSION about?

In most projects that got ported from an environment that uses GNU libtool (for example autotools) to, for example, cmake or meson (and in the rare cases that they did anything at all in a qmake based project), I saw people convert the current, revision and age parameters that they passed to the -version-info option of libtool into "(current - age).age.revision" as VERSION, and (current - age) as SOVERSION.

I wanted to use the exact same versioning rules for all these examples, including autotools and GNU libtool. When you don't have to (or want to) care about libtool's set of (for some people, needlessly complicated) -version-info rules, then it should be fine to use just SOVERSION and VERSION with these rules:

  • SOVERSION = Major version
  • Major version: increase it if you break ABI compatibility
  • Minor version: increase it if you add ABI compatible features
  • Patch version: increase it for bug fix releases.

I did, however, also sometimes see variations that are incomprehensible, with little explanation and magic foo invented on the spot. Those variations are probably wrong.

In the examples I made it so that in the root build file of the project you can change the numbers and the calculation of the numbers. However, do follow the rules for those correctly, as this versioning is about ABI compatibility. Doing this wrong can make things blow up in spectacular ways.

The examples

qmake in the qmake-example

Note that the VERSION variable must be filled in as "(current - age).age.revision" for qmake (to get 2.1.0 at the end, you need VERSION=2.1.0 when current=3, revision=0 and age=1)
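
In a .pro file, that boils down to something like this (a hypothetical fragment; the actual example project may differ):

# qmake builds a versioned shared library from these settings:
TEMPLATE = lib
TARGET = qmake-example-4.3   # the APIVERSION is part of the library name
VERSION = 2.1.0              # (current - age).age.revision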

To try this example out, go to the qmake-example directory and type

$ cd qmake-example
$ mkdir _test
$ qmake PREFIX=$PWD/_test
$ make
$ make install

This should give you this:

$ find _test/
_test/
├── include
│   └── qmake-example-4.3
│       └── qmake-example.h
└── lib
    ├── libqmake-example-4.3.so -> libqmake-example-4.3.so.2.1.0
    ├── libqmake-example-4.3.so.2 -> libqmake-example-4.3.so.2.1.0
    ├── libqmake-example-4.3.so.2.1 -> libqmake-example-4.3.so.2.1.0
    ├── libqmake-example-4.3.so.2.1.0
    ├── libqmake-example-4.la
    └── pkgconfig
        └── qmake-example-4.3.pc

When you now use pkg-config, you get a nice CFLAGS and LIBS line back (I'm replacing the current path with $PWD in the output each time):

$ export PKG_CONFIG_PATH=$PWD/_test/lib/pkgconfig
$ pkg-config qmake-example-4.3 --cflags
-I$PWD/_test/include/qmake-example-4.3
$ pkg-config qmake-example-4.3 --libs
-L$PWD/_test/lib -lqmake-example-4.3

And it means that you can do things like this now (and people who know about pkg-config will now be happy to know that they can use your library in their own favorite build environment).

$ export LD_LIBRARY_PATH=$PWD/_test/lib
$ echo -en "#include <qmake-example.h>\nmain() {} " > test.cpp
$ g++ -fPIC test.cpp -o test.o `pkg-config qmake-example-4.3 --libs --cflags`

You can see that it got linked to libqmake-example-4.3.so.2, where that 2 at the end is (current - age).

$ ldd test.o 
    linux-gate.so.1 (0xb77b0000)
    libqmake-example-4.3.so.2 => $PWD/_test/lib/libqmake-example-4.3.so.2 (0xb77a6000)
    libstdc++.so.6 => /usr/lib/i386-linux-gnu/libstdc++.so.6 (0xb75f5000)
    libm.so.6 => /lib/i386-linux-gnu/libm.so.6 (0xb759e000)
    libgcc_s.so.1 => /lib/i386-linux-gnu/libgcc_s.so.1 (0xb7580000)
    libc.so.6 => /lib/i386-linux-gnu/libc.so.6 (0xb73c9000)
    /lib/ld-linux.so.2 (0xb77b2000)

cmake in the cmake-example

Note that the VERSION property on your library target must be filled in with "(current - age).age.revision" for cmake (to get 2.1.0 at the end, you need VERSION=2.1.0 when current=3, revision=0 and age=1). Note that in cmake you must also fill in the SOVERSION property as (current - age), so SOVERSION=2 when current=3 and age=1.
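
In a CMakeLists.txt, that boils down to something like this (a hypothetical fragment; the actual example project may differ):

# cmake derives the versioned filenames and symlinks from these properties:
add_library(cmake-example SHARED cmake-example.cpp)
set_target_properties(cmake-example PROPERTIES
    OUTPUT_NAME cmake-example-4.3   # the APIVERSION is part of the library name
    VERSION 2.1.0                   # (current - age).age.revision
    SOVERSION 2                     # current - age
)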

To try this example out, go to the cmake-example directory and do

$ cd cmake-example
$ mkdir _test
$ cmake -DCMAKE_INSTALL_PREFIX:PATH=$PWD/_test .
-- Configuring done
-- Generating done
-- Build files have been written to: .
$ make
[ 50%] Building CXX object src/libs/cmake-example/CMakeFiles/cmake-example.dir/cmake-example.cpp.o
[100%] Linking CXX shared library libcmake-example-4.3.so
[100%] Built target cmake-example
$ make install
[100%] Built target cmake-example
Install the project...
-- Install configuration: ""
-- Installing: $PWD/_test/lib/libcmake-example-4.3.so.2.1.0
-- Up-to-date: $PWD/_test/lib/libcmake-example-4.3.so.2
-- Up-to-date: $PWD/_test/lib/libcmake-example-4.3.so
-- Up-to-date: $PWD/_test/include/cmake-example-4.3/cmake-example.h
-- Up-to-date: $PWD/_test/lib/pkgconfig/cmake-example-4.3.pc

This should give you this:

$ tree _test/
_test/
├── include
│   └── cmake-example-4.3
│       └── cmake-example.h
└── lib
    ├── libcmake-example-4.3.so -> libcmake-example-4.3.so.2
    ├── libcmake-example-4.3.so.2 -> libcmake-example-4.3.so.2.1.0
    ├── libcmake-example-4.3.so.2.1.0
    └── pkgconfig
        └── cmake-example-4.3.pc

When you now use pkg-config, you get a nice CFLAGS and LIBS line back (I'm replacing the current path with $PWD in the output each time):

$ pkg-config cmake-example-4.3 --cflags
-I$PWD/_test/include/cmake-example-4.3
$ pkg-config cmake-example-4.3 --libs
-L$PWD/_test/lib -lcmake-example-4.3

And it means that you can do things like this now (and people who know about pkg-config will now be happy to know that they can use your library in their own favorite build environment):

$ echo -en "#include <cmake-example.h>\nmain() {} " > test.cpp
$ g++ -fPIC test.cpp -o test.o `pkg-config cmake-example-4.3 --libs --cflags`

You can see that it got linked to libcmake-example-4.3.so.2, where that 2 at the end is the SOVERSION. This is (current - age).

$ ldd test.o
    linux-gate.so.1 (0xb7729000)
    libcmake-example-4.3.so.2 => $PWD/_test/lib/libcmake-example-4.3.so.2 (0xb771f000)
    libstdc++.so.6 => /usr/lib/i386-linux-gnu/libstdc++.so.6 (0xb756e000)
    libm.so.6 => /lib/i386-linux-gnu/libm.so.6 (0xb7517000)
    libgcc_s.so.1 => /lib/i386-linux-gnu/libgcc_s.so.1 (0xb74f9000)
    libc.so.6 => /lib/i386-linux-gnu/libc.so.6 (0xb7342000)
    /lib/ld-linux.so.2 (0xb772b000)

autotools in the autotools-example

Note that you pass -version-info current:revision:age directly with autotools. Libtool will translate that to (current - age).age.revision to form the so's filename (to get 2.1.0 at the end, you need current=3, revision=0 and age=1).
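
In the Makefile.am, that boils down to something like this (a hypothetical fragment; the actual example project may differ):

# libtool turns 3:0:1 into the (3 - 1).1.0 = 2.1.0 filename suffix:
lib_LTLIBRARIES = libautotools-example-4.3.la
libautotools_example_4_3_la_LDFLAGS = -version-info 3:0:1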

To try this example out, go to the autotools-example directory and do

$ cd autotools-example
$ mkdir _test
$ libtoolize
$ aclocal
$ autoheader
$ autoconf
$ automake --add-missing
$ ./configure --prefix=$PWD/_test
$ make
$ make install

This should give you this:

$ tree _test/
_test/
├── include
│   └── autotools-example-4.3
│       └── autotools-example.h
└── lib
    ├── libautotools-example-4.3.a
    ├── libautotools-example-4.3.la
    ├── libautotools-example-4.3.so -> libautotools-example-4.3.so.2.1.0
    ├── libautotools-example-4.3.so.2 -> libautotools-example-4.3.so.2.1.0
    ├── libautotools-example-4.3.so.2.1.0
    └── pkgconfig
        └── autotools-example-4.3.pc

When you now use pkg-config, you get a nice CFLAGS and LIBS line back (I'm replacing the current path with $PWD in the output each time):

$ export PKG_CONFIG_PATH=$PWD/_test/lib/pkgconfig
$ pkg-config autotools-example-4.3 --cflags
-I$PWD/_test/include/autotools-example-4.3
$ pkg-config autotools-example-4.3 --libs
-L$PWD/_test/lib -lautotools-example-4.3

And it means that you can do things like this now (and people who know about pkg-config will now be happy to know that they can use your library in their own favorite build environment):

$ echo -en "#include <autotools-example.h>\nmain() {} " > test.cpp
$ export LD_LIBRARY_PATH=$PWD/_test/lib
$ g++ -fPIC test.cpp -o test.o `pkg-config autotools-example-4.3 --libs --cflags`

You can see that it got linked to libautotools-example-4.3.so.2, where that 2 at the end is (current - age).

$ ldd test.o 
    linux-gate.so.1 (0xb778d000)
    libautotools-example-4.3.so.2 => $PWD/_test/lib/libautotools-example-4.3.so.2 (0xb7783000)
    libstdc++.so.6 => /usr/lib/i386-linux-gnu/libstdc++.so.6 (0xb75d2000)
    libm.so.6 => /lib/i386-linux-gnu/libm.so.6 (0xb757b000)
    libgcc_s.so.1 => /lib/i386-linux-gnu/libgcc_s.so.1 (0xb755d000)
    libc.so.6 => /lib/i386-linux-gnu/libc.so.6 (0xb73a6000)
    /lib/ld-linux.so.2 (0xb778f000)

meson in the meson-example

Note that the version property on your library target must be filled in with "(current - age).age.revision" for meson (to get 2.1.0 at the end, you need version=2.1.0 when current=3, revision=0 and age=1). Note that in meson you must also fill in the soversion property as (current - age), so soversion=2 when current=3 and age=1.
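
In a meson.build file, that boils down to something like this (a hypothetical fragment; the actual example project may differ):

# meson derives the versioned filenames and symlinks from these kwargs:
shared_library('meson-example-4.3', 'meson-example.cpp',
  version: '2.1.0',    # (current - age).age.revision
  soversion: '2',      # current - age
  install: true
)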

To try this example out, go to the meson-example directory and do

$ cd meson-example
$ mkdir -p _build/_test
$ cd _build
$ meson .. --prefix=$PWD/_test
$ ninja
$ ninja install

This should give you this:

$ tree _test/
_test/
├── include
│   └── meson-example-4.3
│       └── meson-example.h
└── lib
    └── i386-linux-gnu
        ├── libmeson-example-4.3.so -> libmeson-example-4.3.so.2.1.0
        ├── libmeson-example-4.3.so.2 -> libmeson-example-4.3.so.2.1.0
        ├── libmeson-example-4.3.so.2.1.0
        └── pkgconfig
            └── meson-example-4.3.pc

When you now use pkg-config, you get a nice CFLAGS and LIBS line back (I'm replacing the current path with $PWD in the output each time):

$ export PKG_CONFIG_PATH=$PWD/_test/lib/i386-linux-gnu/pkgconfig
$ pkg-config meson-example-4.3 --cflags
-I$PWD/_test/include/meson-example-4.3
$ pkg-config meson-example-4.3 --libs
-L$PWD/_test/lib -lmeson-example-4.3

And it means that you can do things like this now (and people who know about pkg-config will now be happy to know that they can use your library in their own favorite build environment):

$ echo -en "#include <meson-example.h>\nmain() {} " > test.cpp
$ export LD_LIBRARY_PATH=$PWD/_test/lib/i386-linux-gnu
$ g++ -fPIC test.cpp -o test.o `pkg-config meson-example-4.3 --libs --cflags`

You can see that it got linked to libmeson-example-4.3.so.2, where that 2 at the end is the soversion. This is (current - age).

$ ldd test.o 
    linux-gate.so.1 (0xb772e000)
    libmeson-example-4.3.so.2 => $PWD/_test/lib/i386-linux-gnu/libmeson-example-4.3.so.2 (0xb7724000)
    libstdc++.so.6 => /usr/lib/i386-linux-gnu/libstdc++.so.6 (0xb7573000)
    libm.so.6 => /lib/i386-linux-gnu/libm.so.6 (0xb751c000)
    libgcc_s.so.1 => /lib/i386-linux-gnu/libgcc_s.so.1 (0xb74fe000)
    libc.so.6 => /lib/i386-linux-gnu/libc.so.6 (0xb7347000)
    /lib/ld-linux.so.2 (0xb7730000)

07 Aug 2018 2:30pm GMT

03 Aug 2018

feedPlanet Grep

Frank Goossens: Music from Our Tube; Prisencolinensinainciusol plus ultra

So it's probably the crazy nineties and you have Adriano Celentano in a live show joined by the young Manu Chao (then the "leader" of Mano Negra) and they perform a weird mix of "Prisencolinensinainciusol" and "King Kong 5" and they throw in an interview during the song and there's lots of people dancing, a pretty awkward upskirt shot and general craziness.

This must (have) be(en) Italian TV, mustn't it?

YouTube Video
Watch this video on YouTube.

03 Aug 2018 11:19am GMT

01 Aug 2018

feedPlanet Grep

Dries Buytaert: Acquia a leader in 2018 Gartner Magic Quadrant for Web Content Management

Today, Acquia was named a leader in the 2018 Gartner Magic Quadrant for Web Content Management. Acquia has now been recognized as a leader for five years in a row.

Acquia recognized as a leader, next to Adobe and Sitecore, in the 2018 Gartner Magic Quadrant for Web Content Management.

Analyst reports like the Gartner Magic Quadrant are important because they introduce organizations to Acquia and Drupal. Last year, I explained it in the following way: "If you want to find a good coffee place, you use Yelp. If you want to find a nice hotel in New York, you use TripAdvisor. Similarly, if a CIO or CMO wants to spend $250,000 or more on enterprise software, they often consult an analyst firm like Gartner."

Our tenure as a top vendor is not only a strong endorsement of Acquia's strategy and vision, but also underscores our consistency. Drupal and Acquia are here to stay!

What I found interesting about this year's report is the increased emphasis on flexibility and ease of integration. I've been saying this for a few years now, but it's all about innovation through integration, rather than just innovation in the core platform itself.

An image of the Marketing Technology Landscape 2018. For reference, here are the 2011, 2012, 2014, 2015, 2016 and 2017 versions of the landscape. It shows how fast the marketing technology industry is growing.

Just look at the 2018 Martech 5000: the supergraphic now includes 7,000 marketing technology solutions, a 27% increase from a year ago. This accelerated innovation isn't exclusive to marketing technology; it's happening across every part of the enterprise technology stack. From headless commerce integrations to the growing adoption of JavaScript frameworks and emerging cross-channel experiences, organizations have the opportunity to re-imagine customer experiences like never before.

It's not surprising that customers are looking for an open platform that allows for open innovation and unlimited integrations. The best way to serve this need is through open APIs, decoupled architectures and an Open Source innovation model. This is why Drupal can offer its users thousands of integrations, more than all of the other Gartner leaders combined.

Acquia Experience Platform

When you marry Drupal's community-driven innovation with Acquia's cloud platform and suite of marketing tools, you get an innovative solution across every layer of your technology stack. It allows our customers to bring powerful new experiences to market, across the web, mobile, native applications, chatbots and more. Most importantly, it gives customers the freedom to build on their own terms.

Thank you to everyone who contributed to this result!

01 Aug 2018 6:49pm GMT

Jeroen De Dauw: Clean Architecture: UseCase tests

When creating an application that follows The Clean Architecture you end up with a number of UseCases that hold your application logic. In this blog post I outline a testing pattern for effectively testing these UseCases and avoiding common pitfalls.

Testing UseCases

A UseCase contains the application logic for a single "action" that your system supports. For instance "cancel a membership". This application logic interacts with the domain and various services. These services and the domain should have their own unit and integration tests. Each UseCase gets used in one or more applications, where it gets invoked from inside the presentation layer. Typically you want to have a few integration or edge-to-edge tests that cover this invocation. In this post I look at how to test the application logic of the UseCase itself.

UseCases tend to have "many" collaborators. I can't recall any that had fewer than 3. For the typical UseCase the number is likely closer to 6 or 7, with more collaborators being possible even when the design is good. That means constructing a UseCase takes some work: you need to provide working instances of all the collaborators.

Integration Testing

One way to deal with this is to write integration tests for your UseCases. Simply get an instance of the UseCase from your Top Level Factory or Dependency Injection Container.

This approach often requires you to mutate the factory or DIC. Want to test that an exception from the persistence service gets handled properly? You'll need to use some test double instead of the real service, or perhaps mutate the real service in some way. Want to verify a mail got sent? You definitely want to use a Spy here instead of the real service. Mutability comes with a cost, so it is better avoided.

A second issue with using real collaborators is that your tests get slow due to real persistence usage. Even using an in-memory SQLite database (that needs initialization) instead of a simple in-memory fake repository easily makes for a speed difference of two orders of magnitude.

Unit Testing

While there might be some cases where integration tests make sense, normally it is better to write unit tests for UseCases. This means having test doubles for all collaborators. Which leads us to the question of how to best inject these test doubles into our UseCases.

As an example I will use the CancelMembershipApplicationUseCase of the Wikimedia Deutschland fundraising application.

function __construct(ApplicationAuthorizer $authorizer, ApplicationRepository $repository, TemplateMailerInterface $mailer) {
    $this->authorizer = $authorizer;
    $this->repository = $repository;
    $this->mailer = $mailer;
}

This UseCase uses 3 collaborators: an authorization service, a repository (persistence service) and a mailing service. First it checks whether the operation is allowed with the authorizer, then it interacts with the persistence service and finally, if all went well, it uses the mailing service to send a confirmation email. Our unit test should test all this behavior and needs to inject test doubles for the 3 collaborators.
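
For illustration, a spy test double such as the MailerSpy used below could look roughly like this. This is a sketch: the interface and method signature shown here are assumptions, and the real TemplateMailerInterface in the fundraising codebase may well differ.

// Assumed minimal interface, for the sake of the sketch:
interface TemplateMailerInterface {
    public function sendMail( string $recipient, array $templateArguments = [] ): void;
}

// A spy records what would have been sent instead of sending anything,
// so tests can inspect the interactions afterwards.
class MailerSpy implements TemplateMailerInterface {
    private $sentMails = [];

    public function sendMail( string $recipient, array $templateArguments = [] ): void {
        $this->sentMails[] = [ 'recipient' => $recipient, 'arguments' => $templateArguments ];
    }

    public function getSentMails(): array {
        return $this->sentMails;
    }
}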

The most obvious approach is to construct the UseCase in each test method.

public function testGivenIdOfUnknownDonation_cancellationIsNotSuccessful(): void {
    $useCase = new CancelMembershipApplicationUseCase(
        new SucceedingAuthorizer(),
        $this->newRepositoryWithCancellableDonation(),
        new MailerSpy()
    );

    $response = $useCase->cancelApplication(
        new CancellationRequest( self::ID_OF_NON_EXISTING_APPLICATION )
    );

    $this->assertFalse( $response->cancellationWasSuccessful() );
}

public function testGivenIdOfCancellableApplication_cancellationIsSuccessful(): void {
    $useCase = new CancelMembershipApplicationUseCase(
        new SucceedingAuthorizer(),
        $this->newRepositoryWithCancellableDonation(),
        new MailerSpy()
    );
    
    $response = $useCase->cancelApplication(
        new CancellationRequest( $this->cancelableApplication->getId() )
    );

    $this->assertTrue( $response->cancellationWasSuccessful() );
}

Note how both these test methods use the same test doubles. This is not always the case: for instance, when testing authorization failure, the test double for the authorizer service will differ, and when testing persistence failure, the test double for the persistence service will differ.

public function testWhenAuthorizationFails_cancellationFails(): void {
    $useCase = new CancelMembershipApplicationUseCase(
        new FailingAuthorizer(),
        $this->newRepositoryWithCancellableDonation(),
        new MailerSpy()
    );

    $response = $useCase->cancelApplication(
        new CancellationRequest( $this->cancelableApplication->getId() )
    );

    $this->assertFalse( $response->cancellationWasSuccessful() );
}

Normally a test function will only change a single test double.

UseCases tend to have, on average, two or more behaviors (and thus tests) per collaborator. That means for most UseCases you will be repeating the construction of the UseCase in a dozen or more test functions. That is a problem. Ask yourself why.

If the answer you came up with was DRY, then think again and read my blog post on DRY 😉 The primary issue is that you couple each of those test methods to the list of collaborators. So when the constructor signature of your UseCase changes, you will need to do Shotgun Surgery and update all test functions, even those that have nothing to do with the changed collaborator. A second issue is that you pollute the test methods with irrelevant details, making them harder to read.

Default Test Doubles Pattern

The pattern is demonstrated using PHP + PHPUnit and will need some adaptation when using a testing framework that does not work with a class based model like that of PHPUnit.

The coupling to the constructor signature, and the resulting Shotgun Surgery, can be avoided by having a default instance of the UseCase filled with the right test doubles. This can be done with a newUseCase method that constructs the UseCase and returns it. A way to change specific collaborators is needed (i.e. a FailingAuthorizer to test the handling of failing authorization).

private function newUseCase() {
    return new CancelMembershipApplicationUseCase(
        new SucceedingAuthorizer(),
        new InMemoryApplicationRepository(),
        new MailerSpy()
    );
}

Making the UseCase itself mutable is a big no-no. Adding optional parameters to the newUseCase method works in languages that have named parameters. Since PHP does not have named parameters, another solution is needed.

An alternative approach to getting modified collaborators into the newUseCase method is using fields. This is less nice than named parameters, as it introduces mutable state on the level of the test class. But since in PHP this approach gives us named fields and is understood by tools, it is better than either using a positional list of optional arguments or emulating named arguments with an associative array (key-value map).

The fields can be set in the setUp method, which gets called by PHPUnit before the test methods. For each test method PHPUnit instantiates the test class, then calls setUp, and then calls the test method.

public function setUp() {
    $this->authorizer = new SucceedingAuthorizer();
    $this->repository = new InMemoryApplicationRepository();
    $this->mailer = new MailerSpy();

    $this->cancelableApplication = ValidMembershipApplication::newDomainEntity();
    $this->repository->storeApplication( $this->cancelableApplication );
}
private function newUseCase(): CancelMembershipApplicationUseCase {
    return new CancelMembershipApplicationUseCase(
        $this->authorizer,
        $this->repository,
        $this->mailer
    );
}

With this field-based approach individual test methods can modify a specific collaborator by writing to the field before calling newUseCase.

public function testWhenAuthorizationFails_cancellationFails(): void {
    $this->authorizer = new FailingAuthorizer();

    $response = $this->newUseCase()->cancelApplication(
        new CancellationRequest( $this->cancelableApplication->getId() )
    );

    $this->assertFalse( $response->cancellationWasSuccessful() );
}

public function testWhenSaveFails_cancellationFails() {
    $this->repository->throwOnWrite();

    $response = $this->newUseCase()->cancelApplication(
        new CancellationRequest( $this->cancelableApplication->getId() )
    );

    $this->assertFalse( $response->cancellationWasSuccessful() );
}

The choice of default collaborators is important. To minimize binding in the test functions, the default collaborators should not cause any failures. This is the case both with the field-based approach and with optional named parameters.

If the authorization service failed by default, most test methods would need to modify it, even if they have nothing to do with authorization. And it is not always self-evident that they need to modify the unrelated collaborator. Imagine the default authorization service indeed failing and the testWhenSaveFails_cancellationFails test method forgetting to modify it. This test method would end up passing even if the behavior it tests is broken, since the UseCase would return the expected failure result even before getting to the point where it saves something.

This is why inside of the setUp function the example creates a "cancellable application" and puts it inside an in-memory test double of the repository.

I chose the CancelMembershipApplication UseCase as an example because it is short and easy to understand. For most UseCases it is even more important to avoid the constructor signature coupling, as this issue becomes more severe with size. And no matter how big or small the UseCase is, you benefit from not polluting your tests with unrelated setup details.

You can view the whole CancelMembershipApplicationUseCase and CancelMembershipApplicationUseCaseTest.

See also:

01 Aug 2018 1:31pm GMT

31 Jul 2018

feedPlanet Grep

Xavier Mertens: [SANS ISC] Exploiting the Power of Curl

I published the following diary on isc.sans.org: "Exploiting the Power of Curl":

Didier explained in a recent diary that it is possible to analyze malicious documents with standard Linux tools. I've been using Linux for more than 20 years and, regularly, I find new commands or new switches that help me perform recurring (boring?) tasks in a more efficient way. How to use these tools can be found by running them with the flag '-h' or '-help'. They also have a corresponding man page that describes precisely how to use the numerous options available (just type 'man <command>' in your shell)… [Read more]

[The post [SANS ISC] Exploiting the Power of Curl has been first published on /dev/random]

31 Jul 2018 8:58pm GMT

Philip Van Hoof: Separation of powers

Didn't Francken, as State Secretary, swear an oath on our Belgian constitution?

Because claiming that his hypothetical assumptions stand above a decision of the courts goes against one of the principles of our constitution: the separation of powers. Someone who holds office, has sworn an oath on that constitution and then goes completely against it commits perjury and should be punishable.

A State Secretary who cannot keep his oath and who has no respect for the Belgian constitution cannot, as far as I'm concerned, stay in office. However popular his populist drivel makes him.

31 Jul 2018 6:19pm GMT

30 Jul 2018

feedPlanet Grep

Dries Buytaert: Building digital backpacks for Syrian refugees

Digital backpack

I recently heard a heart-warming story from the University of California, Davis. Last month, UC Davis used Drupal to launch Article 26 Backpack, a platform that helps Syrian refugees document and share their educational credentials.

Over the course of the Syrian civil war, more than 12 million civilians have been displaced. Hundreds of thousands of these refugees are students, who now have to overcome the obstacle of re-entering the workforce or pursuing educational degrees away from home.

Article 26 Backpack addresses this challenge by offering refugees a secure way to share their educational credentials with admissions offices, scholarship agencies and potential employers. The program also includes face-to-face counseling to provide participants with academic advising and career development.

The UC Davis team launched their Drupal 8 application for Article 26 Backpack in four months. On the site, students can securely store their educational data, such as diplomas, transcripts and resumes. The next phase of the project will be to leverage Drupal's multilingual capabilities to offer the site in Arabic as well.

This is a great example of how organizations are using Drupal to prioritize impact. It's always inspiring to hear stories about how Drupal is changing lives for the better. Thank you to the UC Davis team for sharing their story; keep up the good work!

30 Jul 2018 3:16pm GMT

Kristof Willen: M365

Toys

No, judging by the title, this isn't a post about Microsoft. I'm talking about the Xiaomi M365 electric step (a stand-up scooter). I've lately been looking to use my car much less, partly because parking space is very limited at the train station. Getting fined for not parking in designated places surely doesn't help either. It took me a while to obtain an M365 before summer, but eventually I got it. The e-step has an autonomy of 20 km (in my case), and this just suffices for the round trip from/to the station. The step has a maximum speed of 25 km/h, which is 'acceptable': I would have preferred a bit faster, as overtaking bikes sometimes takes a while.

This e-step is quite high-tech: it features cruise control, ABS and KERS, which means I hardly use the brakes. Cruising at 25 km/h really is a blast, and I have become quite fond of my daily ride. Additionally, it allows me to explore different routes, is more versatile than a bike, and can be taken with me on the train (although its size, even when folded, is quite large).

There's quite an active group of 'developers' around this e-step, creating custom firmware that allows changing parameters such as the maximum speed or the KERS behaviour. I've tested out a few, but the additional speed comes with too much impact on the battery, so I decided to stick with the official firmware.

30 Jul 2018 8:31am GMT