25 Jul 2017

Claudio Ramirez: Post-It: enable mobile hotspot on Mobile Vikings (BE)

Yesterday was the second time I wondered why the hotspot on my Moto G5 phone wasn't working with my tablet (after an OS update on both the phone and the tablet).

The tablet connected fine to the phone's hotspot over wifi, but the traffic was not routed to the Internet. I had forgotten how I debugged the problem the first time, so here's a little post-it not to forget it again. The secret is to change the APN type to:

APN type: supl,default

For reference, the other settings for Mobile Vikings:

APN: web.be
Username: web
Password: web
MCC: 206
MNC: 20
Authentication Type: PAP


Filed under: Uncategorized Tagged: hotspot, mobile, mobile vikings, post-it

25 Jul 2017 10:34am GMT

20 Jul 2017

Dries Buytaert: Arsenal using Drupal

As a Belgian sports fan, I will always be loyal to the Belgium National Football Team. However, I am willing to extend my allegiance to Arsenal F.C. because they recently launched their new site on Drupal 8! As one of the most successful teams in England's Premier League, Arsenal has been lacing up for over 130 years. On the new Drupal 8 site, Arsenal fans can access news, club history, ticket services, and live match results. This is also a great example of collaboration, with two Drupal companies working together: Inviqa in the UK and Phase2 in the US. If you want to see Drupal 8 on Arsenal's roster, check out https://www.arsenal.com!

20 Jul 2017 7:11pm GMT

19 Jul 2017

Philip Van Hoof: Zij bestaan voor ons. Wij bestaan niet voor hen. ("They exist for us. We do not exist for them.")

19 Jul 2017 11:07pm GMT

Xavier Mertens: [SANS ISC] Bots Searching for Keys & Config Files

I published the following diary on isc.sans.org: "Bots Searching for Keys & Config Files".

If you don't know our "404" project, I would definitely recommend having a look at it! The idea is to track HTTP 404 errors returned by your web servers. I like to compare the value of 404 errors found in web site log files to "dropped" events in firewall logs. They can be hugely valuable for detecting ongoing attacks or attackers performing reconnaissance… [Read more]
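As a rough sketch of the idea (assuming Apache's combined log format; the log path will differ per system), the most frequently requested missing URLs can be pulled out of an access log like this:

# Top missing URLs; field 9 is the status code, field 7 the request path:
$ awk '$9 == 404 {print $7}' /var/log/httpd/access_log | sort | uniq -c | sort -rn | head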

[The post [SANS ISC] Bots Searching for Keys & Config Files has been first published on /dev/random]

19 Jul 2017 11:11am GMT

18 Jul 2017

Mattias Geniar: Choose source IP with ping to force ARP refreshes

This is a little trick you can use to force outgoing traffic via one particular IP address on a server. By default, if your server has multiple IPs, it'll use the default IP address for any outgoing traffic.

However, if you're changing IP addresses or testing failovers, you might want to force traffic to leave your server as if it's coming from one of the other IP addresses. That way, upstream switches learn your server has this IP address and they can update their ARP caches.

ping allows you to do that very easily (sending ICMP traffic).

$ ping 8.8.8.8 -I 10.0.1.4

The -I parameter sets the interface via which packets should be sent; it accepts either an interface name (like eth1) or an IP address. This way, traffic leaves your server with source IP 10.0.1.4.

The docs describe -I like this:

-I interface address
 Set  source address to specified interface address. Argument may be numeric IP
 address or name of device. When pinging IPv6 link-local address this option is
 required.

So it works with both individual IP addresses and interface names.
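The same thing with an interface name instead of an address (eth1 is only an example; use whichever interface holds the address you want to advertise):

$ ping 8.8.8.8 -I eth1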

The post Choose source IP with ping to force ARP refreshes appeared first on ma.ttias.be.

18 Jul 2017 7:30pm GMT

Mattias Geniar: Apache httpd 2.2.15-60: underscores in hostnames are now blocked

A minor update to the Apache httpd project on CentOS 6 had an unexpected consequence. The update from 2.2.15-59 to 2.2.15-60, advised because of a small security issue, started respecting RFC 1123 and, as a result, stopped allowing underscores in hostnames.

I spent a while debugging a similar problem with IE dropping cookies on hostnames with underscores, because it turns out that's not a valid "hostname" as per the definition in RFC 1123.

Long story short, the minor update to Apache broke URLs whose hostname contains an underscore.

These worked fine before, but now started throwing errors like this:

Bad Request

Your browser sent a request that this server could not understand.
Additionally, a 400 Bad Request error was encountered while trying to use an ErrorDocument to handle the request.

The error logs showed this message:

[error] client sent HTTP/1.1 request without hostname (see RFC2616 section 14.23)

The short-term workaround was to downgrade Apache again.

$ yum downgrade httpd-tools httpd mod_ssl

And that allowed the underscores again; the long-term plan is to migrate those (sub)domains to names without underscores.
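A possible follow-up (not part of the original workaround, and the package list is just an example) is to keep yum from pulling the newer packages back in on the next update run, either via an exclude line or with the yum-plugin-versionlock package installed:

# Exclude the packages from future updates ...
$ echo "exclude=httpd* mod_ssl*" >> /etc/yum.conf
# ... or pin the currently installed versions:
$ yum versionlock httpd httpd-tools mod_ssl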

The post Apache httpd 2.2.15-60: underscores in hostnames are now blocked appeared first on ma.ttias.be.

18 Jul 2017 7:30pm GMT

Sven Vermeulen: Project prioritization

This is a long read, skip to "Prioritizing the projects and changes" for the approach details...

Organizations and companies generally have an IT workload (dare I say, backlog?) which needs to be properly assessed, prioritized and taken up. Sometimes, the IT team(s) get an amount of budget and HR resources to "do their thing", while others need to continuously ask for approval to launch a new project or instantiate a change.

Sizeable organizations even need engineering and development effort for IT projects that is not readily available: specialized teams exist, but governance-wise they are assigned to projects. And as everyone thinks their own project is the top priority, many will be disappointed when they hear there are no resources available for their pet project.

So... how should organizations prioritize such projects?

Structure your workload, the SAFe approach

A first exercise you want to implement is to structure the workload, ideas or projects. Some changes are small, others are large. Some are disruptive, others are evolutionary. Trying to prioritize all different types of ideas and changes in the same way is not feasible.

Structuring workload is a common approach. Changes are grouped in projects, projects grouped in programs, programs grouped in strategic tracks. Lately, with the rise in Agile projects, a similar layering approach is suggested in the form of SAFe.

In the Scaled Agile Framework a structure is suggested that uses, as a top-level approach, value streams. These are strategically aligned steps that an organization wants to use to build solutions that provide a continuous flow of value to a customer (which can be internal or external). For instance, for a financial service organization, a value stream could focus on 'Risk Management and Analytics'.

SAFe full framework overview, picture courtesy of www.scaledagileframework.com

The value streams are supported through solution trains, which implement particular solutions. This could be a final product for a customer (fitting in a particular value stream) or a set of systems which enable capabilities for a value stream. It is at this level, imo, that the benefits exercises from IT portfolio management and benefits realization management research play their role (more about that later). For instance, a solution train could focus on an 'Advanced Analytics Platform'.

Within a solution train, agile release trains provide continuous delivery for the various components or services needed within one or more solutions. Here, the necessary solutions are continuously delivered in support of the solution trains. At this level, focus is put on the culture within the organization (think DevOps) and on the relatively short delivery periods. This is the level where I see 'projects' come into play.

Finally, you have the individual teams working on deliverables supporting a particular project.

SAFe is just one of the many methods for organization and development/delivery management. It is a good blueprint to look into, although I fear that larger organizations will find it challenging to dedicate resources in a manageable way. For instance, how to deal with specific expertise across solutions which you can't dedicate to a single solution at a time? What if your organization only has two telco experts to support dozens of projects? Keep that in mind, I'll come back to that later...

Get non-content information about the value streams and solutions

Besides structuring the workload, you need to obtain information about the solutions that you want to implement (keeping with the SAFe terminology). And bear in mind that seemingly dull things, such as ensuring your firewalls are up to date, are also deliverables within a larger ecosystem. Now, with information about the solutions, I don't mean the content-wise information; instead, focus on other areas.

Way back, in 1952, Harry Markowitz introduced Modern portfolio theory as a mathematical framework for assembling a portfolio of assets such that the expected return is maximized for a given level of risk (quoted from Wikipedia). This was later used in an IT portfolio approach by McFarlan in his Portfolio Approach to Information Systems article, published in September 1981.

There it was already introduced that risk and return shouldn't be looked at from an individual project viewpoint, but in terms of how each project contributes to the overall risk and return. A balance, if you wish. His article attempts to categorize projects based on risk profiles in various areas. Personally, I see the suggested categorization more as a way of supporting workload assessments (how many man-days of work will this be), but I digress.

Since then, other publications came up which tried to document frameworks and methodologies that facilitate project portfolio prioritization and management. The focus often boils down to value or benefits realization. In The Information Paradox, John Thorp comes up with a benefits realization approach, which enables organizations to better define and track benefits realization - although it again boils down to larger transformation exercises rather than the lower-level backlogs. The realm of IT portfolio management and benefits realization management gives interesting pointers for the theoretical side of prioritizing projects.

Still, although one can hardly state the resources are incorrect, a common question is how to make this tangible. Personally, I tend to view the above on the value stream level and solution train level. Here, we have a strong alignment with benefits and value for customers, and we can leverage the ideas of past research.

The information needed at this level often boils down to strategic insights and business benefits, coarse-grained resource assessments, with an important focus on quality of the resources. For instance, a solution delivery might take up 500 days of work (rough estimation) but will also require significant back-end development resources.

Handling value streams and solutions

As we implement this on the highest level in the structure, it should be conceivable that the overview of the value streams (a dozen or so) and solutions (a handful per value stream) is manageable, and something that at an executive level is feasible to work with. These are the larger efforts for structuring and making strategic alignment. Formal methods for prioritization are generally not implemented or described.

In my company, there are exercises that are aligning with SAFe, but it isn't company-wide. Still, there is a structure in place that (within IT) one could map to value streams (with some twisting ;-)), and, within value streams, there are structures in place that one could map to the solution train exercises.

We could assume that the enterprise knows about its resources (people, budget ...) and makes a high-level suggestion on how to distribute the resources in the mid-term (such as the next 6 months to a year). This distribution is challenged and worked out with the value stream owners. See also "lean budgeting" in the SAFe approach for one way of dealing with this.

There is no prioritization of value streams. The enterprise has already made its decision on what it finds to be the important values and benefits, and has captured those in value streams.

Within a value stream, the owner works together with the customers (internal or external) to position and bring out solutions. My experience here is that prioritization is generally based on timings and expectations from the customer. In case of resource contention, the most challenging decision to make here is to put a solution down (meaning, not to pursue the delivery of a solution), and such decisions are rarely taken.

Prioritizing the projects and changes

In the lower echelons of the project portfolio structure, we have the projects and changes. Let's say that the levels here are projects (agile release trains) and changes (team-level). Here, I tend to look at prioritization on project level, and this is the level that has a more formal approach for prioritization.

Why? Because unlike the higher levels, where the prioritization is generally quality-oriented on a manageable number of streams and solutions, we have a large quantity of projects and ideas. Hence, prioritization is more quantity-oriented, and formal methods are more efficient at handling it.

The method that is used in my company uses scoring criteria on a per-project level. This is not innovative per se, as past research has also revealed that project categorization and mapping is a powerful approach for handling project portfolios. Just look for "categorizing priority projects it portfolio" in Google and you'll find ample resources. Kendall's Advanced Project Portfolio Management and the PMO (book) lists several example project scoring criteria. But allow me to explain our approach.

It basically is like so:

  1. Each project selects three value drivers (list decided up front)
  2. For the value drivers, the projects check if they contribute to it slightly (low), moderately (medium) or fully (high)
  3. The value drivers have weights, as do the values. Sum the resulting products to get a priority score
  4. Have the priority score validated by a scoring team

Let's get to the details of it.

For the IT projects within the infrastructure area (which is what I'm active in), we have around 5 scoring criteria (value drivers) that are value-stream agnostic, and then 3 to 5 scoring criteria that are value-stream specific. Each scoring criterion has three potential values: low (2), medium (4) and high (9). The numbers are the weights that are given to the values.

A scoring criterion also has a weight. For instance, we have a criterion on efficiency (read: business case) which has a weight of 15, so a score of medium within that criterion gives a total value of 60 (4 times 15). The potential values here are based on the return on investment, with low meaning a return within two years, medium within a year, and high within a few months (don't hold me to the actual values, but you get the idea).

The sum of all values gives a priority score. Now, hold your horses, because we're not done yet. There is a scoring rule that says a project can only be scored by at most 3 scoring criteria. Hence, project owners need to see which scoring areas their project is mostly visible in, and use those scoring criteria. This rule supports the notion that people shouldn't bring forward ideas that will fix world hunger and cure cancer, but specific, well-scoped ideas (the former are generally huge projects, while the latter require far fewer resources).
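To make that concrete, a hypothetical project scored on three value drivers (the names, weights and scores below are illustrative, not the actual criteria) would end up with a priority score like this:

# efficiency:     weight 15, scored medium (4) -> 60
# risk reduction: weight 10, scored high (9)   -> 90
# compliance:     weight  5, scored low (2)    -> 10
$ echo $(( 15*4 + 10*9 + 5*2 ))
160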

OK, so you have a score - is that your priority? No. As a project always falls within a particular value stream, we have a "scoring team" for each value stream which does a number of things. First, it checks if your project really belongs in the right value stream (but that's generally implied) and has a deliverable that fits the solution or target within that stream. Projects that don't give any value or aren't asked by customers are eliminated.

Next, the team validates if the scoring that was used is correct: did you select the right values (low, medium or high) matching the methodology for said criteria? If not, then the score is adjusted.

Finally, the team validates if the resulting score is perceived to be OK or not. Sometimes, ideas just don't map correctly on scoring criteria, and even though a project has a huge strategic importance or deliverable it might score low. In those cases, the scoring team can adjust the score manually. However, this is more of a fail-safe (due to the methodology) rather than the norm. About one in 20 projects gets its score adjusted. If too many adjustments come up, the scoring team will suggest a change in methodology to rectify the situation.

With the score obtained and validated by the scoring team, the project is given a "go" to move to the project governance. It is the portfolio manager that then uses the scores to see when a project can start.

Providing levers to management

Now, these scoring criteria are not established from a random number generator. An initial suggestion was made on the scoring criteria, and their associated weights, to the higher levels within the organization (read: the people in charge of the prioritization and challenging of value streams and solutions).

The same people are those that approve the weights on the scoring criteria. If management (as this is often the level at which this is decided) feels that business case is, overall, more important than risk reduction, then they will be able to put a higher value in the business case scoring than in the risk reduction.

The only constraint is that the total value of all scoring criteria must be fixed. So an increase in one scoring criterion implies a reduction in at least one other. Also, changing the weights (or even the scoring criteria themselves) cannot be done frequently. There is some inertia in project prioritization: not in the implementation (because that is a matter of following through) but in the support it will get within the organization itself.

Management can then use external benchmarks and other sources to gauge the level that an organization is at, and then - if needed - adjust the scoring weights to fit their needs.

Resource allocation in teams

Portfolio managers use the scores assigned to the projects to drive their decisions as to when (and which) projects to launch. The trivial approach is to always pick the projects with the highest scores. But that's not all.

Projects can have dependencies on other projects. If these dependencies are "hard" and non-negotiable, then the upstream project (the one being depended on) inherits the priority of the downstream project (the one depending on the first) if the downstream project has a higher priority. For soft dependencies, however, the teams need to validate whether they can (or have to) wait, or whether workarounds can be implemented if needed.

Projects also have specific resource requirements. A project might have a high priority, but if it requires expertise (say DBA knowledge) which is unavailable (because those resources are already assigned to other ongoing projects) then the project will need to wait (once resources are fully allocated and the projects are started, then they need to finish - another reason why projects have a narrow scope and an established timeframe).

For engineers, operators, developers and other roles, this approach allows them to see which workload is more important than the rest. As long as their scope stays within a single value stream, the method above is sufficient. But what if a resource has two projects, each in a different value stream? As each value stream has its own scoring criteria it can use (and weight), one value stream could systematically have higher scores than others...

Mixing and matching multiple value streams

To allow projects to be somewhat comparable in priority values, an additional rule has been made in the scoring methodology: value streams must have a comparable number of scoring criteria (value drivers), and the total value of all criteria must be fixed (as already mentioned). So if there are four scoring criteria and the total value is fixed at 20, then one value stream can have its criteria at (5,3,8,4) while another has them at (5,5,5,5).

This is still not fully adequate, as one value stream could use a single criterion with the maximum weight (20,0,0,0). However, we elected not to put in an additional constraint, and have management work things out if the situation ever comes up. Luckily, even managers are just human and they tend to follow the notion of well-balanced value drivers.

The result is that two projects will have priority values that are sufficiently comparable to allow experts to be shared across value streams without monopolizing these important resources for a single value stream portfolio.

Current state

The scoring methodology has been around for a few years already. Initially, it had fixed scoring criteria used by three value streams (out of seven, the other ones did not use the same methodology), but this year we switched to support both value stream agnostic criteria (like in the past) as well as value stream specific ones.

The methodology is the furthest along in one value stream (with a focus on around 1000 projects) and is being taken up by two others (they are still looking at what their stream-specific criteria are before switching).

18 Jul 2017 6:40pm GMT

17 Jul 2017

Mattias Geniar: mysqldump without table locks (MyISAM and InnoDB)

It's one of those things I always have to Google again and again, so documenting for my own sanity.

Here's how to run mysqldump without any locks, which works on both the MyISAM and InnoDB storage engines. There are no READ or WRITE locks, which means the dump will have little to no influence on the machine (except for the CPU & disk I/O needed to take the back-up), but your data will also not be consistent.

This is useful for testing migrations though, where you might not need locks, but are more interested in timing back-ups or spotting other timeout-related bugs.

$ mysqldump --compress --quick --triggers --routines --lock-tables=false --single-transaction {YOUR_DATABASE_NAME}

The key parameters are --lock-tables=false (for MyISAM) and --single-transaction (for InnoDB).
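A minimal usage sketch (the database name and output path are placeholders), piping the dump through gzip to keep it small:

$ mysqldump --compress --quick --triggers --routines --lock-tables=false --single-transaction my_database | gzip > /tmp/my_database.sql.gz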

If you need more options or flexibility, xtrabackup is a tool to check out.

The post mysqldump without table locks (MyISAM and InnoDB) appeared first on ma.ttias.be.

17 Jul 2017 8:30pm GMT

13 Jul 2017

Mattias Geniar: Unix time 1.500.000.000

In a couple of hours, we'll roll over Unix time to more than 1.500.000.000.

The current time is 2017-07-13, 10:35:44, or 1499934944 as a UNIX timestamp.

Calculate your own time conversion at time.mattiasgeniar.be.
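With GNU date at hand, a quick check of the current timestamp and of when the rollover happens (1500000000 falls on 14 July 2017 at 02:40 UTC):

$ date +%s
1499934944
$ date -u -d @1500000000
Fri Jul 14 02:40:00 UTC 2017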

The post Unix time 1.500.000.000 appeared first on ma.ttias.be.

13 Jul 2017 8:38am GMT

12 Jul 2017

Philip Van Hoof: Because …

A QEventLoop is a heavy dependency. Not every worker thread wants to require all its consumers to have one. This renders QueuedConnection not always suitable. I get that signals and slots are a useful mechanism, also for thread-communications. But what if your worker thread has no QEventLoop yet wants to wait for a result of what another worker-thread produces?

QWaitCondition is often what you want. Don't be afraid to use it. Also, don't be afraid to use QFuture and QFutureWatcher.

Just be aware that the guys at Qt have not yet decided what the final API for the asynchronous world should be. The KIO guys discussed making a QJob and/or a QAbstractJob. Because QFuture is result (of T) based (and waits and blocks on it, using a condition). And a QJob (derived from what KJob currently is) isn't, or wouldn't, or shouldn't block (such a QJob should allow for interactive continuation, for example - "overwrite this file? Y/N"). Meanwhile you want a clean API to fetch the result of any asynchronous operation, blocked waiting for it or not. It's an uneasy choice for an API designer. Don't all of us want APIs that can withstand the test of time? We do, yes.

Yeah. The world of programming is, at some level, complicated. But I'm also sure something good will come out of it. Meanwhile, model your asynchronous APIs on the principles of QFuture and/or KJob: return something that can be waited for.

Sometimes a prediction of how it will be is worth more than a promise. I honestly can't predict what Thiago will approve, commit or endorse. And I shouldn't.

12 Jul 2017 9:41pm GMT

Xavier Mertens: [SANS ISC] Backup Scripts, the FIM of the Poor

I published the following diary on isc.sans.org: "Backup Scripts, the FIM of the Poor".

File Integrity Management or "FIM" is an interesting security control that can help to detect unusual changes in a file system. For example, on a server, there are directories that do not change often. Example with a UNIX environment:

[Read more]
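The diary's own example is behind the link; as a generic illustration of the same idea only (a sketch, not Xavier's script; the paths and snapshot file are placeholders), a rarely-changing directory can be baselined and re-checked like this:

# Baseline a directory that rarely changes, then list files whose content changed:
$ find /etc -type f -exec sha256sum {} + > /var/tmp/etc.sha256
$ sha256sum -c /var/tmp/etc.sha256 2>/dev/null | grep -v ': OK$'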

[The post [SANS ISC] Backup Scripts, the FIM of the Poor has been first published on /dev/random]

12 Jul 2017 11:48am GMT

Dries Buytaert: The reason why Acquia supports Net Neutrality

If you visit Acquia's homepage today, you will be greeted by this banner:

Acquia supports net neutrality

We've published this banner in solidarity with the hundreds of companies who are voicing their support of net neutrality.

Net neutrality regulations ensure that web users are free to enjoy whatever sites they choose without interference from Internet Service Providers (ISPs). These protections establish an open web where people can explore and express their ideas. Under the current administration, the U.S. Federal Communications Commission favors less-strict regulation of net neutrality, which could drastically alter the way that people experience and access the web. Today, Acquia is joining the ranks of companies like Amazon, Atlassian, Netflix and Vimeo to advocate for strong net neutrality regulations.

Why the FCC wants to soften net neutrality regulations

In 2015, the United States implemented strong protections favoring net neutrality after ISPs were classified as common carriers under Title II of the Communications Act of 1934. This classification catalogs broadband as an "essential communication service", which means that services are to be delivered equitably and costs kept reasonable. Title II was the same classification granted to telcos decades ago to ensure consumers had fair access to phone service. Today, the Title II classification of ISPs protects the open internet by making paid prioritization, blocking or throttling of traffic unlawful.

The issue of net neutrality has come under scrutiny since the appointment of Ajit Pai as Chairman of the Federal Communications Commission. Pai favors less regulation and has suggested that the net neutrality rules of 2015 impede the ISP market. He argues that while people may support net neutrality, the market requires more competition to establish faster and cheaper access to the Internet. Pai believes that net neutrality regulations have the potential to curb investment in innovation and could heighten the digital divide. As FCC Chairman, Pai wants to reclassify broadband services under less-restrictive regulations and to eliminate definitive protections for the open internet.

In May 2017, the three members of the Federal Communications Commission voted 2-1 to advance a plan to remove the Title II classification from broadband services. That vote launched a public comment period, which is open until mid-August. After this period, the commission will take a final vote.

Why net neutrality protections are good

I strongly disagree with Pai's proposed reclassification of net neutrality. Without net neutrality, ISPs can determine how users access websites, applications and other digital content. Today, both the free flow of information, and exchange of ideas benefit from 'open highways'. Net neutrality regulations ensure equal access at the point of delivery, and promote what I believe to be the fairest competition for content and service providers.

If the FCC rolls back net neutrality protections, ISPs would be free to charge site owners for priority service. This goes directly against the idea of an open web, which guarantees an unfettered and decentralized platform to share and access information. There are many challenges in maintaining an open web, including "walled gardens" like Facebook and Google. We call them "walled gardens" because they control the applications, content and media on their platform. While these closed web providers have accelerated access to and adoption of the web, they also raise concerns around content control and privacy. Issues of net neutrality present a similar challenge.

When certain websites have degraded performance because they can't afford the premiums asked by ISPs, it affects how we explore and express ideas online. Not only does it drive up the cost of maintaining a website, but it undermines the internet as an open space where people can explore and express their ideas. It creates a class system that puts smaller sites or less funded organizations at a disadvantage. Dismantling net neutrality regulations raises the barrier for entry when sharing information on the web as ISPs would control what we see and do online. Congruent with the challenge of "walled gardens", when too few organizations control the media and flow of information, we must be concerned.

In the end, net neutrality affects how people, including you and me, experience the web. The internet's vast growth is largely a result of its openness. Contrary to Pai's reasoning, the open web has cultivated creativity, spawned new industries, and protects the free expression of ideas. At Acquia, we believe in supporting choice, competition and free speech on the internet. The "light touch" regulations now proposed by the FCC may threaten that very foundation.

What you can do today

If you're also concerned about the future of net neutrality, you can share your comments with the FCC and the U.S. Congress (it will only take you a minute!). You can do so through Fight for the Future, who organized today's day of action. The 2015 ruling that classified broadband service under Title II came after the FCC received more than 4 million comments on the topic, so let your voice be heard.

12 Jul 2017 10:44am GMT

11 Jul 2017

Mattias Geniar: Launching the cron.weekly forum

I've been writing a weekly newsletter on Linux & open source technologies for nearly 2 years now at cron.weekly and to this day I'm amazed by all the feedback and response I've gotten from it. What's even more fun is watching the subscriber base grow on a weekly basis, to over 6.000 users already!

(That initial big spike is actually from when I imported the MailChimp list into a self-hosted Sendy installation)

With every issue there's good feedback on the projects I list, comments on the stories, ... it's so much fun!

But there's a downside ...

Taking cron.weekly to the next level

All that feedback has been directed at me. I've been mailing thousands of Linux enthusiasts every week and it's been one-way communication. I talk, they listen. That seems like a waste of potential.

Imagine if everyone on that newsletter could share their knowledge and expertise and connect with one another? I'm definitely not the smartest person to be mailing them all; there are far more intelligent folks subscribed. I want to get them involved!

To start building a community around cron.weekly, I've launched the cron.weekly forum. Yes, there are things like Reddit, Stack Overflow, Quora, ... and all other fun places to discuss topics. But there's one thing they don't have: the like-minded people that subscribe to cron.weekly, who each have the same interest at heart: caring about Linux, open source & web technologies.

Launching the forum is an experiment; there are already so many places out there to "waste your time" online, but I'm confident it can become an interactive place to discuss new technologies, share ideas or launch new open source projects.

Heck, there are already interesting topics on load balancing features & requirements, end-to-end encrypted backups, new open source projects like git-dit, ...

Highlighting questions

If questions posted to the forum remain unanswered, I'll call upon the great powers of cron.weekly subscribers to highlight them and raise awareness, get more eyes on the topic & find the best answer possible.

The last cron.weekly issue #88 already had an "Ask cron.weekly" section to get that started.

I'm confident about the future of the forum; I think the newsletter can be a great way to draw more attention to difficult-to-solve questions and allow cron.weekly to actively help its community members.

How can you help?

Now let's build a community together. :-) (1)

(1) I know, it's cheesy -- but I had to end my post with something, right?

The post Launching the cron.weekly forum appeared first on ma.ttias.be.

11 Jul 2017 8:29am GMT

10 Jul 2017

Frank Goossens: Went Summer Breaking in Scotland

So our summer holiday in Scotland did not resemble the video below (Mark Ronson feat. Kevin Parker of Tame Impala), but it was magnificent nonetheless :-)

[Embedded YouTube video]

10 Jul 2017 4:34pm GMT

09 Jul 2017

Wim Leers: On simplicity & maintainability: CDN module for Drupal 8

The first release of the CDN module for Drupal was 9.5 years ago yesterday: cdn 5.x-1.0-beta1 was released on January 8, 2008!

Excitement

On January 27, 2008, the first RC followed, with boatloads of new features. Over the years, it was ported to Drupal 6, 7 and 8 and gained more features (I effectively added every single feature that was requested - I loved empowering the site builder). I did the same with my Hierarchical Select module.

I was a Computer Science student for the first half of those 9.5 years, and it was super exciting to see people actually use my code on hundreds, thousands and even tens of thousands of sites! In stark contrast with the assignments at university, where the results were graded, then discarded.

Frustration

Unfortunately this approach resulted in feature-rich modules, with complex UIs to configure them, and many, many bug reports and support requests, because they were so brittle and confusing. Rather than making the 80% case simple, I supported 99% of needed features, and made things confusing and complex for 100% of the users.


Main CDN module configuration UI in Drupal 7 (the "Details" tab of the CDN settings).


Learning

In Acquia's Office of the CTO, my job is effectively to "make Drupal better & faster".

In 2012-2013, it was improving the authoring experience by adding in-place editing and tightly integrating CKEditor. Then it shifted in 2014 and 2015 to "make Drupal 8 shippable", first by working on the cache system, then on the render pipeline and finally on the intersection of both: Dynamic Page Cache and BigPipe. After Drupal 8 shipped at the end of 2015, the next thing became "improve Drupal 8's REST APIs", which grew into the API-First Initiative.

All this time (5 years already!), I've been helping to build Drupal itself (the system, the APIs, the infrastructure, the overarching architecture), and have seen the long-term consequences from both up close and afar: the concepts required to understand how it all works, the APIs to extend, override and plug in to. In that half decade, I've often cursed past commits, including my own!

That's what led to:

CDN module in Drupal 8: radically simpler

I started porting the CDN module to Drupal 8 in March 2016 - a few months after the release of Drupal 8. It is much simpler to use (just look at the UI). It has less overhead (the UI is in a separate module, the altering of file URLs has far simpler logic). It has lower technical complexity (File Conveyor support was dropped, it no longer needs to detect HTTP vs HTTPS: it always uses protocol-relative URLs, less unnecessary configurability, the farfuture functionality no longer tries to generate file and no longer has extremely detailed configurability).

In other words: the CDN module in Drupal 8 is much simpler. And has much better test coverage too. (You can see this in the tarball size too: it's about half of the Drupal 7 version of the module, despite significantly more test coverage!)


CDN UI module in Drupal 8 (version 3.0-rc2).


All the fundamentals:
  • the ability to use simple CDN mappings, including conditional ones depending on file extensions, auto-balancing, and complex combinations of all of the above
  • preconnecting (and DNS prefetching for older browsers)
  • a simple UI to set it up - in fact, much simpler than before!
Changed/improved:
  1. the CDN module now always uses protocol-relative URLs, which means there's no more need to distinguish between HTTP and HTTPS, which simplifies a lot
  2. the UI is now a separate module
  3. the UI is optional: for power users there is a sensible configuration structure with strict config schema validation
  4. complete unit test coverage of the heart of the CDN module, thanks to D8's improved architecture
  5. preconnecting (and DNS prefetching) using headers rather than tags in <head>, which allows a much simpler/cleaner Symfony response subscriber
  6. tours instead of advanced help, which very often was ignored
  7. there is nothing to configure for the SEO (duplicate content prevention) feature anymore
  8. nor is there anything to configure for the Forever cacheable files feature anymore (named Far Future expiration in Drupal 7), and it's a lot more robust
Removed:
  1. File Conveyor support
  2. separate HTTPS mapping (also mentioned above)
  3. all the exceptions (blacklist, whitelist, based on Drupal path, file path…) - all of them are a maintenance/debugging/cacheability nightmare
  4. configurability of SEO feature
  5. configurability of unique file identifiers for the Forever cacheable files feature
  6. testing mode

For very complex mappings, you must manipulate cdn.settings.yml - there's inline documentation with examples there. Those who need the complex setups don't mind reading three commented examples in a YAML file. This used to be configurable through the UI, but it also was possible to configure it "incorrectly", resulting in broken sites - that's no longer possible.

There's comprehensive test coverage for everything in the critical path, and basic integration test coverage. Together, they ensure peace of mind, and uncover bugs in the next minor Drupal 8 release: BC breaks are detected early and automatically.

The results after 8 months: contributed module maintainer bliss

The first stable release of the CDN module for Drupal 8 was published on December 2, 2016. Today, I released the first patch release: cdn 8.x-3.1. The change log is tiny: a PHP notice fixed, two minor automated testing infrastructure problems fixed, and two new minor features added.

We can now compare the Drupal 7 and 8 versions of the CDN module:

In other words: maintaining this contributed module now requires pretty much zero effort!

Conclusion

For your own Drupal 8 modules, no matter if they're contributed or custom, I recommend a few key rules:

This is more empowering for the Drupal site builder persona, because they can't shoot themselves in the foot anymore. It's no longer necessary to learn the complex edge cases in each contributed module's domain, because they're no longer exposed in the UI. In other words: domain complexities no longer leak into the UI.

At the same time, it hugely decreases the risk of burnout in module maintainers!

And of course: use the CDN module, it's rock solid! :)

Related reading

Finally, read Amitai Burstein's "OG8 Development Mindset"! He makes very similar observations, albeit about a much bigger contributed module (Organic Groups). Some of my favorite quotes:

  1. About edge cases & complexity:

    Edge cases are no longer my concern. I mean, I'm making sure that edge cases can be done and the API will cater to it, but I won't go too far and implement them. […] we've somewhat reduced the flexibility in order to reduce the complexity; but while doing so, made sure edge cases can still hook into the process.

  2. About tests:

    I think there is another hidden merit in tests. By taking the time to carefully go over your own code - and using it - you give yourself some pause to think about the necessity of your recently added code. Do you really need it? If you are not afraid of writing code and then throwing it out the window, and you are true to yourself, you can create a better, less complex, and polished module.

  3. About feature set & UI:

    One of the mistakes that I feel were made in OG7 was exposing a lot of the advanced functionality in the UI. […] But these are all advanced use cases. When thinking about how to port them to OG8, I think we found the perfect solution: we didn't port it.


  1. I also did my bachelor thesis about Drupal + CDN integration, which led to the Drupal 6 version of the module. ↩︎

  2. Unit tests in Drupal 8 are wonderful, they're nigh impossible in Drupal 7. They finish running in seconds. ↩︎

09 Jul 2017 6:02pm GMT

08 Jul 2017

Xavier Mertens: [SANS ISC] A VBScript with Obfuscated Base64 Data

I published the following diary on isc.sans.org: "A VBScript with Obfuscated Base64 Data".

A few months ago, I posted a diary to explain how to search for (malicious) PE files in Base64 data. Base64 is indeed a common way to distribute binary content in an ASCII form. There are plenty of scripts based on this technique. On my Macbook, I'm using a small service created via Automator to automatically decode highlighted Base64 data and submit them to my Viper instance for further analysis… [Read more]
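For context, a generic illustration (not the script from the diary): a Windows PE file starts with the "MZ" header, so a Base64-encoded PE that begins at the start of the stream opens with the telltale string "TVqQ", which is easy to grep for (the path is a placeholder, and only encodings aligned on the start of the file are caught this way):

$ grep -rl "TVqQAAMAAAAEAAAA" /var/www 2>/dev/null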

[The post [SANS ISC] A VBScript with Obfuscated Base64 Data has been first published on /dev/random]

08 Jul 2017 12:43pm GMT