13 Dec 2018


Frank Goossens: How to fix render-blocking jquery.js in Autoptimize

Autoptimize by default excludes inline JS and jquery.js from optimization. Inline JS is excluded because it is a typical cache-buster (due to changing variables in it), and as inline JS often requires jQuery to be available, jquery.js needs to be excluded as well. The result of this "safe default", however, is that jquery.js is a render-blocking resource. So even if you're doing "inline & defer CSS", your Start-Render time (or one of the variations thereof) will be sub-optimal.

Jonas, the smart guy behind criticalcss.com, proposed to embed inline JS that requires jQuery in a function that executes after the DOMContentLoaded event. So I created a small code snippet as a proof of concept which hooks into Autoptimize's API, and that seems to work just fine.
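
A minimal sketch of the idea (not the actual gist; the filter names are Autoptimize's public API as I know it, and the wrapping regex is a naive illustration):

<?php
// Rough proof of concept: let Autoptimize optimize (and thus defer)
// jquery.js, and wrap jQuery-dependent inline scripts in a
// DOMContentLoaded handler so they only run once jQuery has loaded.

// Remove jquery.js from Autoptimize's default JS exclusion list.
add_filter( 'autoptimize_filter_js_exclude', function( $exclude ) {
    return str_replace( 'js/jquery/jquery.js', '', $exclude );
} );

// Wrap inline scripts that reference jQuery.
add_filter( 'autoptimize_html_after_minify', function( $html ) {
    return preg_replace_callback(
        '#<script([^>]*)>(.*?)</script>#is',
        function( $m ) {
            // Leave external scripts and non-jQuery inline code untouched.
            if ( stripos( $m[1], 'src=' ) !== false || strpos( $m[2], 'jQuery' ) === false ) {
                return $m[0];
            }
            return '<script' . $m[1] . '>document.addEventListener("DOMContentLoaded",function(){' . $m[2] . '});</script>';
        },
        $html
    );
} );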

The next step is having some cutting-edge Autoptimize users test this in the wild. You can view/download the code from this gist and add it as a code snippet (or, if you insist, in your theme's functions.php). Your feedback is more than welcome; I'm sure you know where to find me!


13 Dec 2018 2:55pm GMT

Wim Leers: State of JSON:API (December 2018)

Gabe, Mateu and I just released the third RC of JSON:API 2, so time for an update! The last update is from three weeks ago.

What happened since then? In a nutshell:

RC3

Curious about RC3? The RC2 → RC3 diff has five key changes:

  1. ndobromirov is all over the issue queue to fix performance issues: he fixed a critical performance regression in 2.x vs 1.x that is only noticeable when requesting responses with hundreds of resources (entities); he also fixed another performance problem that manifests itself only in those circumstances, but also exists in 1.x.
  2. One major bug was reported by dagmar: the ?filter syntax that we made less confusing in RC2 was a big step forward, but we had missed one particular edge case! (The snippet after this list shows the general shape of the filter syntax.)
  3. A pretty obscure broken edge case was discovered, though probably a fairly common one for those creating custom entity types: optional entity reference base fields that are empty made the JSON:API module stumble. It turns out optional entity reference fields get different default values depending on whether they're base fields or configured fields! Fortunately, three people gave valuable information that led to finding this root cause and the solution. Thanks, olexyy, keesee & caseylau!
  4. A minor bug was fixed that only occurs when installing JSON:API Extras and configuring it in a certain way.
  5. Version 1.1 RC1 of the JSON:API spec was published; it includes two clarifications to the existing spec. We were already doing one of them correctly (test coverage was added to guarantee it), and we now comply with the other one too. Everything else in version 1.1 of the spec is additive; this was the only thing that could be disruptive, so we chose to do it ASAP.
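
For those who haven't used it, this is the general shape of JSON:API's collection filters as documented for the Drupal module; the resource type, field and values below are made up for illustration:

# Short form: articles whose title equals "Foo"
GET /jsonapi/node/article?filter[title]=Foo

# Canonical long form of the same filter (query string wrapped for readability)
GET /jsonapi/node/article?filter[byTitle][condition][path]=title
                         &filter[byTitle][condition][operator]=%3D
                         &filter[byTitle][condition][value]=Foo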

So … now is the time to update to 2.0-RC3. We'd love the next release of JSON:API to be the final 2.0 release!

P.S.: if you want fixes to land quickly, follow dagmar's example:

If you don't know how to fix a bug of a #drupal module, providing a failing test usually is really helpful to guide project maintainers. Thanks! @GabeSullice and @wimleers for fixing my bug report https://t.co/bEkkjSrE8U

- Mariano D'Agostino (@cuencodigital) December 11, 2018


13 Dec 2018 12:58pm GMT

Xavier Mertens: [SANS ISC] Phishing Attack Through Non-Delivery Notification

I published the following diary on isc.sans.edu: "Phishing Attack Through Non-Delivery Notification":

Here is a nice example of a phishing attack that I found while reviewing data captured by my honeypots. We all know that phishing is a pain, and attackers are always searching for new tactics to entice the potential victim to click on a link, disclose personal information or more… [Read more]

[The post [SANS ISC] Phishing Attack Through Non-Delivery Notification has been first published on /dev/random]

13 Dec 2018 11:52am GMT

Dries Buytaert: Acquia highway billboards

If you're driving into Boston, you might notice something new on I-90. Acquia has placed ads on two local billboards; more than 120,000 cars drive past these billboards every day. This is the first time in Acquia's eleven years that we've taken out a highway billboard and dipped our toes into more traditional media advertising. Personally, I find that exciting, because it means that more and more people will be introduced to Acquia. If you find yourself on the Mass Pike, keep an eye out!

Billboard

13 Dec 2018 3:01am GMT

12 Dec 2018


Mattias Geniar: Our Gitlab CI pipeline for Laravel applications – Oh Dear! blog


We've built an extensive Gitlab CI Pipeline for our testing at Oh Dear! and we're open sourcing our configs.

This can be applied to any Laravel application and will significantly speed up configuring your own pipeline if you're just getting started.

We're releasing our Gitlab CI pipeline that is optimized for Laravel applications.

It contains all the elements you'd expect: building (composer, yarn & webpack), database seeding, PHPUnit & copy/paste (mess) detectors & some basic security auditing of our 3rd party dependencies.

Source: Our Gitlab CI pipeline for Laravel applications -- Oh Dear! blog
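
As a taste of what such a pipeline can look like, here is a minimal sketch (my own illustration, not the actual Oh Dear! configuration; image versions, job names and credentials are assumptions):

stages:
  - build
  - test

build:
  stage: build
  image: php:7.2
  script:
    - composer install --prefer-dist --no-interaction
    - yarn install && yarn run production   # webpack assets via Laravel Mix

phpunit:
  stage: test
  image: php:7.2
  services:
    - mysql:5.7
  variables:
    MYSQL_DATABASE: testing
    MYSQL_ROOT_PASSWORD: secret
    DB_HOST: mysql   # point Laravel's testing env at the service container
  script:
    - composer install --prefer-dist --no-interaction
    - php artisan migrate --seed   # database seeding
    - vendor/bin/phpunit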

The post Our Gitlab CI pipeline for Laravel applications - Oh Dear! blog appeared first on ma.ttias.be.

12 Dec 2018 7:34pm GMT

Dries Buytaert: Plan for Drupal 9

At Drupal Europe, I announced that Drupal 9 will be released in 2020. Although I explained why we plan to release in 2020, I wasn't very specific about when we plan to release Drupal 9 in 2020. Given that 2020 is less than thirteen months away (gasp!), it's time to be more specific.

Shifting Drupal's six month release cycle

[Timeline: how we shifted Drupal 8's release windows] We shifted Drupal 8's minor release windows so we can adopt Symfony's releases faster.

Before I talk about the Drupal 9 release date, I want to explain another change we made, which has a minor impact on the Drupal 9 release date.

As announced over two years ago, Drupal 8 adopted a 6-month release cycle (two releases a year). Symfony, a PHP framework which Drupal depends on, uses a similar release schedule. Unfortunately, Drupal's releases have historically fallen 1-2 months before Symfony's, which forces us to wait six months to adopt the latest Symfony release. To be able to adopt the latest Symfony releases faster, we are moving Drupal's minor releases to June and December. This will allow us to adopt the latest Symfony releases within one month. For example, Drupal 8.8.0 is now scheduled for December 2019.

We hope to release Drupal 9 on June 3, 2020

Drupal 8's biggest dependency is Symfony 3, which has an end-of-life date in November 2021. This means that after November 2021, security bugs in Symfony 3 will not get fixed. Therefore, we have to end-of-life Drupal 8 no later than November 2021. Or put differently, by November 2021, everyone should be on Drupal 9.

Working backwards from November 2021, we'd like to give site owners at least one year to upgrade from Drupal 8 to Drupal 9. While we could release Drupal 9 in December 2020, we decided it was better to try to release Drupal 9 on June 3, 2020. This gives site owners 18 months to upgrade. Plus, it also gives the Drupal core contributors an extra buffer in case we can't finish Drupal 9 in time for a summer release.

[Timeline: planned Drupal 8 and 9 minor release dates, with Drupal 9 hoped for in June 2020]

We are building Drupal 9 in Drupal 8

Instead of working on Drupal 9 in a separate codebase, we are building Drupal 9 in Drupal 8. This means that we are adding new functionality as backwards-compatible code and experimental features. Once the new code becomes stable, we deprecate the old functionality it replaces.
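
In code, such a deprecation typically follows Drupal core's convention and looks roughly like this (a hedged sketch; the function and class names are hypothetical):

<?php
/**
 * Returns the thing (hypothetical example).
 *
 * @deprecated in Drupal 8.7.x, will be removed before Drupal 9.0.0.
 *   Use \Drupal\mymodule\NewThing::get() instead.
 */
function mymodule_get_thing() {
  // The deprecation error tells module authors what to change, while the
  // old function keeps working throughout Drupal 8.
  @trigger_error('mymodule_get_thing() is deprecated in Drupal 8.7.x and will be removed before Drupal 9.0.0. Use \Drupal\mymodule\NewThing::get() instead.', E_USER_DEPRECATED);
  return \Drupal\mymodule\NewThing::get();
}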

Let's look at an example. As mentioned, Drupal 8 currently depends on Symfony 3. Our plan is to release Drupal 9 with Symfony 4 or 5. Symfony 5's release is less than one year away, while Symfony 4 was released a year ago. Ideally Drupal 9 would ship with Symfony 5, both for the latest Symfony improvements and for longer support. However, Symfony 5 hasn't been released yet, so we don't know the scope of its changes, and we will have limited time to try to adopt it before Symfony 3's end-of-life.

We are currently working on making it possible to run Drupal 8 with Symfony 4 (without requiring it). Supporting Symfony 4 is a valuable stepping stone to Symfony 5: it brings new capabilities to sites that choose to use it, and it reduces the amount of Symfony 5 upgrade work for Drupal core developers. In the end, our goal is for Drupal 8 to work with Symfony 3, 4 or 5 so we can identify and fix any issues before we start requiring Symfony 4 or 5 in Drupal 9.

Another example is our support for reusable media. Drupal 8.0.0 launched without a media library. We are currently working on adding a media library to Drupal 8 so content authors can select pre-existing media from a library and easily embed them in their posts. Once the media library becomes stable, we can deprecate the use of the old file upload functionality and make the new media library the default experience.

The upgrade to Drupal 9 will be easy

Because we are building Drupal 9 in Drupal 8, the technology in Drupal 9 will have been battle-tested in Drupal 8.

For Drupal core contributors, this means that we have a limited set of tasks to do in Drupal 9 itself before we can release it. Releasing Drupal 9 will only depend on removing deprecated functionality and upgrading Drupal's dependencies, such as Symfony. This will make the release timing more predictable and the release quality more robust.

For contributed module authors, it means they already have the new technology at their service, so they can work on Drupal 9 compatibility earlier (e.g. they can start updating their media modules to use the new media library before Drupal 9 is released). Finally, their Drupal 8 know-how will remain highly relevant in Drupal 9, as there will not be a dramatic change in how Drupal is built.

But most importantly, for Drupal site owners, this means that it should be much easier to upgrade to Drupal 9 than it was to upgrade to Drupal 8. Drupal 9 will simply be the last version of Drupal 8, with its deprecations removed. This means we will not introduce new, backwards-compatibility breaking APIs or features in Drupal 9 except for our dependency updates. As long as modules and themes stay up-to-date with the latest Drupal 8 APIs, the upgrade to Drupal 9 should be easy. Therefore, we believe that a 12- to 18-month upgrade period should suffice.

So what is the big deal about Drupal 9, then?

The big deal about Drupal 9 is … that it should not be a big deal. The best way to be ready for Drupal 9 is to keep up with Drupal 8 updates. Make sure you are not using deprecated modules and APIs, and where possible, use the latest versions of dependencies. If you do that, your upgrade experience will be smooth, and that is a big deal for us.

Special thanks to Gábor Hojtsy (Acquia), Angie Byron (Acquia), xjm (Acquia), and catch for their input in this blog post.

12 Dec 2018 1:13pm GMT

11 Dec 2018


Frank Goossens: Async Javascript plugin birthday update

Maybe not as exciting as WordPress 5 or even Autoptimize, but for my birthday I released a new version of Async JavaScript, which can now be configured not to async for logged-in users or on shop cart/checkout pages:

Enjoy :-)


11 Dec 2018 1:32pm GMT

10 Dec 2018


Xavier Mertens: Nominated for the IT Blog Awards

This morning, I received a mail from Cisco telling me that I've been nominated as a finalist for their IT Blog Awards (category: "Most Inspirational"). I maintain this blog just for fun and to share useful (I hope) information with my readers; I don't do this to get rewards, but it's always nice to get such feedback. The final competition is now open, so if you have a few minutes, just vote for me!

Votes are open here. Thank you!

[The post Nominated for the IT Blog Awards has been first published on /dev/random]

10 Dec 2018 7:10am GMT

08 Dec 2018


Dries Buytaert: The ebbs and flows of software organizations

This week I was in New York for a day. At lunch, Sir Martin Sorrell pointed out that Microsoft overtook Apple as the most valuable technology company as measured by market capitalization. It's a close call, but Microsoft is now worth $805 billion while Apple is worth $800 billion.

What is interesting to me are the radical "ebbs and flows" of each organization.

In the 80's, Apple's market cap was twice that of Microsoft. Microsoft overtook Apple in the early 90's, and by the late 90's, Microsoft's valuation was a whopping thirty-five times Apple's. With a 35x difference in valuation, no one would have guessed Apple would ever regain the number-one position. However, Apple did the unthinkable and regained its crown in market capitalization. By 2015, Apple was, once again, valued two times more than Microsoft.

And now, eighteen years after Apple took the lead, Microsoft has taken the lead again. Everything old is new again.

As you'd expect, the change in market capitalization corresponds with the evolution and commercial success of their product portfolios. In the 90s, Microsoft took the lead based on the success of the Windows operating system. Apple regained the crown in the 2000s based on the success of the iPhone. Today, Microsoft benefits from the rise of cloud computing, Software-as-a-Service and Open Source, while Apple is trying to navigate the saturation of the smartphone market.

It's unclear if Microsoft will maintain and extend its lead. On one hand, the market trends are certainly in Microsoft's favor. On the other hand, Apple still makes a lot more money than Microsoft. I believe Apple to be slightly undervalued and Microsoft to be overvalued; the current valuation difference is not justified.

At the end of the day, what I find most interesting is how both organizations have continued to reinvent themselves. This reinvention has happened roughly every ten years. During these periods of reinvention, organizations can fall out of favor for long stretches of time. However, as both organizations prove, it pays off to reinvent yourself, and to be a patient product and market builder.

08 Dec 2018 6:09pm GMT

07 Dec 2018


Xavier Mertens: Botconf 2018 Wrap-Up Day #3

And the conference is over! I'm flying back home tomorrow morning, so I have time to write my third wrap-up. The last day of the conference is always harder for many attendees due to the late parties, but I was present on time to attend the last set of presentations. The first one was presented by Wu Tiejun and Zhao Guangyan: "WASM Security Analysis Reverse Engineering". They started with an introduction to WASM, or "WebAssembly": a portable technology deployed in browsers which provides an efficient binary format available on many different platforms. An interesting URL they mentioned is WAVM, a standalone VM for WebAssembly. They also covered CVE-2018-4121. I'm sorry for the lack of detail, but the speakers were reading their slides and it was very hard to follow them. Sad, because I'm sure they have deep knowledge of this technology. If you're interested, have a look at their slides once published, or here is another resource.
The next speaker was Charles IBRAHIM, who presented an interesting usage of a botnet. The title of his presentation was "Red Teamer 2.0: Automating the C&C Set up Process". Botconf is a conference dedicated to fighting botnets, but this time it was about building one! By definition, a botnet can be very useful: sharing resources, executing commands on remote hosts, collecting data, etc. All those operations can be very interesting while conducting red team exercises. Indeed, red teams need tools to perform operations in a smooth way and have to remain below the radar. There are plenty of tools available for red teamers, but there is a lack of aggregation. Charles presented the botnet they developed. The goal is to reduce the time required to build the infrastructure, to easily execute common actions, to log operations and to reduce OPSEC risks. On the C&C side, it provides user authentication, logging capabilities, remote agent deployment and administration, and covers various communication techniques. The steps of the red team process were reviewed:
A very interesting approach to an alternative use of a botnet.
Then, Rommel Joven came on stage to talk about Mirai: "Beyond the Aftermath". Mirai was a huge botnet that affected many IoT devices. Since it was discovered, what has happened? Mirai was used to DDoS major sites like krebsonsecurity.com, Twitter, Spotify, Reddit, etc. Later, the source code was released. Why does it affect IoT devices?
  • Easy to exploit
  • 24/7 availability
  • Powerful enough for DDoS
  • Rarely monitored / patched
  • Low security awareness
  • Malware source code available for reuse
The last point was the key of the presentation. Rommel explained that, since the leak, many new malware families have been developed reusing functions or parts of the code present in Mirai. 80K samples were detected in 2018 so far, 49% of them sharing code with Mirai. Malware developers are like common developers: why reinvent the wheel if you can borrow some code somewhere else? Rommel then reviewed some malware samples known to reuse the Mirai code (or at least part of it):
  • Hajime: same user/password combinations
  • IoTReaper: use of 9 exploits for infection, LUA integration
  • Persirai/Http81: borrows the port scanner and utility functions from Mirai; similar strings were found
  • BashLite: MiraiScanner(), MiraiIPRanges(), …
  • HideNSeek: configuration table similarity, utility functions, data exfiltration capability
  • ADB.Miner: port scanning from Mirai, adds a Monero miner
When you deploy a botnet, the key is to monetize it. How is this achieved with Mirai and its alternatives? By performing cryptomining operations, stealing ETH coins, or installing proxies and booters.
Let's continue with Piotr BIAŁCZAK, who presented "Leaving no Stone Unturned - in Search of HTTP Malware Distinctive Features". The idea behind Piotr's research is to analyze HTTP requests to identify which ones are performed by a regular browser and which by malware (Windows samples), and then to build families. The research was based on a huge number of PCAP files that were analyzed through the following process:
PCAP > HTTP request > IDS > SID assigned to request > Analysis > Save to the database
Data sources for the PCAP files were CERT Polska's sandbox system and mcfp.felk.cvut.cz. Regular HTTP traffic was generated via popular browsers accessing the Alexa top 500 through Selenium. As for numbers: 36K+ PCAP files were analyzed and 2.5M+ alerts generated. Malware traffic came from many known families like Locky, Zbot, Ursif, Dreambot, Pony, Nemucod, … 172 families were identified in total, with 19% of requests coming from unknown malware. To analyze the results, they searched for errors, for features inherent to malicious operations (example: obfuscation), and for features which reflect differences in data exchange.
About headers, the interesting findings were:
About payloads:
Some of the findings:
The research was interesting, but I don't see why a malware developer would craft malformed HTTP requests instead of just using a standard library to make them.
Yoshihiro ISHIKAWA & Shinichi NAGANO presented "Let's Go with a Go RAT!". The WellMess malware is written in Go and was not detected by AV engines before June 2018 (Mirai is one of the famous malware families written in this language). They performed a deep review of WellMess:
They performed a live demo of the botnet and C&C communications. A very deep analysis! They also provided Suricata IDS and YARA rules to detect the malware (check the slides).
After the lunch break, James Wyke presented "Tracking Actors through their Webinjects". He started with a recap about banking malware and webinjects: they are not simple, because web apps are complex, and off-the-shelf solutions are available. The idea of the research: can we classify malware families based on their webinjects? Some have been popular for years (Zeus, Gozi). James reviewed many webinjects:

For each of them, he gave details like the targets, origin, explanation of the name and a YARA rule to detect them and many more information.

Then Łukasz Siewierski presented "Triada: the Past, the Present, the (Hopefully not Existing) Future". He explained in detail the history of the Triada malware, present in many Android smartphones. It was discovered in 2016 but evolved over time.
Matthieu Faou presented "The Snake Keeps Reinventing Itself". It was a very nice overview of the Turla espionage group. A lot of details were provided, especially about the exploitation of Outlook. I won't give more details here; have a look at my wrap-up from Hack.lu 2018, where Matthieu gave the same presentation.
Finally, the schedule was completed with Ya Liu's presentation: "How many Mirai variants are there?". Again a presentation about Mirai and the alternative malware that reuses its source code. There was some overlap with Rommel's presentation (see above), but the approach was more technical. Ya explained how to automate the extraction of configurations, attack methods and dictionaries. From 21K analyzed samples, they extracted configurations and attack methods. Based on these data, they created five classification schemes. More info was also published here.
As usual, there was a small closing ceremony with more information about this edition: 26 talks for a total of 1080(!) minutes, and 400 attendees coming from all over the world. Note the dates of the 2019 edition already: 3-6 December. The event will be organized in Bordeaux!

[The post Botconf 2018 Wrap-Up Day #3 has been first published on /dev/random]

07 Dec 2018 5:08pm GMT

Claudio Ramirez: Quo vadis, Perl?

Crossroads

Photo by Carsten Tolkmit

We've had a week of heated discussion within the Perl 6 community. It is the type of debate where everyone seems to lose. It is not the first time we've done this and it certainly won't be the last; it seems to me that we have one of those about every six months. I decided not to link to the many reiterations of the debate in order not to feed the fire.

Before defining sides in the discussion, it is important to identify the problems that drive the fears and hopes of the community. I don't think that the latest round of discussions was about the Perl 6 alias (Raku) in itself, but rather about the best strategy to answer two underlying problems:

These are my observations and I don't present them as facts set in stone. However, to me, they are the two elephants in the room. As an indication, we could refer to the likes of TIOBE, where Perl 5 fell out of the top 10 in just 5 years (from 8th to 16th; see "Very Long Time History" at the bottom), or compare the Github stars and Reddit subscribers of Perl 5 and 6 with languages at the same level of popularity on TIOBE.

Perl 5 is not really active on Github and the code mirror there has, as of today, only 378 stars. Rakudo, developed on Github, understandably has more: 1,022. On Reddit, 11,726 people subscribe to Perl threads (mostly Perl 5) and only 1,367 to Perl 6. By comparison, Go has 48,970 stars on Github and 57,536 subscribers on Reddit. CPython, Python's main implementation, has 20,724 stars on Github and 290,308 Reddit subscribers. Or put differently: if you don't work in a Perl shop, can you remember when a young colleague last knew about Perl 5 other than by reputation? Have you met people in the wild who code in Perl 6?

Maybe this isn't your experience or maybe you don't care about popularity and adoption. If this is the case, you probably shrugged at the discussions, frowned at the personal attacks and just continued hacking. You plan to reopen IRC/Twitter/Facebook/Reddit when the dust settles. Or you may have lost your patience and moved on to a more popular language. If this is the case, this post is not for you.

I am under the impression that the participants of what I would call "cyclical discussions" *do* agree with the evaluation of the situation of Perl 5 and 6. What is discussed most of the time is clearly not a technical issue. The arguments reflect different -and in many cases opposing- strategies to alleviate the aforementioned problems. The strategies I can uncover are as follows:

Perl 5 and Perl 6 carry on as they have for longer than a decade and a half

Out of inertia, this status-quo view is what we've seen in practice to date. While this vision honestly subscribes to the "sister languages narrative" (Perl is made up of two languages, in order to explain the weird major version situation), it doesn't address the perceived problems; it chooses to ignore them. The flip side is that with every real or perceived threat to the status quo, the debate resurges.

Perl 6 is the next major version of Perl

This is another status-quo view. The "sister languages narrative" is dead: Perl 6 is the next major version of Perl 5. While a lot of work is needed to make this happen, it's work that is already happening: making Perl 6 fast. The target, however, is defined by Perl 5: it must be faster than the previous release. Perl consists of many layers and VMs are interchangeable: it's culturally still Perl if you replace the Perl 5 runtime with one from Perl 6. This view is not well received by many people in the Perl 5 community, and certainly not by those emotionally or professionally invested in "Perl", with "Perl" most of the time meaning Perl 5.

Both Perl 5 and Perl 6 are qualified/renamed

This is a renaming view that looks for a compromise between both communities. The "sister languages narrative" is the real-world experience, and both languages can stand on their own feet while being one big community. By renaming both projects and keeping Perl in the name (e.g. Rakudo Perl, Pumpkin Perl), the investment in the Perl name is kept, while the next-major-version dilemma is dissolved. However, this strategy is not an answer for those in the Perl 6 community who feel that the (unjustified) reputation of Perl 5 is hurting Perl 6's adoption. On the Perl 5 side, there is some resentment about why good old "Perl" needs to be renamed when Perl 6 is the newcomer.

Rename Perl 6

Perl 6's adoption is hindered by Perl 5's reputation and, at the same time, Perl 6's major number "squatting" places Perl 5 in limbo. The "sister language narrative" is the real-world situation: Perl 5 is not going away, and it should not. The unjustified reputation Perl 5 has with some people is not something Perl 6 needs to fix; only action from the Perl 6 community is required in this view. However, a "sisterly" rename will benefit Perl 5: liberating the next major version will not fix Perl 5's decline, but it may be a small piece of the puzzle of the recovery. Renaming will result in more loosely coupled communities, but Perl communities are not mutually exclusive and the relation may improve without the version dilemma. The "sister language narrative" becomes a proud origin story. Mostly Perl 6 people heavily invested in the Perl *and* the Perl 6 brand opposed this strategy.

Alias Perl 6

While very similar to the strategy above, this view is less ambitious: it only addresses the concern that Perl 6's adoption is hindered by Perl 5's reputation. It's up to Perl 5 to fix its major version problem. It's a compromise between (a number of) people in the Perl 6 community. It may or may not be a way to test whether an alias catches on; the renaming of Perl 6 should stay on the table.

-

Every single strategy will result in people being angry or disappointed, because they honestly believe it hurts the strategy that they feel is necessary to alleviate Perl's problems. We need to acknowledge that the fears and hopes are genuine and often related. Without going into detail, so as not to reignite the fire (again), the tone of many of the arguments I heard this week from people opposing the Raku alias rang very close, to me, to the arguments Perl 5 users have against the Perl 6 name: being a victim of injustice at the hands of people who don't care about an investment of years, and a feeling of not being listened to.

By losing sight of the strategies in play, I feel the discussion degenerated very early into personal accusations that certainly leave scars while not resulting in even a hint of progress. We are not unique in this situation; see the recent example of the toll it took on Guido van Rossum. I can only sympathize with how Larry must be feeling these days.

While the heated debates may continue for years to come, it's important to keep an eye on people that silently leave. The way to irrelevance is a choice.

(I disabled comments on this entry; feel free to discuss it on Reddit or the like. However, respect the tone of this message and refrain from personal attacks.)

07 Dec 2018 1:15pm GMT

06 Dec 2018


Xavier Mertens: Botconf 2018 Wrap-Up Day #2

I'm just back from the reception that was held at the Cité de l'Espace, such a great place with animations and exhibitions of space-related devices. It's time for my wrap-up of the second day. This morning, after a coffee refill, the first talk of the day was given by Jose Miguel ESPARZA: "Internals of a Spam Distribution Botnet". This talk had content flagged as a mix of TLP:Amber and TLP:Red, so no disclosure here. Jose started with an introduction about well-known spam distribution botnets like Necurs or Emotet: their features, some volumetric statistics and how they behave. Then, he dived into a specific one, Onliner, well known for its huge trove of email accounts: 711 million! He reviewed how the bot works, how it communicates with its C&C infrastructure, the panel, and the people behind the botnet. A nice review and a lot of useful information! The conclusion was that spam bots remain a threat: they are not only used to send spam but also to deliver malware.

Then, Jan SIRMER & Adolf STREDA came on stage to present "Botception: Botnet distributes script with bot capabilities". They presented their research about a bot acting like in the movie "Inception": a bot that distributes a script that acts like… a bot! The "first" bot is Necurs, which was discovered in 2012. It's one of the largest botnets and was used to distribute huge amounts of spam as well as other malware campaigns like ransomware. They explained how the bot behaves and, especially, how it communicates with its C&C servers. In the second part, they explained how they created a tracker to learn more about the botnet. Based on the results, they discovered the infection chain:

Spam email > Internet shortcut > VBS control panel > C&C > Download & execute > Flawed Ammyy.

The core component that was analyzed is the VBS control panel, which is accessed via SMB (file://) from an Internet shortcut file. Thanks to the SMB protocol, they got access to all the files and also grabbed payloads in advance! The behaviour is classic: hardcoded C&C addresses, features like install, upgrade, kill or execute, and a watchdog. Interestingly, the code was properly documented, which is rare for malicious code. Different versions were compared. At the end, the malware drops a RAT: Flawed Ammyy.
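
For illustration, such an Internet shortcut file looks roughly like this (a hypothetical reconstruction; the host and path are made up):

[InternetShortcut]
URL=file://203.0.113.5/share/panel.vbs

When Windows resolves the file:// URL, the request goes out over SMB, which is what let the researchers browse the share and grab the payloads in advance.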

The next talk was "Stagecraft of Malicious Office Documents - A Look at Recent Campaigns", presented by Deepen DESAI, Tarun DEWAN & Dr. Nirmal SINGH. Malicious Office documents (or "maldocs") have been a very common infection vector for a while, but how do they evolve over time? The speakers focused their research on analyzing many maldocs. Today, approximately 1 million documents are exchanged daily in enterprise transactions. The typical infection path is:

Maldoc > Social engineering > Execute macro > Download & execute payload

Why "Social engineering"? Since Office 2007, macros are disabled by default and the attacker must use techniques to lure the victim and force him/her to disable this default protection.

They analyzed ~1200 documents with low AV detection rates (both manually and in sandboxes). They looked at URLs, filenames, time frames and obfuscation techniques. What are the findings? They categorized the documents into campaigns that were reviewed one by one:

Campaign 1: "AppRun" - because the macros used Application.Run
Campaign 2: "ProtectedMacro" - because the PowerShell code was stored in document elements like text boxes
Campaign 3: "LeetMX" - because leet text encoding was used
Campaign 4: "OverlayCode" - because encrypted PowerShell code is accessed using bookmarks
Campaign 5: "xObjectEnum" - because the macro code used enum values from different built-in classes in VBA objects
Campaign 6: "PingStatus" - because the documents used the Win32_PingStatus WMI class to detect sandboxes (pinging microsoft.com and %userdomain%)
Campaign 7: "Multiple embedded macros" - because malicious RTFs contained multiple embedded Excel sheets
Campaign 8: "HideInProperty" - because PowerShell code was hidden in the document properties
Campaign 9: "USR-KL" - because specific User-Agents were used: USR-KL & TST-DC

This was a very nice study and recap about malicious documents.

Then, Tom Ueltschi came to present "Hunting and Detecting APTs using Sysmon and PowerShell Logging". Tom is a recurring speaker at Botconf and always presents interesting material for hunting bad guys. This time, he came with new recipes (based on Sigma!). As he explained, to be able to track bad behaviour, it's mandatory to prepare your environment for investigations (log everything, but also enable specific features like auditing and PowerShell module, script block and transcription logging). The MITRE ATT&CK framework was used as a reference in Tom's presentation. He reviewed three techniques that deserve to be detected:

For each technique, Tom described what to log and how to search the events to spot the bad guys. The third technique was covered in more depth, with more examples to track many common evasion techniques. They are not easy to describe in a few lines here; my recommendation, if you are dealing with this kind of environment, is to have a look at Tom's slides. He usually publishes them quickly. An excellent talk, as usual!
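
To give an idea of what such a recipe looks like, here is a minimal Sigma rule sketch (my own illustration, not one of Tom's rules; the detection string is an assumption):

title: PowerShell download cradle (illustrative)
status: experimental
logsource:
    product: windows
    service: powershell
detection:
    selection:
        ScriptBlockText|contains: 'Net.WebClient'
    condition: selection
level: medium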

Rustam Mirkasymov's talk was the last one of the first half-day: "Hunting for Silence". With no abstract given, I was expecting a presentation on threat hunting. Nope, it was a review of the "Silence" trojan, which targeted financial institutions in Ukraine in 2017. After a first analysis, the trojan was attributed to APT28, but that was not the case: the attacker did not have the exploit builder but was able to modify an existing sample. Rustam did a classic review of the malware: available commands, communications with the C&C infrastructure, persistence mechanism, … An interesting common point of many presentations at this edition: slides usually contained some mistakes made by the malware developers.

After the lunch break, the keynote was given by Colonel Jean-Dominique Nollet from the French Gendarmerie. The title was "Cybercrime fighting in the Gendarmerie". He explained the role of law enforcement authorities in France and how they work to improve the security of all citizens. This is not an easy task, because they have to explain very technical topics (like botnets!) to non-technical people (citizens as well as other members of the Gendarmerie). Their missions are:

Coordination is key! For example, to fight against child pornography, they have a database of 11 million pictures that can help identify victims or bad guys. They have already rescued eight children! The evolution cycle is also important:

Information > R&D > Experience > Validate > Industrialize

The key is speed! Finally, another key point was a call for more collaboration between security researchers and law enforcement.

The next speaker was Dennis Schwarz, who presented "Everything Panda Banker". The name comes from references to Panda in the code and the control panel. The first sample was found in 2016, uploaded from Norway, but the malware is still alive: new releases were found until June 2018. Dennis explained the protections in place, like Windows API calls resolved via a hash function (an obfuscation technique) and encrypted strings, how configurations are stored, the DGA mechanism, and other features like Man-in-the-Browser and web injects. Good content with a huge amount of data that deserves to be re-read, because the talk was given at light speed! I didn't even have time to read all the information present on each slide!

Thomas Siebert came to present "Judgement Day". Here again, no abstract was provided. The content of the talk was amazing but released as TLP:Red, sorry! Trust me, it was awesome!

After the afternoon break, Romain Dumont and Hugo Porcher presented "The Dark Side of the ForSSHe". The presentation covered the Windigo operation, well known for attacking UNIX servers through an SSH backdoor, the Ebury malware. Once connected to the victim, the bot used a Perl script piped through the connection (so, without any file stored on disk). They deployed honeypots to collect samples and review the scripts' features. The common OpenSSH backdoor features found are:

Then, they reviewed specific families like:

From a remediation perspective, the advice is always the same: use keys instead of passwords, disable root login, enable 2FA, and monitor file descriptors and outbound connections from the SSH daemon.
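
In sshd_config terms, the first recommendations boil down to a few standard OpenSSH directives (a minimal sketch):

# /etc/ssh/sshd_config (excerpt)
PubkeyAuthentication yes
PasswordAuthentication no
PermitRootLogin no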

The day ended with the classic lightning talks session. The principle remains the same: 3 minutes max and any topic (but related to malware, botnets or security in general). Here is a quick list of covered topics:

The best lightning talk was (according to the audience) the TLP:Red one (it was crazy!). I really liked boot_check.py, a simple Python script that can detect if your computer was rebooted without your consent.

That's all for today, see you tomorrow for the third wrap-up!

[The post Botconf 2018 Wrap-Up Day #2 has been first published on /dev/random]

06 Dec 2018 11:27pm GMT

Mattias Geniar: PHP 7.3.0 Release Announcement


A new PHP release is born: 7.3!

The PHP development team announces the immediate availability of PHP 7.3.0. This release marks the third feature update to the PHP 7 series.

PHP 7.3.0 comes with numerous improvements and new features such as:

-- Flexible Heredoc and Nowdoc Syntax
-- PCRE2 Migration
-- Multiple MBString Improvements
-- LDAP Controls Support
-- Improved FPM Logging
-- Windows File Deletion Improvements
-- Several Deprecations

Source: PHP: PHP 7.3.0 Release Announcement
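
To illustrate the first item in that list: with the flexible heredoc syntax, the closing marker may now be indented, and its indentation is stripped from every line of the body (a small sketch of mine, not taken from the announcement):

<?php
// PHP 7.3: the closing heredoc marker may be indented; that indentation
// is removed from every line of the string's body.
function greet(string $name): string
{
    return <<<EOT
        Hello {$name},
        welcome to PHP 7.3!
        EOT;
}

echo greet('world'); // "Hello world,\nwelcome to PHP 7.3!"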

The post PHP 7.3.0 Release Announcement appeared first on ma.ttias.be.

06 Dec 2018 4:11pm GMT

05 Dec 2018


Xavier Mertens: Botconf 2018 Wrap-Up Day #1

Here is my first wrap-up for the 6th edition of the Botconf security conference. Like the previous editions, the event is organized in a different location in France; this year, the beautiful city of Toulouse saw 400 people flying in from all over the world to attend the conference dedicated to botnets and how to fight them. Attendees came from many countries (USA, Canada, Brazil, Japan, China, Israel, etc.). The opening session was performed by Eric Freyssinet. Same rules as usual: no harassment, and respect for the TLP policy. Let's start with the review of the first talks.

No keynote on the first day (the keynote speaker is scheduled tomorrow). The first talk was assigned to Emilien LE JAMTEL from the CERT EU. He presented his research about cryptominers: "Swimming in the Monero Pools". Attackers have two key requirements: obfuscation of data and efficient mining on all kinds of hardware. Monero, being obfuscated by default and not requiring specific ASICs, is a nice choice for attackers; even a smartphone can be used as a miner. Criminals are very creative in dropping miners everywhere, but the common attacks remain phishing (emails) and exploiting vulnerabilities in applications (like WebLogic). Emilien explained how he hunts for new samples. He wrote a bunch of scripts (available here) as well as YARA rules. Once the collection process is done, he extracts information like hardcoded wallet addresses and searches for outbound connections to mining pools. So far, he has collected 15K samples and is able to generate IOCs: C2 communications, persistence mechanisms, specific strings and TTPs. The next step was to explain how you can deobfuscate data hidden in the code and configuration files (config.js or global.js). He concluded with funnier examples, like malware samples that killed themselves or one that contained usernames in the compilation path of the source code. A nice topic to smoothly start the day.
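
As an illustration of what such a rule might look for, here is a sketch of mine (not one of Emilien's actual rules; the strings are assumptions based on common miner artifacts):

rule generic_monero_miner
{
    strings:
        // Mining pool protocol scheme and a popular miner name
        $stratum = "stratum+tcp://" ascii
        $miner   = "xmrig" nocase
        // Standard Monero addresses: 95 base58 characters starting with '4'
        $wallet  = /4[1-9A-HJ-NP-Za-km-z]{94}/
    condition:
        2 of them
}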

The next talk was given by Aseel KAYAL: "APT Attack against the Middle East: The Big Bang". She gave many details about a malware sample her team found targeting the Middle East. The campaign was attributed to APT-C-23, a threat group targeting Palestinians. She explained in a very educational way how the malware was delivered and how it infects the victim's computer: it came as a fake Word document that was in fact a self-extracting archive containing a decoy document and a malicious PE file. She gave details about the malware itself, then more about the "context": it was called "The Big Bang" due to its unusual module names. Aseel and her team also tracked the people behind the campaign and found many references to TV shows. It was a nice presentation, not simply delivering (arte)facts but also telling a story.

Daniel PLOHMANN presented the Malpedia project at Botconf last year (see my previous wrap-up). This year, he came back with news about the project and how it evolved over 12 months. The presentation was called "Code Cartographer's Diary". The platform now has 850 users and 2900+ contributions. The new version has a REST API (which helps to integrate Malpedia with third-party tools like TheHive - just saying). The second part of the talk was based on ApiScout. This tool helps to detect how the Windows API is used in malware samples. Based on many samples, Daniel gave statistics about API usage. If you don't know Malpedia, have a look; it's an interesting tool for security analysts and malware researchers.

The next speaker was Renato MARINHO, a fellow SANS Internet Storm Center handler, who presented "Cutting the Wrong Wire: how a Clumsy Attacker Revealed a Global Cryptojacking Campaign". This was the second talk about cryptominers in half a day. After a quick recap about this kind of attack, Renato explained how he discovered a new campaign affecting servers. During the analysis of a timeline, he found suspicious files in /tmp (config.json) as well as a binary file. This binary was running with the privileges of the WebLogic server on the box, which had been compromised using a WebLogic exploit. He tracked the attacker using the hardcoded wallet address found in the binary: the bad guy generated $204K in two months! How was the malware detected? Due to a stupid mistake by the developer, the malware automatically killed running Java processes… so the WebLogic application too!

After the lunch break, Brett STONE-GROSS & Tillmann WERNER presented "Chess with Pyotr". This talk was a resume of a blog post they published. Basically, they reviewed previous botnets like Storm Worm, Waledac, Storm 2.0 and… Kelihos, and gave multiple details about them. Kelihos offered many services: spam, credential theft, DDoS, fast-flux DNS, click fraud, SOCKS proxying, mining, and pay-per-install (PPI). The last part of the talk was dedicated to attribution: the main threat actor behind this botnet is Peter Yuryevich Levashov, a member of an underground forum where he communicated about his botnet.

Then, Rémi JULLIAN came to present "In-depth Formbook Malware Analysis". In-depth was really the keyword of the presentation! FormBook is a well-known malware that is very popular and still active. It targets 92(!) different applications (via a password stealer or form grabber) and is also offered on demand in a MaaS ("Malware as a Service") model; the price for a full version is around $29/week. This malware is often in the top 10 of threats detected by security solutions like sandboxes. Rémi reviewed the multiple anti-analysis techniques deployed by FormBook: string obfuscation and encryption, manually mapping NTDLL (to defeat tools like Cuckoo), checks for debuggers, checks for inline hooks, etc. The code injection and process hollowing techniques were also explained. Feature-wise, there is browser hooking to access data before it is encrypted, a keylogger, a clipboard data stealer, and password harvesting from the filesystem. Communication with the C&C was also explained. Interesting finding: FormBook uses fake C&C servers during sandbox analysis to mislead the analyst. This was a great presentation, full of useful details!

The next speaker was Antoine REBSTOCK, who presented "How Much Should You Pay for your own Botnet?". This was not a technical presentation (though it had plenty of mathematical formulas) but more of a legal one. The idea presented by Antoine was interesting: let's assume that we decide to build a botnet to DDoS a target; what would be the total price (hosting, bandwidth, etc.)? After the theory, he compared different providers: Orange, Amazon, Microsoft and Google. Even if the approach is not easy to put in the context of a real attacker, the idea was interesting. But way too many formulas for me 😉

After the welcome coffee break, Jakub SOUČEK & Jakub TOMANEK presented "Collecting Malicious Particles from Neutrino Botnets". The Neutrino bot is not new: it was discovered in 2014 but is still alive today, with many changes. Lots of articles have been written about this botnet but, according to the speakers, some information was missing, like how the bot behaves during investigation and how configuration files are received. Many bots are still running in parallel and they wanted to learn more about them. Newly introduced features are: a modular structure, obfuscated API calls, a network data stealer, a CC scraper, encryption of modules, a new control flow, persistence, and support for new web injects. The botnet is sold to many cybercriminals and there are many builds. How to classify them into groups? What can be collected that is useful to classify the botnets?

Only the build ID is relevant; the name, for example, is "NONE" in 95% of cases. They found 120 different build IDs classified into 41 unique botnets, 18 of them really active and 3 special cases. They reviewed some botnets and named them with their own convention. Of course, they found some funny stories, like a botnet that injected "Yaaaaaar" in front of all strings in the web inject module. They also found misused commands, disclosure of data, and debugging information left in the code. Conclusion: malware developers make mistakes too.

The next slot was assigned to Joie SALVIO & Floser BACURIO Jr. with "Trickbot The Trick is On You!". They gave the same kind of presentation, this time on the banking malware Trickbot. Discovered in 2016, it has also evolved with new features. They paid particular attention to the communication channels used by the malware.

Finally, the day ended with Ivan KWIATKOWSKI & Ronan MOUCHOUX, who presented "Automation, structured knowledge in Tactical Threat Intelligence". After an introduction to and definition of "intelligence" (it's a consumer-driven activity), they explained what Tactical Threat Intelligence is and how to implement it. Just one remark about the slides: they were designed with an unfortunate colour palette, which made them difficult to read.

That's all for today, be ready for my second wrap-up tomorrow!

[The post Botconf 2018 Wrap-Up Day #1 has been first published on /dev/random]

05 Dec 2018 11:41pm GMT

Dries Buytaert: Drupal's commitment to accessibility

[Image: a figure opening doors, lit from behind with a bright light]

Last week, WordPress Tavern picked up my blog post about Drupal 8's upcoming Layout Builder.

While I'm grateful that WordPress Tavern covered Drupal's Layout Builder, it is not surprising that the majority of WordPress Tavern's blog post alludes to the potential challenges with accessibility. After all, Gutenberg's lack of accessibility has been a big topic of debate, and a point of frustration in the WordPress community.

I understand why organizations might be tempted to de-prioritize accessibility. Making a complex web application accessible can be a lot of work, and the pressure to ship early can be high.

In the past, I've been tempted to skip accessibility features myself. I believed that because accessibility features benefited only a small group of people, they could come in a follow-up release.

Today, I've come to believe that accessibility is not something you do for a small group of people. Accessibility is about promoting inclusion. When the product you use daily is accessible, it means that we all get to work with a greater number and a greater variety of colleagues. Accessibility benefits everyone.

As you can see in Drupal's Values and Principles, we are committed to building software that everyone can use. Accessibility should always be a priority. Making capabilities like the Layout Builder accessible is core to Drupal's DNA.

Drupal's Values and Principles translate into our development process through what we call an accessibility gate, where we set a clearly defined "must-have bar". Prioritizing accessibility also means that we commit to iteratively improving accessibility beyond that minimum over time.

Together with the accessibility maintainers, we jointly agreed that:

  1. Our first priority is WCAG 2.0 AA conformance. This means that in order to be released as a stable system, the Layout Builder must reach Level AA conformance with WCAG. Without WCAG 2.0 AA conformance, we won't release a stable version of Layout Builder.
  2. Our next priority is WCAG 2.1 AA conformance. We're thrilled at the greater inclusion provided by these new guidelines, and will strive to achieve as much of it as we can before release. Because these guidelines are still new (formally approved in June 2018), we won't hold up releasing the stable version of Layout Builder on them, but are committed to implementing them as quickly as we're able to, even if some of the items are after initial release.
  3. While WCAG AAA conformance is not something currently being pursued, there are aspects of AAA that we are discussing adopting in the future. For example, the new 2.1 AAA "Animations from Interactions", which can be framed as an achievable design constraint: anywhere an animation is used, we must ensure designs are understandable/operable for those who cannot or choose not to use animations.

Drupal's commitment to accessibility is one of the things that makes Drupal's upcoming Layout Builder special: it will not only bring tremendous and new capabilities to Drupal, it will also do so without excluding a large portion of current and potential users. We all benefit from that!

05 Dec 2018 10:56am GMT

04 Dec 2018


Mattias Geniar: Deploying laravel-websockets with Nginx reverse proxy and supervisord


There is a new PHP package available for Laravel users called laravel-websockets that allows you to quickly start a websocket server for your applications.

The added benefit is that it's fully written in PHP, which means it will run on pretty much any system that already runs your Laravel code, without additional tools. Once installed, you can start a websocket server as easily as this:

$ php artisan websockets:serve

That'll open a locally available websocket server, running on 127.0.0.1:6001.

This is great for development, but it also performs pretty well in production. To make that more manageable, we'll run this as a supervisor job with an Nginx proxy in front of it, to handle the SSL part.

Supervisor job for laravel-websockets

The first thing we'll do is make sure that process keeps running forever. If it were to crash (out of memory, killed by someone, throwing exceptions, ...), we want it to automatically restart.

For this, we'll use supervisor, a versatile task runner that is ideally suited for this. Technically, systemd would work equally well for this purpose, as you could quickly add a unit file to run this job; a sketch of such a unit file follows.
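
Purely as a reference, such a unit file could look roughly like this (a hypothetical sketch: the unit name is made up, and the paths mirror the supervisor config below):

# /etc/systemd/system/laravel-websockets.service
[Unit]
Description=Laravel websockets server
After=network.target

[Service]
User=ohdear_prod
Restart=always
ExecStart=/usr/bin/php /var/www/vhosts/ohdear.app/htdocs/artisan websockets:serve

[Install]
WantedBy=multi-user.target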

First, install supervisord.

# On Debian / Ubuntu
apt install supervisor

# On Red Hat / CentOS
yum install supervisor
systemctl enable supervisord

Once installed, add a job for managing this websocket server.

$ cat /etc/supervisord.d/ohdear_websocket_server.ini
[program:websockets]
command=/usr/bin/php /var/www/vhosts/ohdear.app/htdocs/artisan websockets:serve
numprocs=1
autostart=true
autorestart=true
user=ohdear_prod

This example is taken from ohdear.app, where it's running in production.

Once the config has been made, instruct supervisord to load the configuration and start the job.

$ supervisorctl update
$ supervisorctl start websockets

Now you have a running websocket server, but it will still only listen to 127.0.0.1:6001, not very useful for your public visitors that want to connect to that websocket.

Note: if you are expecting a higher number of users on this websocket server, you'll need to increase the maximum number of open files supervisord can open. See this blog post: Increase the number of open files for jobs managed by supervisord.

Add an Nginx proxy to handle the TLS

Let your websocket server run locally and add an Nginx configuration in front of it, to handle the TLS portion. Oh, and while you're at it, add that domain to Oh Dear! to monitor your certificate expiration dates. ;-)

The configuration looks like this, assuming you already have Nginx installed.

$ cat /etc/nginx/conf.d/socket.ohdear.app.conf
server {
  listen        443 ssl;
  listen        [::]:443 ssl;
  server_name   socket.ohdear.app;

  access_log    /var/log/nginx/socket.ohdear.app/proxy-access.log main;
  error_log     /var/log/nginx/socket.ohdear.app/proxy-error.log error;

  # Start the SSL configurations
  ssl_certificate             /etc/letsencrypt/live/socket.ohdear.app/fullchain.pem;
  ssl_certificate_key         /etc/letsencrypt/live/socket.ohdear.app/privkey.pem;
  ssl_session_timeout         3m;
  ssl_session_cache           shared:SSL:30m;
  ssl_protocols               TLSv1.1 TLSv1.2;

  # Diffie Hellmann performance improvements
  ssl_ecdh_curve              secp384r1;

  location / {
    proxy_pass                          http://127.0.0.1:6001;
    proxy_set_header Host               $host;
    proxy_set_header X-Real-IP          $remote_addr;

    proxy_set_header X-Forwarded-For    $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto  https;
    proxy_set_header X-VerifiedViaNginx yes;
    proxy_read_timeout                  60;
    proxy_connect_timeout               60;
    proxy_redirect                      off;

    # Specific for websockets: force the use of HTTP/1.1 and set the Upgrade header
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
    proxy_cache_bypass $http_upgrade;
  }
}

Everything that connects to socket.ohdear.app over TLS will be proxied to a local service on port 6001, in plain text. This offloads all the TLS (and certificate management) to Nginx, keeping your websocket server configuration as clean and simple as possible.

This also makes automation via Let's Encrypt a lot easier, as there are already implementations that will manage the certificate configuration in your Nginx and reload them when needed.
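
For example, with certbot's Nginx plugin, requesting a certificate and having it wired into the config above can be as simple as this (assuming certbot is installed):

$ certbot --nginx -d socket.ohdear.app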

The post Deploying laravel-websockets with Nginx reverse proxy and supervisord appeared first on ma.ttias.be.

04 Dec 2018 10:18pm GMT