25 Apr 2017

Planet Grep

Dries Buytaert: Drupal is API-first, not API-only

More and more developers are choosing content-as-a-service solutions known as headless CMSes - content repositories which offer no-frills editorial interfaces and expose content APIs for consumption by an expanding array of applications. Headless CMSes share a few common traits: they lack end-user front ends, provide few to no editorial tools for display and layout, and as such leave presentational concerns almost entirely up to the front-end developer. It is precisely this front-end freedom that has made headless CMSes popular among developers.

Due to this trend among developers, many are rightfully asking whether headless CMSes are challenging the market for traditional CMSes. I'm not convinced that headless CMSes as they stand today are where the CMS world in general is headed. In fact, I believe a nuanced view is needed.

In this blog post, I'll explain why Drupal has one crucial advantage that propels it beyond the emerging headless competitors: it can be an exceptional CMS for editors who need control over the presentation of their content and a rich headless CMS for developers building out large content ecosystems in a single package.

As Drupal continues to power the websites that have long been its bread and butter, it is also used more and more to serve content to other back-end systems, single-page applications, native applications, and even conversational interfaces - all at the same time.

Headless CMSes are leaving editors behind

This diagram illustrates the differences between a traditional Drupal website and a headless CMS with various front ends receiving content.

Some claim that headless CMSes will replace traditional CMSes like Drupal and WordPress when it comes to content editors and marketers. I'm not so sure.

Where headless CMSes fall flat is in the areas of in-context administration and in-place editing of content. Our outside-in efforts, in contrast, aim to allow an editor to administer content and page structure in an interface alongside a live preview rather than in an interface that is completely separate from the end user experience. Some examples of this paradigm include dragging blocks directly into regions or reordering menu items and then seeing both of these changes apply live.

By their nature, headless CMSes lack a full-fledged editorial experience integrated into the front ends to which they serve content. Unless they expose a content editing interface tied to each front end, in-context administration and in-place editing are impossible. In other words, to provide an editorial experience on the front end, that front end must be aware of the content editing interface - hence the necessity of coupling.

Display and layout manipulation is another area that is key to making marketers successful. One of Drupal's key features is the ability to control where content appears in a layout structure. Headless CMSes are unopinionated about display and layout settings. But just like in-place editing and in-context administration, editorial tools that enable this need to be integrated into the front end that faces the end user in order to be useful.

In addition, editors and marketers are particularly concerned about how content will look once it's published. Access to an easy end-to-end preview system, especially for unpublished content, is essential to many editors' workflows. In the headless CMS paradigm, developers have to jump through fairly significant hoops to enable seamless preview, including setting up a new API endpoint or staging environment and deploying a separate version of their application that issues requests against new paths. As a result, I believe seamless preview - without having to tap on a developer's shoulder - is still necessary.

Features like in-place editing, in-context administration, layout manipulation, and seamless but faithful preview are essential building blocks for an optimal editorial experience for content creators and marketers. For some use cases, these drawbacks are totally manageable, especially where an application needs little editorial interaction and is more developer-focused. But for content editors, headless CMSes simply don't offer the toolkits they have come to expect; they fall short where Drupal shines.

Drupal empowers both editors and application developers

This diagram illustrates the differences between a coupled - but headless-enabled - Drupal website and a headless CMS with various front ends receiving content.

All of this isn't to say that headless isn't important. Headless is important, but supporting both headless and traditional approaches is one of the biggest advantages of Drupal. After all, content management systems need to serve content beyond editor-focused websites to single-page applications, native applications, and even emerging devices such as wearables, conversational interfaces, and IoT devices.

Fortunately, the ongoing API-first initiative is actively working to advance existing and new web services efforts that make using Drupal as a content service much easier and more optimal for developers. We're working on making developers of these applications more productive, whether through web services that provide a great developer experience like JSON API and GraphQL or through tooling that accelerates headless application development like the Waterwheel ecosystem.

For me, the key takeaway of this discussion is: Drupal is great for both editors and developers. But there are some caveats. For web experiences that need significant focus on the editor or assembler experience, you should use a coupled Drupal front end which gives you the ability to edit and manipulate the front end without involving a developer. For web experiences where you don't need editors to be involved, Drupal is still ideal. In an API-first approach, Drupal provides for other digital experiences that it can't explicitly support (those that aren't web-based). This keeps both options open to you.

Drupal for your site, headless Drupal for your apps

This diagram illustrates the ideal architecture for Drupal, which should be leveraged as both a front end in and of itself as well as a content service for other front ends.

In this day and age, having all channels served by a single source of truth for content is important. But what architecture is optimal for this approach? While reading this you might have also experienced some déjà-vu from a blog post I wrote last year about how you should decouple Drupal, which is still solid advice nearly a year after I first posted it.

Ultimately, I recommend an architecture where Drupal is simultaneously coupled and decoupled; in short, Drupal shines when it's positioned both for editors and for application developers, because Drupal is great at both roles. In other words, your content repository should also be your public-facing website - a contiguous site with full editorial capabilities. At the same time, it should be the centerpiece for your collection of applications, which don't necessitate editorial tools but do offer your developers the experience they want. Keeping Drupal as a coupled website, while concurrently adding decoupled applications, isn't a limitation; it's an enhancement.

Conclusion

Today's goal isn't to make Drupal API-only, but rather API-first. It doesn't limit you to a coupled approach like CMSes without APIs, and it doesn't limit you to an API-only approach like Contentful and other headless CMSes. To me, that is the most important conclusion to draw from this: Drupal supports an entire spectrum of possibilities. This allows you to make the proper trade-off between optimizing for your editors and marketers, or for your developers, and to shift elsewhere on that spectrum as your needs change.

It's a spectrum that encompasses both extremes of the scenarios that a coupled approach and headless approach represent. You can use Drupal to power a single website as we have for many years. At the same time, you can use Drupal to power a long list of applications beyond a traditional website. In doing so, Drupal can be adjusted up and down along this spectrum according to the requirements of your developers and editors.

In other words, Drupal is API-first, not API-only, and rather than leave editors and marketers behind in favor of developers, it gives everyone what they need in one single package.

Special thanks to Preston So for contributions to this blog post and to Wim Leers, Ted Bowman, Chris Hamper and Matt Grill for their feedback during the writing process.

25 Apr 2017 4:59pm GMT

Frank Goossens: Music from Our Tube; Laura Marling’s Soothing

What a song, what a voice, what an atmosphere. And two basses? Wow, just wow!

(Embedded YouTube video)

And if you find the clip too distracting, the song works great live as well.


25 Apr 2017 4:33pm GMT

Jeroen De Dauw: PHP development with Docker

I'm the kind of dev who dreads configuring webservers and who would rather not have to put up with random ops stuff before being able to get work done. Docker is one of those things I'd never looked into, cause clearly it's evil annoying boring evil confusing evil ops stuff. Two of my colleagues just introduced me to a one-line Docker command that kind of blew my mind.

Want to run tests for a project but don't have PHP7 installed? Want to execute a custom Composer script that runs both these tests and the linters without having Composer installed? Don't want to execute code you are not that familiar with on your machine that contains your private keys, etc? Assuming you have Docker installed, this command is all you need:

docker run --rm --interactive --tty --volume $PWD:/app -w /app\
 --volume ~/.composer:/composer --user $(id -u):$(id -g) composer composer ci

This command uses the Composer Docker image, as indicated by the first of the two occurrences of composer at the end of the command (the second is the binary being run). After that you can specify whatever you want to execute, in this case composer ci, where ci is a custom Composer script. (If you want to know what the Docker image is doing behind the scenes, check its entry point file.)
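By way of illustration, such a ci script would live in the scripts section of the project's composer.json. The script contents below are hypothetical, not taken from any particular project:

```json
{
    "scripts": {
        "ci": [
            "@test",
            "@cs"
        ],
        "test": "vendor/bin/phpunit",
        "cs": "vendor/bin/phpcs"
    }
}
```

Composer's @-syntax lets one script invoke others, so composer ci here would run the tests and then the linter in sequence.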

This works without having PHP or Composer installed, and is very fast after the initial dependencies have been pulled. And each time you execute the command, the environment is destroyed, avoiding state leakage. You can create a composer alias in your .bash_aliases as follows, and then execute composer on your host just as you would do if it was actually installed (and running) there.

alias composer='docker run --rm --interactive --tty --volume $PWD:/app -w /app\
 --volume ~/.composer:/composer --user $(id -u):$(id -g) composer composer'

Of course you are not limited to running Composer commands, you can also invoke PHPUnit

...(id -g) composer vendor/bin/phpunit

or indeed any PHP code.

...(id -g) composer php -r 'echo "hi";'

This one-liner is not sufficient if you require additional dependencies, such as PHP extensions, databases or webservers. In those cases you probably want to create your own Dockerfile. Though to run the tests of most PHP libraries, you should be good. I've now uninstalled my local Composer and PHP.
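For the PHP-extension case, a minimal custom image extending the Composer one might look like this sketch; the pdo_mysql extension is only an illustrative choice:

```dockerfile
# Sketch: extend the official Composer image (PHP on Alpine)
# with an extra PHP extension. pdo_mysql is just an example.
FROM composer:latest

RUN docker-php-ext-install pdo_mysql
```

You would then build it once with docker build and substitute your image's name for composer in the docker run command above.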

25 Apr 2017 4:14pm GMT

Wouter Verhelst: Removing git-lfs

Git is cool, for reasons I won't go into here.

It doesn't deal very well with very large files, but that's fine; when using things like git-annex or git-lfs, it's possible to deal with very large files.

But what if you've added a file to git-lfs which didn't need to be there? Let's say you installed git-lfs and told it to track all *.zip files, but then it turned out that some of those files were really small, and that the extra overhead of tracking them in git-lfs is causing a lot of grief with your users. What do you do now?

With git-annex, the solution is simple: you just run git annex unannex <filename>, and you're done. You may also need to tell the assistant to no longer automatically add that file to the annex, but beyond that, all is well.

With git-lfs, this works slightly differently. It's not much more complicated, but it's not documented in the man page. The naive way would be to just run git lfs untrack, but when I tried that it didn't work. Instead, I found that the following does work:
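As a sketch of what such a fix typically looks like (not necessarily the author's exact commands): you untrack the pattern and then re-stage the affected files, so that plain git stores their contents again instead of LFS pointers:

```
git lfs untrack '<pattern>'   # stop tracking the pattern in .gitattributes
git rm --cached <file>        # remove the LFS pointer from the index
git add <file>                # re-add the file so plain git stores it
git commit
```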

25 Apr 2017 9:56am GMT

Jan De Dobbeleer: Running Mastodon

Mastonaut's log, tootdate 10. We started out by travelling aboard mastodon.social. Being the largest one, we met with people from all over the fediverse. Some we could understand, others we couldn't. Those were interesting days; I encountered a lot of people fleeing from other places to feel free and be themselves, while others were simply enjoying the ride. It wasn't until we encountered the Pawoo, who turned out to have peculiar tastes when it comes to imagery, that the order in the fediverse got disturbed. But, as we can't expect to get freedom while restricting others', I fetched the plans to build my own instance. Ready to explore the fediverse and its inhabitants on my own, I set out on an exciting journey.

As I do not own a server myself, and still had $55 of credit on Digital Ocean, I decided to set up a simple $5 Ubuntu 16.04 droplet to get started. This setup assumes you've got a domain name, and I will even show you how to run Mastodon on a subdomain while identifying on the root domain. I suggest following the initial server setup guide to make sure you get started the right way. Once you're all set, grab a refreshment and connect to your server through SSH.

Let's start by ensuring we have everything we need to proceed. There are a few dependencies to run Mastodon. We need docker to run the different applications and tools in containers (easiest approach) and nginx to expose the apps to the outside world. Luckily, Digital Ocean has an insane amount of up-to-date documentation we can use. Follow these two guides and report back.

At this point, we're ready to grab the source code. Do this in your location of choice.

git clone https://github.com/tootsuite/mastodon.git

Change to that location and check out the latest release (1.2.2 at the time of writing).

cd mastodon
git checkout 1.2.2

Now that we've got all of this set up, we can build our containers. There's a useful guide made by the Mastodon community that I suggest you follow. Before we make this available to the outside world, we want to tweak our .env.production file to configure the instance. There are a few keys in there we need to adjust, and some we could adjust. In my case, Mastodon runs as a single-user instance, meaning only one user is allowed in: nobody can register, and the home page redirects to that user's profile instead of the login page.

Below are the settings I adjusted. Remember that I run Mastodon on the subdomain mastodon.herebedragons.io, but my user identifies as @jan@herebedragons.io; the config changes below illustrate that behavior. If you have no use for that, just leave the WEB_DOMAIN key commented out. If you do need it, however, you'll still have to add a redirect rule for your root domain that points https://rootdomain/.well-known/host-meta to https://subdomain.rootdomain/.well-known/host-meta. I added a rule on Cloudflare to achieve this, but any approach will do.

# Federation
LOCAL_DOMAIN=herebedragons.io
LOCAL_HTTPS=true

# Use this only if you need to run mastodon on a different domain than the one used for federation.
# Do not use this unless you know exactly what you are doing.
WEB_DOMAIN=mastodon.herebedragons.io

# Registrations
# Single user mode will disable registrations and redirect frontpage to the first profile
SINGLE_USER_MODE=true
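As an illustration of the host-meta redirect mentioned above, an nginx equivalent (if you're not using Cloudflare) could look like the following sketch; the server names mirror the example domains used in this post:

```nginx
# In the root domain's server block (herebedragons.io):
# forward host-meta/WebFinger lookups to the Mastodon subdomain.
location /.well-known/host-meta {
    return 301 https://mastodon.herebedragons.io$request_uri;
}
```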

As we can't run a site without configuring SSL, we'll use Let's Encrypt to secure nginx. Follow the brilliant guide over at Digital Ocean and report back for the last part. Once that's set up, we need to configure nginx (and the DNS settings for your domain) to make Mastodon available for the world to enjoy. You can find my settings here; just make sure to adjust the key file's name and DNS settings. As I redirect all HTTP traffic to HTTPS using Cloudflare, I did not bother to add port 80 to the config, so be sure to add it if needed.

Alright, we're ready to start exploring the fediverse! Make sure to restart nginx to apply the latest settings using sudo service nginx restart, and update the containers to reflect your settings via docker-compose up -d. If all went according to plan, you should see your shiny new instance on your domain name. Create your first user and get ready to toot! In case you did not bother to add an SMTP server, manually confirm your user:

docker-compose run --rm web rails mastodon:confirm_email USER_EMAIL=alice@alice.com

And make sure to give yourself ultimate admin powers so you can configure your instance:

docker-compose run --rm web rails mastodon:make_admin USERNAME=alice

Updating is a straightforward process too. Fetch the latest changes from the remote, check out the tag you want, and update your containers:

docker-compose stop
docker-compose build
docker-compose run --rm web rails db:migrate
docker-compose run --rm web rails assets:precompile
docker-compose up -d

Happy tooting!

25 Apr 2017 12:00am GMT

24 Apr 2017


Philip Van Hoof: RE: Bye Facebook

Wim made a stir in the land of the web. Good for Wim that he rid himself of the shackles of social media.

But how will we bring a generation of people, who are now more or less addicted to social media, to a new platform? And what should that platform look like?

I'm not an anthropologist, but I believe it's human nature, when organizing around new concepts and techniques, that we start central and monolithic. Then we fine-tune it. We figure out that the central organization and monolithic implementation become a limiting factor. Then we decentralize it.

The next step for all those existing and potential so-called 'online services' is to become fully decentralized.

Every family or home should have its own IMAP and SMTP server. Should that be JMAP instead? Probably. But that ain't the point. The fact that every family or home will have its own, is. For chat, XMPP's s2s is like SMTP. Postfix is an implementation of SMTP like ejabberd is for XMPP's s2s. We have Cyrus, Dovecot and others for IMAP, which is the c2s of course. And soon we'll probably have JMAP, too. Addressability? IPv6.

Why not something like this for social media? For the next online appliance, too? Augmented reality worlds can be negotiated in a distributed fashion. Why must Second Life necessarily be centralized? Surely we can run Linden Lab's server software, locally.

Simple: because money is not interested in anything non-centralized. Not yet.

In other news, the Internet stopped working truly well ever since money became its driving factor.

P.S. "The surest way to corrupt a youth is to instruct him to hold in higher esteem those who think alike than those who think differently." (Friedrich Nietzsche)

24 Apr 2017 9:56pm GMT

21 Apr 2017


Wim Leers: Bye Facebook

I deleted my Facebook account because in the past three years, I barely used it. It's ironic, considering I worked there. 1

More irony: I never used it as much as I did when I worked there.

Yet more irony: a huge portion of my Facebook news feed was activity by a handful of Facebook employees. 2

No longer useful

I used to like Facebook because it delivered on its original mission:

Facebook helps you connect and share with the people in your life.

They're clearly no longer true to that mission. 3

When I joined in November 2007, the news feed chronologically listed status updates from your friends. Great!

Since then, they've done every imaginable thing to increase time spent, also known as the euphemistic "engagement". They've done this by surfacing friends' likes, suggested likes, friends' replies, suggested friends, suggested pages to like based on prior likes, and of course: ads. Those things are not only distractions, they're actively annoying.

Instead of minimizing time spent so users can get back to their lives, Facebook has sacrificed users at the altar of advertising: more time spent = more ads shown.

No longer trustworthy

An entire spectrum of concerns to choose from, along two axes: privacy and walled garden. And of course, the interesting intersection of minimized privacy and maximized walled gardenness: the filter bubble.

If you want to know more, see Vicki Boykis' well-researched article.

No thanks

Long story short: Facebook is not for me anymore.

It's okay to not know everything that's been going on. It makes for more interesting conversations when you do get to see each other again.

The older I get, the more I prefer the one communication medium that is not a walled garden, that I can control, back up and search: e-mail.


  1. To be clear: I'm still very grateful for that opportunity. It was a great working environment and it helped my career! ↩︎

  2. Before deleting my Facebook account, I scrolled through my entire news feed - this time it seemed endless, and I stopped after the first 202 items; by then I'd had enough. Of those 202 items, 58 (28%) were by former or current Facebook employees, and 81 (40%) were reports of a like or reply by somebody - which I could not care less about 99% of the time. The remainder? The vast majority of it was mildly interesting at best. Knowing all the trivia in everybody's lives is fatiguing and wasteful, not fascinating and useful. ↩︎

  3. They've changed it since then, to: give people the power to share and make the world more open and connected. ↩︎

21 Apr 2017 8:28pm GMT

Xavier Mertens: [SANS ISC] Analysis of a Maldoc with Multiple Layers of Obfuscation

I published the following diary on isc.sans.org: "Analysis of a Maldoc with Multiple Layers of Obfuscation".

Thanks to our readers, we often get interesting samples to analyze. This time, Frederick sent us a malicious Microsoft Word document called "Invoice_6083.doc" (which was delivered in a zip archive). I had a quick look at it and it was interesting enough for a quick diary… [Read more]

[The post [SANS ISC] Analysis of a Maldoc with Multiple Layers of Obfuscation has been first published on /dev/random]

21 Apr 2017 9:30am GMT

Dries Buytaert: Thoughts as we head to DrupalCon Baltimore

The past weeks have been difficult. I'm well aware that the community is struggling, and it really pains me. I respect the various opinions expressed, including opinions different from my own. I want you to know that I'm listening and that I'm carefully considering the different aspects of this situation. I'm doing my best to progress through the issues and support the work that needs to happen to evolve our governance model. For those that are attending DrupalCon Baltimore and want to help, we just added a community discussions track.

There is a lot to figure out, and I know that it's difficult when there are unresolved questions. Leading up to DrupalCon Baltimore next week, it may be helpful for people to know that Larry Garfield and I are talking. As members of the Community Working Group reported this week, Larry remains a member of the community. While we figure out Larry's future roles, Larry is attending DrupalCon as a regular community member with the opportunity to participate in sessions, code sprints and issue queues.

As we are about to kick off DrupalCon Baltimore, please know that my wish for this conference is for it to be everything you've made it over the years; a time for bringing out the best in each other, for learning and sharing our knowledge, and for great minds to work together to move the project forward. We owe it to the 3,000 people who will be in attendance to make DrupalCon about Drupal. To that end, I ask for your patience towards me, so I can do my part in helping to achieve these goals. It can only happen with your help, support, patience and understanding. Please join me in making DrupalCon Baltimore an amazing time to connect, collaborate and learn, like the many DrupalCons before it.

(I have received a lot of comments and at this time I just want to respond with an update. I decided to close the comments on this post.)

21 Apr 2017 2:20am GMT

20 Apr 2017


Xavier Mertens: Archive.org Abused to Deliver Phishing Pages

The Internet Archive is a well-known website, best known for its "Wayback Machine" service, which allows you to search for and display old versions of websites. Its current Alexa ranking is 262, which makes it a "popular and trusted" website. Indeed, as I explained in a recent SANS ISC diary, whitelists of websites are very important to attackers! The phishing attempt that I detected also used the URL shortener bit.ly (position 9380 in the Alexa list).

The phishing is based on a DHL notification email. The mail has a PDF attached to it:

DHL Notification

This PDF has no malicious content and is therefore not blocked by antispam/antivirus. The link "Click here" points to a bit.ly short URL:

hxxps://bitly.com/2jXl8GJ

Note that HTTPS is used, which already makes the traffic uninspected by many security solutions.


Tip: if you append a "+" to the end of the URL, bit.ly will not redirect you directly to the hidden URL, but will instead display an information page where you can read that URL!


The URL behind the short URL is:

hxxps://archive.org/download/gxzdhsh/gxzdhsh.html

Bit.ly also maintains statistics about the visitors:

bit.ly Statistics

It's impressive to see how many people visited the malicious link. The phishing campaign has also been active since the end of March. Thank you, bit.ly, for this useful information!

This URL returns the following HTML code:

<html>
<head>
<title></title>
<META http-equiv="refresh" content="0;URL=data:text/html;base64, ... (base64 data) ... "
</head>
<body bgcolor="#fffff">
<center>
</center>
</body>
</html>
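Rather than letting a browser follow that refresh, you can strip this first layer offline by base64-decoding the data: URI. A small Python sketch, using a harmless stand-in payload since the real base64 blob is elided above:

```python
import base64

# Harmless stand-in for the base64 blob inside the META refresh tag;
# decoding it reveals the next HTML layer without rendering the page.
blob = base64.b64encode(b"<script>document.write('layer 2')</script>").decode()

layer2 = base64.b64decode(blob).decode()
print(layer2)  # <script>document.write('layer 2')</script>
```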

Decoding the base64 payload in the refresh META tag yields the following HTML code:

<script language="Javascript">
document.write(unescape('%0A%3C%68%74%6D%6C%20%68%6F%6C%61%5F%65%78%74%5F%69%6E%6A%65%63
%74%3D%22%69%6E%69%74%65%64%22%3E%3C%68%65%61%64%3E%0A%3C%6D%65%74%61%20%68%74%74%70%2D
%65%71%75%69%76%3D%22%63%6F%6E%74%65%6E%74%2D%74%79%70%65%22%20%63%6F%6E%74%65%6E%74%3D
%22%74%65%78%74%2F%68%74%6D%6C%3B%20%63%68%61%72%73%65%74%3D%77%69%6E%64%6F%77%73%2D%31
%32%35%32%22%3E%0A%3C%6C%69%6E%6B%20%72%65%6C%3D%22%73%68%6F%72%74%63%75%74%20%69%63%6F
%6E%22%20%68%72%65%66%3D%22%68%74%74%70%3A%2F%2F%77%77%77%2E%64%68%6C%2E%63%6F%6D%2F%69
%6D%67%2F%66%61%76%69%63%6F%6E%2E%67%69%6
...
%3E%0A%09%3C%69%6D%67%20%73%72%63%3D%22%68%74%74%70%3A%2F%2F%77%77%77%2E%66%65%64%61%67
%72%6F%6C%74%64%2E%63%6F%6D%2F%6D%6F%62%2F%44%48%4C%5F%66%69%6C%65%73%2F%61%6C%69%62%61
%62%61%2E%70%6E%67%22%20%68%65%69%67%68%74%3D%22%32%37%22%20%0A%0A%77%69%64%74%68%3D%22
%31%33%30%22%3E%0A%09%3C%2F%74%64%3E%0A%0A%09%3C%2F%74%72%3E%3C%2F%74%62%6F%64%79%3E%3C
%2F%74%61%62%6C%65%3E%3C%2F%74%64%3E%3C%2F%74%72%3E%0A%0A%0A%0A%0A%3C%74%72%3E%3C%74%64
%20%68%65%69%67%68%74%3D%22%35%25%22%20%62%67%63%6F%6C%6F%72%3D%22%23%30%30%30%30%30%30
%22%3E%0A%3C%2F%74%64%3E%3C%2F%74%72%3E%0A%0A%3C%2F%74%62%6F%64%79%3E%3C%2F%74%61%62%6C
%65%3E%0A%0A%0A%0A%3C%2F%62%6F%64%79%3E%3C%2F%68%74%6D%6C%3E'));
</Script>
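The unescape() layer is plain percent-encoding, so it too can be decoded offline without executing any JavaScript. A Python sketch, again with a short harmless stand-in for the real payload:

```python
from urllib.parse import unquote

# Percent-encoded sample standing in for the document.write(unescape(...))
# payload above; unquote() reverses JavaScript's unescape() for this data.
encoded = "%3C%68%74%6D%6C%3E%3C%62%6F%64%79%3E%44%48%4C%3C%2F%62%6F%64%79%3E%3C%2F%68%74%6D%6C%3E"

decoded = unquote(encoded)
print(decoded)  # <html><body>DHL</body></html>
```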

The deobfuscated script displays the following page:

DHL Phishing Page

The pictures were stored on a remote website, but it has already been cleaned up:

hxxp://www.fedagroltd.com/mob/DHL_files/

Stolen data is sent to another website (this one is still alive):

hxxp://www.magnacartapeace.org.ng/wp/stevedhl/kenbeet.php

The question is: how was this phishing page stored on archive.org? If you visit the upper level of the malicious URL (https://archive.org/download/gxzdhsh/), you find this:

archive.org Files

Go up one more directory ('../') and you will find the owner of this page: alextray. This user has many phishing pages available:

alextray's Projects

Indeed, the Internet Archive website allows registered users to upload content, as stated in the FAQ. If you search for 'archive.org/download' on Google, you will find a lot of references to various content (most of it harmless), but on VT there are references to malicious content hosted on archive.org.

Here is the list of phishing sites hosted by "alextray". You can use them as IOCs:

hxxps://archive.org/download/gjvkrduef/gjvkrduef.html
hxxps://archive.org/download/Jfojasfkjafkj/jfojas;fkj;afkj;.html
hxxps://archive.org/download/ygluiigii/ygluiigii.html (Yahoo!)
hxxps://archive.org/download/ugjufhugyj/ugjufhugyj.html (Microsoft)
hxxps://archive.org/download/khgjfhfdh/khgjfhfdh.html (DHL)
hxxps://archive.org/download/iojopkok/iojopkok.html (Adobe)
hxxps://archive.org/download/Lkmpk/lkm[pk[.html (Microsoft)
hxxps://archive.org/download/vhjjjkgkgk/vhjjjkgkgk.html (TNT)
hxxps://archive.org/download/ukryjfdjhy/ukryjfdjhy.html (TNT)
hxxps://archive.org/download/ojodvs/ojodvs.html (Adobe)
hxxps://archive.org/download/sfsgwg/sfsgwg.html (DHL)
hxxps://archive.org/download/ngmdlxzf/ngmdlxzf.html (Microsoft)
hxxps://archive.org/download/zvcmxlvm/zvcmxlvm.html (Microsoft)
hxxps://archive.org/download/ugiutiyiio/ugiutiyiio.html (Yahoo!)
hxxps://archive.org/download/ufytuyu/ufytuyu.html (Microsoft Excel)
hxxps://archive.org/download/xgfdhfdh/xgfdhfdh.html (Adobe)
hxxps://archive.org/download/itiiyiyo/itiiyiyo.html (DHL)
hxxps://archive.org/download/hgvhghg/hgvhghg.html (Google Drive)
hxxps://archive.org/download/sagsdg_201701/sagsdg.html (Microsoft)
hxxps://archive.org/download/bljlol/bljlol.html (Microsoft)
hxxps://archive.org/download/gxzdhsh/gxzdhsh.html (DHL)
hxxps://archive.org/download/bygih_201701/bygih.html (DHL)
hxxps://archive.org/download/bygih/bygih.html (DHL)
hxxps://archive.org/download/ygi9j9u9/ygi9j9u9.html (Yahoo!)
hxxps://archive.org/download/78yt88/78yt88.html (Microsoft)
hxxps://archive.org/download/vfhyfu/vfhyfu.html (Yahoo!)
hxxps://archive.org/download/yfuyj/yfuyj.html (DHL)
hxxps://archive.org/download/afegwe/afegwe.html (Microsoft)
hxxps://archive.org/download/nalxJL/nalxJL.html (DHL)
hxxps://archive.org/download/jfleg/jfleg.html (DHL)
hxxps://archive.org/download/yfigio/yfigio.html (Microsoft)
hxxps://archive.org/download/gjbyk/gjbyk.html (Microsoft)
hxxps://archive.org/download/nfdnkh/nfdnkh.html (Yahoo!)
hxxps://archive.org/download/GfhdtYry/gfhdt%20yry.html (Microsoft)
hxxps://archive.org/download/fhdfxhdh/fhdfxhdh.html (Microsoft)
hxxps://archive.org/download/iohbo6vu5/iohbo6vu5.html (DHL)
hxxps://archive.org/download/sgsdgh/sgsdgh.html (Adobe)
hxxps://archive.org/download/mailiantrewl/mailiantrewl.html (Google)
hxxps://archive.org/download/ihiyi/ihiyi.html (Microsoft)
hxxps://archive.org/download/glkgjhtrku/glkgjhtrku.html (Microsoft)
hxxps://archive.org/download/pn8n8t7r/pn8n8t7r.html (Microsoft)
hxxps://archive.org/download/aEQWGG/aEQWGG.html (Yahoo!)
hxxps://archive.org/download/isajcow/isajcow.html (Yahoo!)
hxxps://archive.org/download/pontiffdata_yahoo_Kfdk/;kfd;k.html (Yahoo!)
hxxps://archive.org/download/vuivi/vuivi.html (TNT)
hxxps://archive.org/download/lmmkn/lmmkn.html (Microsoft)
hxxps://archive.org/download/ksafaF/ksafaF.html (Google)
hxxps://archive.org/download/fsdgs/fsdgs.html (Microsoft)
hxxps://archive.org/download/joomlm/joomlm.html (Microsoft)
hxxps://archive.org/download/rdgdh/rdgdh.html (Adobe)
hxxps://archive.org/download/pontiffdata_yahoo_Bsga/bsga.html (Microsoft)
hxxps://archive.org/download/ihgoiybot/ihgoiybot.html (Microsoft)
hxxps://archive.org/download/dfhrf/dfhrf.html (Microsoft)
hxxps://archive.org/download/pontiffdata_yahoo_Kgfk_201701/kgfk.html (Microsoft)
hxxps://archive.org/download/jhlhj/jhlhj.html (Yahoo!)
hxxps://archive.org/download/pontiffdata_yahoo_Kgfk/kgfk.html (Microsoft)
hxxps://archive.org/download/pontiffdata_yahoo_Gege/gege.html (Microsoft)
hxxps://archive.org/download/him8ouh/him8ouh.html (DHL)
hxxps://archive.org/download/maiikillll/maiikillll.html (Google)
hxxps://archive.org/download/pontiffdata_yahoo_Mlv/mlv;.html (Microsoft)
hxxps://archive.org/download/oiopo_201701/oiopo.html (Microsoft)
hxxps://archive.org/download/ircyily/ircyily.html (Microsoft)
hxxps://archive.org/download/vuyvii/vuyvii.html (DHL)
hxxps://archive.org/download/fcvbt_201612/fcvbt.html (Microsoft)
hxxps://archive.org/download/poksfcps/poksfcps.html (Yahoo!)
hxxps://archive.org/download/tretr_201612/tretr.html
hxxps://archive.org/download/eldotrivoloto_201612/eldotrivoloto.html (Microsoft)
hxxps://archive.org/download/babalito_201612/babalito.html (Microsoft)
hxxps://archive.org/download/katolito_201612/katolito.html (Microsoft)
hxxps://archive.org/download/kingshotties_201612/kingshotties.html (Microsoft)
hxxps://archive.org/download/fcvbt/fcvbt.html (Microsoft)
hxxps://archive.org/download/vkvkk/vkvkk.html (DHL)
hxxps://archive.org/download/pontiffdata_yahoo_Vkm/vkm;.html (Microsoft)
hxxps://archive.org/download/hiluoogi/hiluoogi.html (Microsoft)
hxxps://archive.org/download/ipiojlj/ipiojlj.html (Microsoft)

[The post Archive.org Abused to Deliver Phishing Pages was first published on /dev/random]

20 Apr 2017 9:18pm GMT

Xavier Mertens: [SANS ISC] DNS Query Length… Because Size Does Matter

I published the following diary on isc.sans.org: "DNS Query Length… Because Size Does Matter".

In many cases, DNS remains a goldmine to detect potentially malicious activity. DNS can be used in multiple ways to bypass security controls. DNS tunnelling is a common way to establish connections with remote systems. It is often based on "TXT" records used to deliver the encoded payload. "TXT" records are also used for good reasons, like delivering SPF records, but too many TXT DNS requests could mean that something weird is happening on your network… [Read more]
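As a toy illustration of the idea behind the diary (this is not code from the diary itself), a length-based filter over query names could be sketched like this; the threshold and the sample names are assumptions:

```python
# Illustrative sketch: flag DNS query names whose length is unusually
# large, a simple heuristic for spotting tunnelling-style traffic.
def suspicious_queries(qnames, max_len=52):
    """Return query names longer than max_len characters.

    Legitimate hostnames are rarely this long; encoded payloads used by
    tunnelling tools often are. The threshold is a tunable assumption.
    """
    return [q for q in qnames if len(q) > max_len]

queries = [
    "www.example.com",
    "mail.google.com",
    # base32-looking label typical of tunnelled data (hypothetical):
    "nbswy3dpeb3w64tmmqqhg5dsnfxgoidtn5xgk3tto5uxi4tbnzttgcq.evil.example",
]
print(suspicious_queries(queries))
```

In practice you would feed this the query names extracted from your DNS logs and combine it with other signals (record type, query rate per client) rather than rely on length alone.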

[The post [SANS ISC] DNS Query Length… Because Size Does Matter was first published on /dev/random]

20 Apr 2017 10:55am GMT

Claudio Ramirez: Notes from my Unity -> Gnome3 migration

Updated: 20170419: gnome-shell extension browser integration.
Updated: 20170420: natural scrolling on X instead of Wayland.

Introduction

Mark Shuttleworth, founder of Ubuntu and Canonical, dropped a bombshell: Ubuntu is dropping Unity 8 and, by extension, the Mir graphical server on the desktop. Starting with the 18.04 release, Ubuntu will use Gnome 3 as the default desktop environment.

Sadly, the desktop environment used by millions of Ubuntu users, Unity 7, now has no path forward. Unity 7 runs on the X.org graphical stack, while the Linux world, Ubuntu now included, is slowly but surely moving to Wayland (it will be the default on Ubuntu 18.04 LTS). It's clear that Unity has its detractors, and it's true that the first releases (6 years ago!) were limited and buggy. Today, however, Unity 7 is a beautiful and functional desktop environment. I happily use it at home and at work.

Soon-to-be-dead code is dead code, so even as a happy user I see no point in staying with Unity. I prefer to make the jump now rather than spend a year on a desktop on life support. Among other environments, I have been a full-time user of CDE, Window Maker, Gnome 1.*, KDE 2.*, Java Desktop System, OpenSolaris Desktop, LXDE and XFCE. I'll survive :).

These notes collect the changes I felt I needed to make to a vanilla Ubuntu Gnome 3 setup to make it work for me. I made the jump one week before the release of 17.04, so I'll stick with 17.04 and skip the 16.10 instructions (in short: you'll need to install gnome-shell-extension-dashtodock from an external source instead of the Ubuntu repos).

The easiest way to use Gnome on Ubuntu is, of course, to install the Ubuntu Gnome distribution. If you're upgrading, you can do it manually. In case you want to remove Unity and install Gnome at the same time:
$ sudo apt-get remove --purge ubuntu-desktop lightdm && sudo apt-get install ubuntu-gnome-desktop && sudo apt-get remove --purge $(dpkg -l | grep -i unity | awk '{print $2}') && sudo apt-get autoremove -y

Changes

Add Extensions:

  1. Install Gnome 3 extensions to customize the desktop experience:
    $ sudo apt-get install -y gnome-tweak-tool gnome-shell-extension-top-icons-plus gnome-shell-extension-dashtodock gnome-shell-extension-better-volume gnome-shell-extension-refreshwifi gnome-shell-extension-disconnect-wifi
  2. Install the gnome-shell browser integration (the version in the main Ubuntu repos does not work):
    $ sudo add-apt-repository ppa:ne0sight/chrome-gnome-shell && sudo apt-get update && sudo apt-get install chrome-gnome-shell
  3. Install the "Refresh wifi" extension by going with Firefox or Chrome to the Gnome Extensions website. You'll need to install a browser plugin. Refresh the page after installing the plugin.
  4. Log off in order to activate the extensions.
  5. Start gnome-tweak-tool and enable "Better volume indicator" (scroll wheel changes the volume), "Dash to dock" (a more Unity-like, configurable dock; I set the "Icon size limit" to 24 and "Behavior-Click Action" to "minimize"), "Disconnect wifi" (allows disconnecting from a network without switching Wifi off), "Refresh Wifi connections" (auto-refreshes the wifi list) and "Topicons plus" (puts non-Gnome icons like Dropbox and Pidgin in the top menu).

Change window size and buttons:

  1. On the Windows tab, I enabled the Maximise and Minimise titlebar buttons.
  2. Make the window top bars smaller if you wish. Just create ~/.config/gtk-3.0/gtk.css with these lines:
    /* From: http://blog.samalik.com/make-your-gnome-title-bar-smaller-fedora-24-update/ */
    window.ssd headerbar.titlebar {
      padding-top: 4px;
      padding-bottom: 4px;
      min-height: 0;
    }
    window.ssd headerbar.titlebar button.titlebutton {
      padding: 0px;
      min-height: 0;
      min-width: 0;
    }

Disable "natural scrolling" for mouse wheel:

While I like "natural scrolling" with the touchpad (enable it in the mouse preferences), I don't like it on the mouse wheel. To disable it only on the mouse:
$ gsettings set org.gnome.desktop.peripherals.mouse natural-scroll false

If you run Gnome on good old X instead of Wayland (e.g. for driver support or more stability while Wayland matures), you need to use libinput instead of the synaptics driver to make "natural scrolling" possible:

$ sudo mkdir -p /etc/X11/xorg.conf.d && sudo cp -rp /usr/share/X11/xorg.conf.d/40-libinput.conf /etc/X11/xorg.conf.d/

Log out.

Enable Thunderbird notifications:

For Thunderbird new mail notification I installed the gnotifier Thunderbird add-on: https://addons.mozilla.org/en-us/thunderbird/addon/gnotifier/

Extensions that I tried and liked but ended up not using:

That's it (so far 🙂 ).

Thx to @sil, @adsamalik and Jonathan Carter.


Filed under: Uncategorized Tagged: gnome, Gnome3, Linux, Linux Desktop, Thanks for all the fish, Ubuntu, unity

20 Apr 2017 9:57am GMT

Claudio Ramirez: MS Office 365 (Click-to-Run): Remove unused applications

Too many MS Office 365 apps

Update 20170421:
- update for MS Office 2016.
- fix configuration.xml view on WordPress.

If you install Microsoft Office through Click-to-Run you'll end up with the full suite installed. You can no longer select which applications you want to install. That's kind of OK because you pay for the complete suite. Or at least the organisation (school, work, etc.) offering the subscription does. But maybe you are like me and you dislike installing applications you don't use. Or even more like me: you're a Linux user with a Windows VM you boot once in a while out of necessity. And unused applications in a VM residing on your disk are *really* annoying.

The Microsoft documentation for removing the unused applications (Access as a DB? Yeah, right…) wasn't very straightforward, so here is what worked for me after the needed trial-and-error. This is a small howto:

  • Install the Office Deployment Toolkit (download for MS Office 2013, 2016). The installer asks for an installation location. I put it in C:\Users\nxadm\OfficeDeployTool (change the username accordingly). If you're short on space (or in a VM), you can put it on a mounted share.
  • Create a configuration.xml listing the applications you want to exclude. The file should reside in the directory you chose for the Office Deployment Toolkit (e.g. C:\Users\nxadm\OfficeDeployTool\configuration.xml), or you should refer to it with its full path name. You can find the full list of AppIDs here (more info about other settings). Add or remove ExcludeApp entries as desired. My configuration file is as follows (WordPress strips the xml code below, hence the image):
    configuration.xml
  • If you run the 64-bit Office version change OfficeClientEdition="32" to OfficeClientEdition="64".
  • Download the office components. Type in a cmd box:
    C:\Users\\OfficeDeployTool>setup.exe /download configuration.xml
  • Remove the unwanted applications:
    C:\Users\\OfficeDeployTool>setup.exe /configure configuration.xml
  • Delete (if you want) the Office Deployment Toolkit directory. In particular, the cached installation files in the "Office" directory take a lot of space.
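Since WordPress ate the original XML (hence the screenshot above), here is an illustrative reconstruction of what such a configuration.xml can look like. The Product ID, language and excluded applications below are assumptions; adapt them to your own subscription and needs:

```xml
<!-- Illustrative example only; adjust the Product ID, language and the
     ExcludeApp list to your own subscription. -->
<Configuration>
  <Add SourcePath="C:\Users\nxadm\OfficeDeployTool" OfficeClientEdition="32">
    <Product ID="O365ProPlusRetail">
      <Language ID="en-us" />
      <ExcludeApp ID="Access" />
      <ExcludeApp ID="Publisher" />
      <ExcludeApp ID="OneNote" />
      <ExcludeApp ID="Groove" />
    </Product>
  </Add>
</Configuration>
```

Each ExcludeApp element tells setup.exe to drop that application during the /configure step.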

Enjoy the space and faster updates. If you are using a VM, don't forget to defragment and compact the Virtual Hard Disk to reclaim the space.


Filed under: Uncategorized Tagged: Click-to-Run, MS Office 365, VirtualBox, vm, VMWare, Windows

20 Apr 2017 8:49am GMT

19 Apr 2017

feedPlanet Grep

Xavier Mertens: [SANS ISC] Hunting for Malicious Excel Sheets

I published the following diary on isc.sans.org: "Hunting for Malicious Excel Sheets".

Recently, I found a malicious Excel sheet which contained a VBA macro. One particularity of this file was that useful information was stored in cells. The VBA macro read and used them to download the malicious PE file. The Excel file looked classic, asking the user to enable macros… [Read more]

[The post [SANS ISC] Hunting for Malicious Excel Sheets was first published on /dev/random]

19 Apr 2017 10:58am GMT

Mattias Geniar: DNS Spy has launched!

The post DNS Spy has launched! appeared first on ma.ttias.be.

I set out to create a DNS monitoring & validation solution called DNS Spy and I'm happy to report: it has launched!

It's been in private beta since 2016 and in public beta since March 2017. After almost 6 months of feedback, features and bugfixes, I think it's ready for you to kick the tires.

What's DNS Spy?

In case you haven't been following me the last few months, here's a quick rundown of DNS Spy.

There are many more features, like CNAME resolving, public domain scanning, offline & change notifications, ... that all make DNS Spy what it is: a reliable & stable DNS monitoring solution.

A new look & logo

The beta design of DNS Spy was built with a Font Awesome icon and some copy/paste Bootstrap templates, just to validate the idea. I've gotten enough feedback to feel confident that DNS Spy adds real value, so it was time to make the look & feel match that sentiment.

This was the first design:

Here's the new & improved look.

It's got a brand new look, a custom logo and a way to publicly scan & rate your domain configuration.

Public scoring system

You've probably heard of tools like SSL Labs' test & Security Headers, free web services that let you rate and check your server configurations, each focused on its own domain.

From now on, DNS Spy also has such a feature.

Above is the DNS Spy scan report for StackOverflow.com, which has a rock-solid DNS setup.

We rate things like connectivity (IPv4 & IPv6, records synced, ...), performance, resilience & security (how many providers, domains, DNSSEC & CAA support, ...) and DNS records (how SPF/DMARC is set up, whether your TTLs are long enough, whether your NS records match your nameservers, ...).
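DNS Spy's actual scoring logic isn't public; purely as an illustration of the kind of record checks listed above, a naive SPF/DMARC sanity test could be sketched like this (the rules and return values are simplifying assumptions, not DNS Spy's implementation):

```python
# Rough illustration: naive sanity checks on SPF and DMARC TXT records.
def check_spf(txt):
    """An SPF record should start with 'v=spf1' and end with a
    restrictive 'all' mechanism ('-all' or '~all')."""
    if not txt.startswith("v=spf1"):
        return "missing or malformed"
    if txt.rstrip().endswith(("-all", "~all")):
        return "ok"
    return "too permissive"

def check_dmarc(txt):
    """A DMARC record should declare v=DMARC1 and carry a policy tag."""
    if not txt.startswith("v=DMARC1"):
        return "missing or malformed"
    return "ok" if "p=" in txt else "no policy"

print(check_spf("v=spf1 include:_spf.google.com ~all"))
print(check_dmarc("v=DMARC1; p=reject; rua=mailto:postmaster@example.com"))
```

A real scanner would of course resolve the TXT records itself and apply far more nuance (multiple records, include chains, alignment, ...), but the shape of the check is the same.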

The aim is for DNS Spy to become the SSL Labs of DNS configurations. To keep improving it, I welcome any feedback from you!

If you're curious how your domain scores, scan it via dnsspy.io.

Help me promote it?

Next up, of course, is promotion. There are a lot of ways to promote a service, and advertising is surely going to be one of them.

But if you've used DNS Spy and like it, or if you've scanned your domain and are proud of your results, feel free to spread the word about DNS Spy to your friends, coworkers, online followers, ... You'd have my eternal gratitude! :-)

DNS Spy is available on dnsspy.io or via @dnsspy on Twitter.

The post DNS Spy has launched! appeared first on ma.ttias.be.

19 Apr 2017 8:30am GMT

18 Apr 2017

feedPlanet Grep

Lionel Dricot: Mastodon, the first truly social social network?

You may have heard of Mastodon, the new social network competing with Twitter. Its advantages? A per-post limit raised from 140 to 500 characters and an approach oriented towards community and mutual respect, where Twitter has too often been a playground for cyber-harassment.

But one of Mastodon's major particularities is decentralisation: it is not a single service owned by one company but a network, like email.

While anyone can in theory create their own Mastodon instance, most of us will join existing ones. I personally joined mamot.fr, the instance run by La Quadrature du Net, because I trust the association's longevity and technical competence and, above all, I share its values of neutrality and freedom of expression. I also recommend framapiaf.org, which is administered by Framasoft.

But you will find plenty of instances: from those of the French and Belgian Pirate parties to themed instances. There are even paid instances and, why not, there could one day be instances with ads.

The beauty of all this lies, of course, in the choice. The La Quadrature du Net and Framasoft instances are open and free, so I advise setting up a small recurring donation to the association of €2, €5 or €10 a month, depending on your means.

Is Mastodon decentralised? Strictly speaking, we should say "distributed". Five years ago, I pointed out the problems of decentralised/distributed solutions, the main one being that you are at the mercy of the goodwill, or the blunders, of your instance's administrator.

It must be said that Mastodon has not technically solved any of these problems. But it seems to be creating a fine community dynamic that is a pleasure to watch. Unlike its ancestor Identi.ca, instances have multiplied quickly. Conversations have started and usages have emerged spontaneously: welcoming newcomers, following people with few followers to encourage them, discussing best practices transparently, using the CW (Content Warning) feature to hide potentially inappropriate messages, debating moderation rules.

All this energy gives the impression of a space apart, of a freedom of discussion far removed from the omnipresent and omniscient advertising surveillance inseparable from the Facebook, Twitter or Google tools.

Incidentally, one person proposed that we speak not of Mastodon "users" but of "people".

In a previous article, I argued that social networks are the beginnings of a global consciousness of humanity. But as Neil Jomunsi points out, the medium is an inseparable part of the message. Do we really want humanity to be represented by an advertising platform that seeks to exploit its users' brain time?

Mastodon is therefore, in my view, the expression of a real need, of something missing. Part of our humanity is stifled by advertising, consumption and conformism, and is looking for a space in which to express itself.

Could Mastodon then be the first popular distributed social network? Will it manage to convince less technical users and stand out enough not to be "yet another free clone" (as Diaspora unfortunately is for Facebook)?

Will Mastodon last? As long as there are volunteers to run instances, Mastodon will keep existing, regardless of stock prices, governments, the laws of any particular country or the wishes of investors. The same cannot be said of Facebook or Twitter.

But above all, a wind of fresh utopia blows through Mastodon, an air of naive freedom, a feeling of collaborative humanity where the quality of the exchanges outweighs the race for audience. It feels good.

Don't hesitate to join us, read Funambuline's user guide and post a first "toot" introducing your interests. If you mention that you come from me ( @ploum@mamot.fr ), I will "boost" you (the equivalent of a retweet) and the community will suggest people to follow.

In the end, it matters little whether Mastodon succeeds or disappears in a few months. We must keep trying, testing and experimenting until it works. If it is not Diaspora or Mastodon, it will be the next one. Our global consciousness, our expression and our exchanges deserve better than being mere filler between two ads on a platform subject to laws over which we have no say.

Mastodon is a social network. Twitter and Facebook are advertising networks. Let's not be fooled any longer.

Photo by Daniel Mennerich.

This text was published thanks to your regular support on Tipeee and Paypal. I am @ploum, blogger, writer, speaker and futurologist. You can follow me on Facebook, Medium or contact me.

This text is published under the CC-By BE licence.

18 Apr 2017 10:10pm GMT