22 Feb 2018

Planet Grep

Jeroen De Dauw: Collaboration Best Practices

During 2016 and 2017 I worked in a small three-person dev team at Wikimedia Deutschland, aptly named the FUN team. This is the team that rewrote our fundraising application and implemented the Clean Architecture. Shortly after we started working on new features, we noticed room for improvement in many areas of our collaboration. We talked about these and created a Best Practices document, the content of which I share in this blog post.

We created this document in October 2016 and it focused on the issues we were having at the time, or things we were otherwise concerned about. Hence it is by no means a comprehensive list, and it might contain things not applicable to other teams due to culture/value differences or different modes of working. That said, I think it can serve as a source of inspiration for other teams.

Make sure others can pick up a task where it was left

Make sure others can (start) work(ing) on tasks

Make sure others know what is going on

Shared commitment

22 Feb 2018 2:59am GMT

21 Feb 2018


Frank Goossens: Music from Our Tube; Jordan Rakei Eye to Eye

What starts out as a somewhat Buckley-esque guitar & voice dreamscape evolves into a modern broken-beat-inspired song. The whole is quite intense and somber and has had me listening to it on repeat for the last hour or so. And now it's your turn!


21 Feb 2018 1:16pm GMT

Claudio Ramirez: Ubuntu 17.10 + Gnome: some hidden configurations

Update 20180216: remark about non-Ubuntu extensions & stability.

I like what the Ubuntu people did when adopting Gnome as the new desktop after the dismissal of Unity. When the change was announced some months ago, I decided to move to Gnome and see if I liked it. I did.

It's a good idea to benefit from the small changes Ubuntu made to Gnome 3. Forking dash-to-dock was a great idea: untested updates (e.g. from upstream) won't break the desktop. I won't discuss settings you can change through the "Settings" application (Ubuntu Dock settings) or through "Tweaks":

$ sudo apt-get install gnome-tweak-tool

It's a good idea, though, to remove third-party extensions so you are sure you're using the ones provided and adapted by Ubuntu. You can always add new extensions later (the most important ones are even packaged).
$ rm -rf ~/.local/share/gnome-shell/extensions/*

Working with Gnome 3, and to a lesser extent with macOS, taught me that I prefer bars and docks to autohide. I never did in the past, but I feel that Gnome (and macOS) got this right. I certainly don't like the full-height dock: make it as small as needed. You can use the graphical "dconf Editor" tool to make the changes, but I prefer the safer command line (you won't make a change by accident).

To prevent the Ubuntu Dock from taking all the vertical space (i.e., most of it being just an empty bar):

$ dconf write /org/gnome/shell/extensions/dash-to-dock/extend-height false

A neat Dock trick: when hovering over an icon on the dock, cycle through the application's windows by scrolling (or using two fingers). Way faster than click + select:

$ dconf write /org/gnome/shell/extensions/dash-to-dock/scroll-action "'cycle-windows'"

I set the dock to autohide in the regular "Settings" application. An extension is needed to do the same for the Top Bar (you need to log out, then enable it through the "Tweaks" application):

$ sudo apt-get install gnome-shell-extension-autohidetopbar

Update: I don't install extensions any more besides the ones forked by Ubuntu. In my experience, they make the desktop unstable under Wayland. That said, I haven't seen crashes related to autohidetopbar. Still, I moved back to Xorg (option at the login screen) because Wayland feels less stable. The next Ubuntu release (18.04) will default to Xorg as well, meaning that at least until 18.10 Wayland won't be the default session.

Oh, just to be safe (e.g., in case you broke something), you can reset all the Gnome settings with:

$ dconf reset -f /

Have a look at the comments for some extra settings (that I personally do not use, but many do).

Some options that I don't use, but that people have asked me about (here and elsewhere):

Especially with the scroll setting above, you may want to switch only between windows of the same application in the active workspace. You can isolate workspaces with:

$ dconf write /org/gnome/shell/extensions/dash-to-dock/isolate-workspaces true

To hide the dock all the time, instead of only when needed, disable "intellihide":

$ dconf write /org/gnome/shell/extensions/dash-to-dock/intellihide false
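For convenience, the tweaks above can be collected in one small script. This is a sketch only: it assumes Ubuntu's dash-to-dock fork is installed and that `dconf` is on your PATH.

```shell
#!/bin/sh
# Apply the dash-to-dock tweaks from this post in one go.
set -e
BASE=/org/gnome/shell/extensions/dash-to-dock

dconf write "$BASE/extend-height" false              # no full-height dock
dconf write "$BASE/scroll-action" "'cycle-windows'"  # scrolling over an icon cycles its windows

# The optional settings from the last section, commented out by default:
#dconf write "$BASE/isolate-workspaces" true
#dconf write "$BASE/intellihide" false

# Read one value back to confirm it was applied:
dconf read "$BASE/extend-height"
```

Keeping the commands in a script also gives you a record of what you changed, which helps if you ever run `dconf reset -f /`.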

21 Feb 2018 12:47pm GMT

20 Feb 2018


Mattias Geniar: Update a docker container to the latest version

The post Update a docker container to the latest version appeared first on ma.ttias.be.

Here's a simple one, but if you're new to Docker, it's something you might have to look up. On this server, I run Nginx as a Docker container using the official nginx:alpine image.

I was running a fairly outdated version:

$ docker images | grep nginx
nginx    <none>              5a35015d93e9        10 months ago       15.5MB
nginx    latest              46102226f2fd        10 months ago       109MB
nginx    1.11-alpine         935bd7bf8ea6        18 months ago       54.8MB

In order to make sure I had the latest version, I ran pull:

$ docker pull nginx:alpine
alpine: Pulling from library/nginx
550fe1bea624: Pull complete
d421ba34525b: Pull complete
fdcbcb327323: Pull complete
bfbcec2fc4d5: Pull complete
Digest: sha256:c8ff0187cc75e1f5002c7ca9841cb191d33c4080f38140b9d6f07902ababbe66
Status: Downloaded newer image for nginx:alpine

Now, my local repository contains an up-to-date Nginx version:

$ docker images | grep nginx
nginx    alpine              bb00c21b4edf        5 weeks ago         16.8MB

To use it, you have to launch a new container based on that particular image. The currently running container will still be using the original (old) image.

$ docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED
4d9de6c0fba1        5a35015d93e9        "nginx -g 'daemon ..."   9 months ago

In my case, I re-created my HTTP/2 Nginx container like this:

$ docker stop nginx-container
$ docker rm nginx-container
$ docker run --name nginx-container \
    --net="host" \
    -v /etc/nginx/:/etc/nginx/ \
    -v /etc/ssl/certs/:/etc/ssl/certs/ \
    -v /etc/letsencrypt/:/etc/letsencrypt/ \
    -v /var/log/nginx/:/var/log/nginx/ \
    --restart=always \
    -d nginx:alpine

And the Nginx container upgrade is complete.
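The pull/stop/rm/run dance can be wrapped in a small helper so future updates are one command. A sketch, assuming the container name and run options from this post; adjust both to your own setup:

```shell
#!/bin/sh
# Recreate an nginx container on the latest nginx:alpine image.
set -e

docker pull nginx:alpine      # fetch the newest image
docker stop nginx-container   # stop the container running the old image
docker rm nginx-container     # remove it (config and logs live on the host via bind mounts)
docker run --name nginx-container \
    --net="host" \
    -v /etc/nginx/:/etc/nginx/ \
    -v /etc/ssl/certs/:/etc/ssl/certs/ \
    -v /etc/letsencrypt/:/etc/letsencrypt/ \
    -v /var/log/nginx/:/var/log/nginx/ \
    --restart=always \
    -d nginx:alpine
```

If you manage the container with docker-compose instead, `docker-compose pull && docker-compose up -d` achieves the same recreate-on-new-image behaviour.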


20 Feb 2018 7:13pm GMT

19 Feb 2018


Fabian Arrotin: Using newer PHP stack (built and distributed by CentOS) on CentOS 7

One thing to like about an Enterprise distribution is the stable API/ABI during the distro's lifetime. If an application works, you know it will continue to work.

But in parallel, one can't always run every application on that distro with the built-in components. I was personally faced with this recently, when I needed to migrate our bug tracker to a new version. Let's use that example to see how we can use "newer" PHP packages distributed through the distro itself.

The application that we use for https://bugs.centos.org is MantisBT, and by reading their requirements list it was clear that a CentOS 7 default setup would not work: as a reminder, the default php package for .el7 is 5.4.16, which is no longer supported by "modern" applications.

That's where SCLs (Software Collections) come to the rescue! You can install such collections without overwriting the base packages, and even run multiple parallel instances of such a "stack", depending on configuration.

Let's start simple with our MantisBT example: forget about the traditional php-* packages (including "php", which provides mod_php for Apache). It's up to you to leave those installed if you need them, but in my case I'll default to PHP 7.1.x for the whole vhost. Also worth knowing: I wanted to integrate PHP with the default httpd from the distro (to ease the configuration-management side, and to keep finding the .conf files at $usual_place).

The good news is that those collections are built, tested and released through our CentOS infra, so you don't have to care about anything else! (Kudos to the SCLo SIG!) You can see the available collections here.

So, how do we proceed? Easy! First, let's add the repository:

yum install centos-release-scl

And from that point, you can just install what you need. For our case, MantisBT needs php, php-xml, php-mbstring, php-gd (for the captcha, if you want to use it), and a DB driver, so php-mysql (if you target MySQL, of course). You just have to "translate" those into SCL packages: in our case, php becomes rh-php71 (a meta package), php-xml becomes rh-php71-php-xml and so on (one remark though: php-mysql becomes rh-php71-php-mysqlnd!).

So here we go :

yum install httpd rh-php71 rh-php71-php-xml rh-php71-php-mbstring rh-php71-php-gd rh-php71-php-soap rh-php71-php-mysqlnd rh-php71-php-fpm

As said earlier, we'll target the default httpd package from the distro, so we just have to "link" PHP and httpd. Remember that mod_php isn't available anymore; instead we'll use the php-fpm package (see rh-php71-php-fpm), so all requests are sent to that FastCGI Process Manager daemon.

Let's do this :

systemctl enable httpd --now
systemctl enable rh-php71-php-fpm --now
cat > /etc/httpd/conf.d/php-fpm.conf << EOF
AddType text/html .php 
DirectoryIndex index.php
<FilesMatch \.php$>
      SetHandler "proxy:fcgi://127.0.0.1:9000"
</FilesMatch>
EOF
systemctl restart httpd

And from this point, it's all basic: the application is now using the PHP 7.1.x stack. That's a basic "howto", but you can also run multiple versions in parallel and tune php-fpm itself. If you're interested, I'll let you read Remi Collet's blog post about this (thank you again, Remi!).
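Tuning php-fpm itself works much like with the base distro package, except that the collection keeps its files under its own prefix. A sketch assuming the usual rh-php71 SCL layout; double-check the paths on your system:

```shell
# SCL collections install under /opt/rh/ and keep config under /etc/opt/rh/.
# Edit the default FPM pool (pm.max_children, listen address, ...):
vi /etc/opt/rh/rh-php71/php-fpm.d/www.conf
systemctl restart rh-php71-php-fpm

# Confirm which PHP version the collection provides:
scl enable rh-php71 -- php -v
```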

Hope this helps; strangely, I couldn't easily find a simple howto for this, as "scl enable rh-php71 bash" wouldn't help a lot with httpd (which is probably the most common scenario).

19 Feb 2018 11:00pm GMT

Dries Buytaert: Twenty years later and I am still at my desk learning CSS

I was working on my POSSE plan when Vanessa called and asked if I wanted to meet for a coffee. Of course, I said yes. In the car ride over, I was thinking about how I made my first website over twenty years ago. HTML table layouts were still cool and it wasn't clear if CSS was going to be widely adopted. I decided to learn CSS anyway. More than twenty years later, the workflows, the automated toolchains, and the development methods have become increasingly powerful, but also a lot more complex. Today, you simply npm your webpack via grunt with vue babel or bower to react asdfjkl;lkdhgxdlciuhw. Everything is different now, except that I'm still at my desk learning CSS.

19 Feb 2018 9:24pm GMT

Mattias Geniar: Show IDN punycode in Firefox to avoid phishing URLs

The post Show IDN punycode in Firefox to avoid phishing URLs appeared first on ma.ttias.be.

Pop quiz: can you tell the difference between these 2 domains?

Both host a version of the popular crypto exchange Binance.

The second image is the correct one; the first is a phishing link with the letter 'n' replaced by 'ṇ' (n with a dot below, U+1E47). It's not a piece of dirt on your screen; it's an attempt to trick you into believing it's the official site.
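You can reproduce the punycode transformation yourself in one line. A sketch, assuming python3 is installed; 'biṇance' here is a hypothetical look-alike label, and real IDN labels carry an "xn--" prefix on top of this encoding:

```shell
# Encode the look-alike label with Python's punycode codec: the plain
# ASCII letters survive and the ṇ (U+1E47) is moved into the encoded
# tail after the '-', which makes the spoof obvious.
python3 -c "print('biṇance'.encode('punycode').decode())"
```

Once the non-ASCII character is pushed into the encoded tail, the name no longer resembles the brand it tried to imitate.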

Firefox has a very interesting option called network.IDN_show_punycode. You can enable it in about:config.

Once enabled, it'll make that phishing domain look like this:

Doesn't look that legit now anymore, does it?

I wish Chrome offered a similar option too; it could prevent quite a few phishing attempts.


19 Feb 2018 7:52pm GMT

Dries Buytaert: Get your HTTPS on

"Get your HTTPS on" because Chrome will mark all HTTP sites as "not secure" starting in July 2018. Chrome currently displays a neutral icon for sites that aren't using HTTPS, but starting with Chrome 68, the browser will warn users in the address bar.

Fortunately, HTTPS has become easier to implement through services like Let's Encrypt, which provides free certificates and aims to eliminate the complexity of setting up and maintaining HTTPS encryption.

19 Feb 2018 6:26pm GMT

18 Feb 2018


Frank Goossens: Introducing zytzagoo’s major changes for Autoptimize 2.4

TL;DR Autoptimize 2.4 will be a major change. Tomaš Trkulja (aka zytzagoo) has cleaned up and modernized the code significantly, making it easier to read and maintain, switched to the latest and greatest minifiers, and added automated testing. And as if that isn't enough, we're also adding new optimization options! The downside: we will be dropping support for PHP < 5.3 and for the "legacy minifiers". AO 2.4 will first be made available as a separate "Autoptimize Beta" plugin via GitHub and will see more releases/iterations with new options/optimizations there before being promoted to the official "Autoptimize".

Back in March 2015 zytzagoo forked Autoptimize, rewriting the CDN-replacement logic (and a lot of autoptimizeStyles::minify, really) and started adding automated testing. I kept an eye on the fork and later that year I contacted Tomaš via GitHub to see how we could collaborate. We have been in touch ever since; some of his improvements have already been integrated, and he is my go-to man to discuss coding best practices, bugfixes and security. Fast-forward to the nearby future: Autoptimize 2.4 will be based on Tomaš' fork and will include the following major changes:

These improvements will be released in a separate "Autoptimize Beta" plugin soon (albeit not on wordpress.org, as "beta" plugins are not allowed there). You can already download it from GitHub here. We will start adding additional optimization options there, releasing at a higher pace. The goal is to create a healthy beta-user pool, allowing us to move code from AO Beta to AO proper with more confidence. So what new optimization options would you like to see added to Autoptimize 2.4 and beyond? :-) [corrected 19/02; wordpress.org does not allow beta-plugins]


18 Feb 2018 2:18pm GMT

17 Feb 2018


Xavier Mertens: [SANS ISC] Malware Delivered via Windows Installer Files

I published the following diary on isc.sans.org: "Malware Delivered via Windows Installer Files":

For some days, I collected a few samples of malicious MSI files. MSI files are Windows installer files that users can execute to install software on a Microsoft Windows system. Of course, you can replace "software" with "malware". MSI files look less suspicious and they could bypass simple filters based on file extensions like "(com|exe|dll|js|vbs|…)". They also look less dangerous because they are Composite Document Files… [Read more]

[The post [SANS ISC] Malware Delivered via Windows Installer Files has been first published on /dev/random]

17 Feb 2018 11:48am GMT

16 Feb 2018


Dries Buytaert: My POSSE plan for evolving my site

My website plan

In an effort to reclaim my blog as my thought space and take back control over my data, I want to share how I plan to evolve my website. Given the incredible feedback on my previous blog posts, I want to continue the conversation and ask for feedback.

First, I need to find a way to combine longer blog posts and status updates on one site:

  1. Update my site navigation menu to include sections for "Blog" and "Notes". The "Notes" section would resemble a Twitter or Facebook livestream that catalogs short status updates, replies, interesting links, photos and more. Instead of posting these on third-party social media sites, I want to post them on my site first (POSSE). The "Blog" section would continue to feature longer, more in-depth blog posts. The front page of my website will combine both blog posts and notes in one stream.
  2. Add support for Webmention, a web standard for tracking comments, likes, reposts and other rich interactions across the web. This way, when users retweet a post on Twitter or cite a blog post, mentions are tracked on my own website.
  3. Automatically syndicate to 3rd party services, such as syndicating photo posts to Facebook and Instagram or syndicating quick Drupal updates to Twitter. To start, I can do this manually, but it would be nice to automate this process over time.
  4. Streamline the ability to post updates from my phone. Sharing photos or updates in real-time only becomes a habit if you can publish something in 30 seconds or less. It's why I use Facebook and Twitter often. I'd like to explore building a simple iOS application to remove any friction from posting updates on the go.
  5. Streamline the ability to share other people's content. I'd like to create a browser extension to share interesting links along with some commentary. I'm a small investor in Buffer, a social media management platform, and I use their tool often. Buffer makes it incredibly easy to share interesting articles on social media, without having to actually open any social media sites. I'd like to be able to share articles on my blog that way.

Second, as I begin to introduce a larger variety of content to my site, I'd like to find a way for readers to filter content:

  1. Expand the site navigation so readers can filter by topic. If you want to read about Drupal, click "Drupal". If you just want to see some of my photos, click "Photos".
  2. Allow people to subscribe by interest. Drupal 8 makes it easy to offer an RSS feed by topic. However, it doesn't look nearly as easy to allow email subscribers to receive updates by interest. MailChimp's RSS-to-email feature, my current mailing-list solution, doesn't seem to support this, and neither do the obvious alternatives.

Implementing this plan is going to take me some time, especially because it's hard to prioritize this over other things. Some of the steps I've outlined are easy to implement thanks to the fact that I use Drupal. For example, creating new content types for the "Notes" section, adding new RSS feeds and integrating "Blogs" and "Notes" into one stream on my homepage are all easy - I should be able to get those done in my next free evening. Other steps, like building an iPhone application, building a browser extension, or figuring out how to filter email subscriptions by topic, are going to take more time. Setting up my POSSE system is a nice personal challenge for 2018. I'll keep you posted on my progress - much of that might happen via short status updates, rather than on the main blog. ;)

16 Feb 2018 9:23am GMT

15 Feb 2018


Xavier Mertens: Imap2TheHive: Support of Attachments

I just published a quick update of my imap2thehive tool. Files attached to an email can now be processed and uploaded as an observable attached to a case. It is possible to specify which MIME types to process via the configuration file. The example below will process PDF & EML files:

[case]
files: application/pdf,message/rfc822

The script is available here.

[The post Imap2TheHive: Support of Attachments has been first published on /dev/random]

15 Feb 2018 9:11pm GMT

Les Jeudis du Libre: Mons, 15 March: ReactJS – an introduction to the library and prototyping a "clicker" game

This Thursday, 15 March 2018 at 7 p.m., the 67th Mons session of the Belgian Jeudis du Libre will take place.

The topic of this session: ReactJS - an introduction to the library and prototyping a "clicker" game

Theme: Internet|Graphics|sysadmin|community

Audience: general public|sysadmin|companies|students|…

Speaker: Michaël Hoste (80LIMIT)

Venue: the technical campus (ISIMs) of the Haute Ecole en Hainaut, Avenue V. Maistriau, 8a, Salle Académique, 2nd building (see this map on the ISIMs website, and here on the Openstreetmap map).

Participation is free and only requires you to register by name, preferably in advance, or at the entrance to the session. Please indicate your intention by registering via the page http://jeudisdulibre.fikket.com/. The session will be followed by a friendly drink.

The Jeudis du Libre in Mons also enjoy the support of our partners: CETIC, OpenSides, MeaWeb and Phonoid.

If you are interested in this monthly cycle, do not hesitate to consult the agenda and to subscribe to the mailing list in order to receive the announcements.

As a reminder, the Jeudis du Libre are meant to be spaces for exchange around Free Software topics. The Mons meetings take place every third Thursday of the month and are organized on the premises of, and in collaboration with, the Mons universities and colleges involved in computer-science education (UMONS, HEH and Condorcet), with the support of the non-profit LoLiGrUB, which is active in promoting free software.

Description: JavaScript development is a real jumble of technical solutions. Not a month goes by without a supposedly revolutionary new framework appearing.

When ReactJS came out in 2013, backed by a Facebook not accustomed to releasing open-source solutions, few expected to see a real challenger arrive. Its peculiarity of mixing HTML and JavaScript code within a single file even went against common sense. Yet, four years later, this solution remains ideal for delivering ever more fluid and complex web experiences. ReactJS and its underlying mechanisms have brought a breath of fresh air to the front-end web world.

During this session, we will go over the history of ReactJS and explain why this library, although minimalist, approaches the problems of the web in a sensible way.

Once the logical foundations have been covered, we will put our hands on the keyboard to develop a concrete example in the form of a "clicker" game (Clicker Heroes, Cookie Clicker, etc.).

Short bio: Michaël Hoste holds a degree in Computer Science from the University of Mons. In 2012 he founded 80LIMIT, a web and mobile application development company focused on the Ruby and JavaScript languages. He likes to split his time between client projects and internal Software-as-a-Service projects. He also chairs the Creative Monkeys non-profit, which allows several companies to flourish in an open space in the centre of Mons.

15 Feb 2018 6:23am GMT

14 Feb 2018


Dries Buytaert: Reclaiming my blog as my thought space

My blog as my thought space

Last week, I shared my frustration with using social media websites like Facebook or Twitter as my primary platform for sharing photos and status updates. As an advocate of the open web, this has bothered me for some time so I made a commitment to prioritize publishing photos, updates and more to my own site.

I'm excited to share my plan for how I'd like to accomplish this, but before I do, I'd like to share two additional challenges I face on my blog. These struggles factor into some of the changes I'm considering implementing, so I feel compelled to share them with you.

First, I've struggled to cover a wide variety of topics lately. I've been primarily writing about Drupal, Acquia and the Open Web. However, I'm also interested in sharing insights on startups, investing, travel, photography and life outside of work. I often feel inspired to write about these topics, but over the years I've grown reluctant to expand outside of professional interests. My blog is primarily read by technology professionals - from Drupal users and developers to industry analysts and technology leaders - and in my mind, they do not read my blog to learn about a wider range of topics. I'm conflicted, because I would like my blog to reflect both my personal and professional interests.

Secondly, I've been hesitant to share short updates, such as a two-sentence announcement about a new Drupal feature or an Acquia milestone. I used to publish these kinds of short updates quite frequently. It's not that I don't want to share them anymore; it's that I struggle to post them. Every time I publish a new post, it goes out to the more than 5,000 people who subscribe to my blog by email. I've been reluctant to share short status updates because I don't want to flood people's inboxes.

Throughout the years, I worked around these two struggles by relying on social media; while I used my blog for in-depth posts specific to my professional life, I used social media for short updates, sharing photos and starting conversations about a wider variety of topics.

But I never loved this division.

I've always written for myself, first. Writing pushes me to think, and it is the process I rely on to flesh out ideas. This blog is my space to think out loud, and to start conversations with people considering the same problems, opportunities or ideas. In the early days of my blog, I never considered restricting my blog to certain topics or making it fit specific editorial standards.

Om Malik published a blog post last week that echoes my frustration. For Malik, blogs are thought spaces: a place for writers to share original opinions that reflect "how they view the world and how they are thinking". As my blog has grown, it has evolved, and along the way it has become less of a public thought space.

My commitment to implementing a POSSE approach on my site has brought these struggles to the forefront. I'm glad it did because it requires me to rethink my approach and to return to my blogging roots. After some consideration, here is what I want to do:

  1. Take back control of more of my data; I want to share more of my photos and social media data on my own site.
  2. Find a way to combine longer in-depth blog posts and shorter status updates.
  3. Enable readers and subscribers to filter content based on their own interests so that I can cover a larger variety of topics.

In my next blog post, I plan to outline more details of how I'd like to approach this. Stay tuned!

14 Feb 2018 1:33pm GMT

10 Feb 2018


Philip Van Hoof: Selling at a loss

Today I want to draw attention to a Belgian law on selling at a loss. Our country prohibits, by law, every merchant from selling a good at a loss. That is the rule, in our Belgium.

That rule has exceptions (rightly so). The definition of an exception means it is not the rule: selling at a loss is allowed in Belgium only as an exception:

I suspect our law exists to fight unfair competition. A merchant thus cannot sell a given product (e.g. a game console) at a loss in order to gain market dominance for another product in his range (e.g. games), for example with the aim of keeping competitors out of the market.

In my view it follows that, should a game-console producer sell a console at a loss, this would be illegal in Belgium.

Let's assume that game-console producers that are active in (sales in) Belgium follow Belgian law. It then follows that they do not sell their game consoles at a loss. So they make a profit. If they did not, they would have to meet the exceptional conditions in the (previously mentioned) Belgian law that allow them to make a loss. In all other cases they would be acting unlawfully. That is the Belgian law.

This means that the purchase of such a game console, as a Belgian consumer, implies that the producer and seller have made a certain profit on my purchase. So there is no loss involved. Unless the producer or seller in Belgium is involved in unlawful practices.

Let's assume that, after purchase, we want to run other software on such a console. Then the producer/seller cannot claim that their profit is made on things that would be sold afterwards (e.g. original software).

In other words, their profit has already been made. On the game console itself. If not, the producer or seller would be acting unlawfully (in Belgium). We assume this is not the case, because otherwise the good could not have been sold. The good was sold. In accordance with Belgian law (right?).

If not, then the producer and/or seller is responsible. In no case the consumer.

10 Feb 2018 11:57pm GMT

09 Feb 2018


Xavier Mertens: Viper and ReversingLabs A1000 Integration

A quick blog post about a module that I wrote to interconnect the malware analysis framework Viper and the malware analysis platform A1000 from ReversingLabs.

The module can perform two actions at the moment: submitting a new sample for analysis and retrieving the analysis results (classification):

viper sample.exe > a1000 -h
usage: a1000 [-h] [-s] [-c]

Submit files and retrieve reports from a ReversingLab A1000

optional arguments:
-h, --help show this help message and exit
-s, --submit Submit file to A1000
-c, --classification Get classification of current file from A1000
 
viper sample.exe > a1000 -s
[*] Successfully submitted file to A1000, task ID: 393846

viper sample.exe > a1000 -c
[*] Classification
- Threat status : malicious
- Threat name : Win32.Trojan.Fareit dw eldorado
- Trust factor : 5
- Threat level : 2
- First seen : 2018-02-09T13:03:26Z
- Last seen : 2018-02-09T13:07:00Z

The module is available on my GitHub repository.

[The post Viper and ReversingLabs A1000 Integration has been first published on /dev/random]

09 Feb 2018 1:48pm GMT