30 Oct 2020

Planet Grep

Xavier Mertens: [SANS ISC] Quick Status of the CAA DNS Record Adoption

I published the following diary on isc.sans.edu: "Quick Status of the CAA DNS Record Adoption":

In 2017, we already published a guest diary about "CAA" or "Certification Authority Authorization". I was curious about the status of this technique and its adoption level in 2020. Has it been adopted massively since that diary? The initial RFC describing CAA was issued in 2013 (RFC 6844); it has been obsolete since 2019, replaced by RFC 8659. Just a quick reminder about the purpose of this DNS record: it is used to specify which certificate authorities (CAs) are allowed to issue certificates for a domain… [Read more]
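For reference, a CAA record is an ordinary DNS record. In BIND zone-file syntax it could look like this (the domain and the CA below are purely illustrative):

```
; Only Let's Encrypt may issue certificates for example.com
example.com.  IN  CAA  0 issue "letsencrypt.org"
; Optionally, report policy violations to this address
example.com.  IN  CAA  0 iodef "mailto:security@example.com"
```

You can check the CAA policy a domain publishes with `dig +short CAA example.com`.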

The post [SANS ISC] Quick Status of the CAA DNS Record Adoption appeared first on /dev/random.

30 Oct 2020 11:15am GMT

Xavier Mertens: To Automate or To Reduce the Noise?

If you have been following my blog for a while, you probably noticed that I'm not very active with new content. Most articles are published through the SANS ISC website, but that does not mean I have nothing to publish. It's just a question of time, as for many of us!

Recently, I listened to an interesting conversation in a SOC ("Security Operation Center"). The topic was assigning more development time to the SOC engineers to process the backlog of pending alerts. In a SOC, it's important to rotate tasks across the available people: first-line triage, ongoing investigations, hunting, and… development/maintenance of the platform (tools, processes, playbooks, etc.).

Automation is a key element in processing the data collected by the SOC. We are all lazy people, and recurring tasks are boring to perform. After all, we are facing huge amounts of data generated by computers, so why not use computers to handle them? I like automation! I like the fact that you can automate everything so that, while drinking your morning coffee, you review what was processed during the night and, more importantly, what deserves further investigation. A perfect example of this process is malware triage. You collect samples from multiple sources, parse them, extract the payloads, perform basic checks, label them, and get notified only if something "juicy" is detected:

Example of Automation Flow

Another scenario where more automation can be implemented is the processing of alerts. When an alert is received, you have to acknowledge it and start investigating. Let's take a practical example with TheHive, a popular tool used in SOCs. When a relevant alert is received, a case is created with many observables (IP addresses, hashes, domains, files, etc.). A key feature of TheHive is its link to a Cortex instance, which lets us run observables through a long list of analyzers that query online services. For example, an IP address can be checked against blocklists and passive DNS services. The first automation that can be implemented here is to automatically run analyzers against freshly added observables. TheHive can do this perfectly well through webhooks. Great! When the SOC engineer opens the case, the observables have already been enriched:

Example of IP address Enrichment

This kind of automation is nice because it speeds up investigations, optimizes the engineer's time, and avoids "alert fatigue".
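The webhook-driven enrichment described above can be sketched in a few lines of Python. Note that the payload shape and the analyzer names below are assumptions for illustration, not TheHive's or Cortex's exact schema; check your instance's documentation before wiring this up:

```python
# Map observable types to the analyzers we want to launch automatically
# (hypothetical analyzer names, for illustration only).
AUTO_ANALYZERS = {
    "ip":     ["DShield_lookup", "PassiveDNS"],
    "domain": ["PassiveDNS"],
    "hash":   ["VirusTotal_GetReport"],
}

def jobs_for_event(event):
    """Given a webhook event for a freshly created observable, return the
    (analyzer, dataType, data) jobs to submit to Cortex.

    The event shape (objectType/operation/object) is an assumption modelled
    on TheHive-style webhook payloads."""
    if event.get("objectType") != "case_artifact" or event.get("operation") != "Creation":
        return []
    obs = event.get("object", {})
    dtype, data = obs.get("dataType"), obs.get("data")
    return [
        {"analyzer": analyzer, "dataType": dtype, "data": data}
        for analyzer in AUTO_ANALYZERS.get(dtype, [])
    ]
```

In a real deployment, the webhook receiver would POST each returned job to the Cortex API (for instance via the cortex4py client) so the results are attached to the case before anyone opens it.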

Problems arise when an extension of automation is suggested to cope with a (too) large amount of data. If you need more automation just to process the alerts received daily in your SOC, then you're facing a deeper issue. If one rule in your SIEM generates 150 alerts/day (just an example), the rule is probably not optimized and should be reviewed.

Do we need to automate or to reduce the noise? Both! Automation is key and must be used, but noise reduction must be applied as early as possible in the data flow. Automation should not be used as a counter-measure to manage a flood of alerts…

The post To Automate or To Reduce the Noise? appeared first on /dev/random.

30 Oct 2020 10:25am GMT

29 Oct 2020

Planet Debian

Ulrike Uhlig: Better handling emergencies

We all know these situations when we receive an email asking Can you check the design of X, I need a reply by tonight. Or an instant message: My website went down, can you check? Another email: I canceled a plan at the hosting company, can you restore my website as fast as possible? A phone call: The TLS certificate didn't get updated, and now we can't access service Y. Yet another email: Our super important medical advice website is suddenly being censored in country Z, can you help?

Everyone knows those messages that have "URGENT" in capital letters in the email subject. It might be that some of them really are urgent. Others are the written signs of someone having a hard time properly planning their own work and passing their delays on to someone who comes later in the creation or production chain. And others again come from people who are overworked and try to delegate some of their tasks to a friendly soul who is likely to help.

How emergencies create more emergencies

In the past, my first reflex when I received an urgent request was to start rushing into solutions. This happened partly out of empathy, partly because I like to be challenged into solving problems, and I'm fairly good at that. This has proven to be unsustainable, and here is why.

Emergencies create unplanned work

The first issue is that emergencies create a lot of unplanned work, which in turn means not getting other, scheduled things done. This can create a backlog and end up in working late or on weekends.

Emergencies can create a permanent state of exception

Unplanned work can also create a lot of frustration, out of the feeling of not getting the things done that one planned to do. We might even get a feeling of being nonautonomous (in German I would say fremdbestimmt, which roughly translates to "being directed by others").

In the long term, this can generate unsustainable situations: higher workloads, and burnout. When working in a team of several people, A might have to take over the work of B because B doesn't have enough capacity. Then A gets overloaded in turn, and C and D have to take over A's work. Suddenly the team is stuck in a permanent state of exception. This state of exception will produce more backlog. The team might start to deprioritize social issues in favour of getting technical things done. They might not be able to recruit new people anymore because they have no capacity left to onboard newcomers.

One emergency can result in a variety of emergencies for many people

The second issue produced by urgent requests is that if I cannot solve the initial emergency by myself, I might try to involve colleagues, other people who are skilled in the area, or people who work in another relevant organization to help with this. Suddenly, the initial emergency has become my emergency as well as the emergency of a whole bunch of other people.

A sidenote about working with friends

This might be less of an issue in a classical work setup than in a situation where a bunch of freelancers work together, or in setups where work and friendship are intertwined. This is a problem because the boundaries between the friend and worker roles, and the expectations that go along with these roles, can easily get confused. If a colleague asks me to help with task X, I might say no; if a friend asks, I might be less likely to say no.

What I learnt about handling emergencies

I came up with some guidelines that help me to better handle emergencies.

Plan for unplanned work

It doesn't matter, and it doesn't help, to distinguish whether urgent requests are legitimate or whether they come from people who have not done their homework on time.

What matters is to make one's weekly todo list sustainable. After reading Making Work Visible by Dominica DeGrandis, I understood the need to add free slots for unplanned work to one's weekly schedule. Slots for unplanned work can take up to 25% of the total work time!

Take time to make plans

Now that there are some free slots to handle emergencies, one can take some time to think when an urgent request comes in. A German saying proposes to wait and have some tea ("abwarten und Tee trinken"). I think this is actually really good advice, and it works for any non-obvious problem. Sit down and let the situation sink in. Have a tea, take a shower, go for a walk. It's never that urgent. Really, never. If possible, one can talk about the issue with another person, rubber-duck style. Then one can make a plan on how to address the emergency properly; the solution may turn out to be easier than first thought.

Affirming boundaries: Saying no

Is the emergency that I'm asked to solve really my problem? Or is someone trying to involve me because they know I'm likely to help? Take a deep breath and think about it. No? It's not my job, not my role? I have no time for this right now? I don't want to do it? Maybe I'm not even paid for it? A colleague is pushing my boundaries to get some task on their own todo list done? Then I might want to say no. I can't help with this. or I can help you in two weeks. I don't need to give a reason. No. is a sentence. And: Saying no doesn't make me an arse.

Affirming boundaries: Clearly defining one's role

Clearly defining one's role is something that is often overlooked. In many discussions I have with friends, it appears that this is a major cause of overwork and underpayment. Lots of people are skilled, intelligent, and curious, and easily get challenged into putting on their superhero dress. But they are certainly not the only person who can help, even if an urgent request makes them think that at first.

To clearly define our role, we need to make clear which part of the job is our work, and which part needs to be done by other people. We should stop trying to accommodate people and their requests to the detriment of our own sanity. You're a language interpreter and are being asked to mediate a bilingual conflict between the people you are interpreting for? It's not your job. You're the graphic designer for a poster, but the text you've been given is not good enough? Send back a recommendation to change the text; don't make these changes yourself: it's not your job. But you can and want to do this yourself, and it would make your client's life easier? Then ask to get paid for the extra time, and make sure to renegotiate your deadline!

Affirming boundaries: Defining expectations

Along with our role, we need to define expectations: in which timeframe am I willing to do the job? Under which contract, which agreement, which conditions? For which payment?

People who work in a salaried office job generally have a work contract in which their role and the expectations that come with it are clearly defined. Nevertheless, I hear from friends that their superiors regularly try to make them do tasks that are not part of their role definition. So, here too, roles and expectations sometimes need to be renegotiated, and the boundaries of these roles need to be clearly affirmed.

Random conclusive thoughts

If you've read until here, you might have experienced similar things.

Or, on the contrary, maybe you're already good at communicating your boundaries and people around you have learnt to respect them? Congratulations.

In any case, for improving one's own approach to such requests, it can be useful to find out which inner dynamics are at play when we interact with other people. Additionally, it can be useful to understand the differences between Asker and Guesser culture:

when an Asker meets a Guesser, unpleasantness results. An Asker won't think it's rude to request two weeks in your spare room, but a Guess culture person will hear it as presumptuous and resent the agony involved in saying no. Your boss, asking for a project to be finished early, may be an overdemanding boor - or just an Asker, who's assuming you might decline. If you're a Guesser, you'll hear it as an expectation.

Askers should also be aware that there might be Guessers in their team. It can help to define clear guidelines about making requests (When do I expect an answer? Under which budget/contract/responsibility does the request fall? What other task can be put aside to handle the urgent one?). Last but not least, Making Work Visible has a lot of other proposals on how to make unplanned work visible and then deal with it.

29 Oct 2020 11:00pm GMT

Planet Grep

Lionel Dricot: Readings 6: An epanadiplosis on fear, security, and resistance

A few links and reading suggestions to reflect on our relationship to the other, to fear, and to our desire for security.

Fear and fiction

Although I rarely get into fantasy, I am devouring a preview of "Adjaï aux mille visages", whose crowdfunding campaign ends soon (and which badly needs your support). I warmly recommend it!

https://fr.ulule.com/adjai/

By the way, if you missed the Printeurs campaign, you can use this one to catch up: €20 for the paper version of Printeurs plus the electronic versions of Adjaï and of Printeurs, volume 2 (which I still have to write). You will also have the option of receiving the chapters of Printeurs 2 by email as they are written!

https://fr.ulule.com/adjai/?reward=660832

Under the cover of a thrilling story about an amoral thief who changes appearance and sex at will, Adjaï explores the concepts of identity, gender fluidity, and parenthood. But another theme comes up regularly (notably in a scene that will make seasoned role-players smile): the fear of the other, the fear of the unknown, and insecurity.

A completely irrational fear that pushes us toward the most dangerous behaviours. A fear most often stoked by fiction writers, because… it makes it easier to move the story forward.

https://slate.com/technology/2020/10/cory-docotorow-sci-fi-intuition-pumps.html

One example among many: our fear of nuclear power is completely absurd, especially when you compare the death toll and the damage caused by the alternatives (coal in particular, but solar and wind also remain far more dangerous than nuclear). Yet it is a fear that was fed by an impressive amount of science fiction in the wake of Hiroshima.

https://sceptom.wordpress.com/2014/08/25/la-vraie-raison-pour-laquelle-certains-detestent-le-nucleaire-brave-new-climate/

As Cory Doctorow points out above, science-fiction authors carry a real responsibility. Ursula Le Guin drove the point home a few years ago: we need writers who know the difference between writing and making a product that sells well. I think Adjaï is a magnificent example of a book that opens us to difference.

https://parkerhiggins.net/2014/11/will-need-writers-can-remember-freedom-ursula-k-le-guin-national-book-awards/

Intuition and fear

It must be said that it is hard to fight an intuitive fear (flying) while risking our lives every day without a second thought (driving). That is the trap of intuition.

https://ploum.net/mon-second-velo-et-le-piege-de-lintuition/

Karl Popper sums it up magnificently in his "Plaidoyer pour l'indéterminisme" ("A plea for indeterminism").

"I consider intuition and imagination extremely important: we need them to invent a theory. But intuition, precisely because it can persuade and convince us of the truth of what we have grasped through it, can mislead us very seriously: it is a help of inestimable value, but also an aid that is not without danger, for it tends to make us uncritical. We must always receive it with respect and gratitude, and also with an effort to criticize it very severely."

Security and intuition

Intuition is rarely challenged as much as when we talk about security.

To try to analyze security measures, I put together a framework I call the "3 pillars". The principle is simple: any measure that does not improve one of the pillars is not a security measure. It is at best useless, at worst harmful.

https://ploum.net/les-3-piliers-de-la-securite/

The most striking example: soldiers in the streets? A phenomenon I mocked mercilessly in the short story "Petit manuel d'antiterrorisme".

https://ploum.net/petit-manuel-dantiterrorisme/

Another absurd measure that does more harm than good? Airport security checks.

https://www.vox.com/2016/5/17/11687014/tsa-against-airport-security

Oh, and you know what? The spying on all our communications for more than a decade, surveillance exposed by Snowden, for which he still has to live in hiding today. Well, it was useless: not a single terrorist plot was stopped.

https://tutanota.com/blog/posts/nsa-phone-surveillance-illegal-expensive/

Abuses and security

Useless? That's saying it a bit quickly. Because it is notably thanks to this programme and this post-9/11 political will that Google exists. As Shoshana Zuboff recounts in "The Age of Surveillance Capitalism", Google's practices, which prefigured the analysis of data for advertising purposes, were nearly outlawed in 2001. Then 9/11 happened, and the American intelligence services turned to Google, saying: "We will let you spy on everyone, on the condition that you help us do the same."

https://en.wikipedia.org/wiki/The_Age_of_Surveillance_Capitalism

All of it built on research funded by the very same public institutions a few years earlier.

https://qz.com/1145669/googles-true-origin-partly-lies-in-cia-and-nsa-research-grants-for-mass-surveillance/

In a completely different register, the comic book "Inhumain", by the Bajram/Mangin/Rochebrune trio, illustrates how absolute security is the worst of nightmares. Space navigators crash on a planet occupied by a friendly, benevolent human tribe that is totally submissive to the "Grand Tout". Without being a masterpiece, it is a pleasant read (I'm a simple man: I see Bajram on the cover, I buy it, no questions asked).

https://www.bajram.com/livres/inhumain/

Resistance and abuses

To read is to resist. Perhaps that is why the platforms that genuinely promote reading are now illegal. Fortunately, Orel Auwen offers you a little guided tour.

https://serveur410.com/dans-lombre-dinternet-des-bibliotheques-illegales/

I wholeheartedly hope that Printeurs and Adjaï aux mille visages will soon be available on these platforms!

Have a good end of the week, and happy reading. Because if you don't take the time to read, you don't have the time to resist.

Photo by Apollo Reyes on Unsplash

I am @ploum, an engineer and writer. Printeurs, my latest science-fiction novel, is available for preorder. Subscribe to receive my posts, share them around you, and feel free to support me on Paypal. Your support, even symbolic, means a lot to me. Thank you!


This text is published under the CC-By BE license.

29 Oct 2020 3:01pm GMT

Planet Debian

Norbert Preining: Deleting many files from an S3 bucket

So we found ourselves needing to delete a considerable number of files (around 500,000, amounting to 1.6 TB) from an S3 bucket. With the list of files in hand, my first shot was calling

aws s3 rm s3://BUCKET/FILE

for each file. That wasn't the best idea, I have to say: first of all, it makes 500,000 requests, and it takes a looong time. And this command does not allow passing in multiple files.

Fortunately, there is aws s3api delete-objects, which takes a JSON input and can delete multiple files:

aws s3api delete-objects --bucket BUCKET --delete '{"Objects": [ { "Key": "FILE1" }, { "Key": "FILE2" } ... ]}'

That did help, and with a bit of magic from bash (mapfile, which can read lines from stdin in batches) and jq, in the end it was a matter of some 20 minutes or so:

# read 500 filenames at a time and delete each batch in a single API call
while mapfile -t -n 500 ary && ((${#ary[@]})); do
        # build {"Objects": [{"Key": "..."}, ...]} from the current batch
        objdef=$(printf '%s\n' "${ary[@]}" | jq -nR '{Objects: (reduce inputs as $line ([]; . + [{"Key":$line}]))}')
        aws s3api --no-cli-pager delete-objects --bucket BUCKET --delete "$objdef"
done < files-to-be-deleted

This reads 500 filenames at a time and reformats them with jq into the proper JSON format: reduce inputs is a jq filter that iterates over the input lines in a map/reduce step. In this case, we start from an empty array and add a new Key/filename object for each line. Finally, each batch is sent to AWS with the above API call.
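For those who prefer Python over bash and jq, the same batching idea can be sketched with boto3 (the AWS SDK for Python, assuming it is installed and credentials are configured); note that delete_objects accepts at most 1000 keys per request. The helper names below are my own:

```python
def batched(keys, size=500):
    """Yield successive chunks of at most `size` keys."""
    for i in range(0, len(keys), size):
        yield keys[i:i + size]

def build_delete_payload(chunk):
    """Build the Delete= argument for s3.delete_objects()."""
    return {"Objects": [{"Key": k} for k in chunk], "Quiet": True}

def delete_all(bucket, keys):
    """Delete all given keys from the bucket, 500 per API call."""
    import boto3
    s3 = boto3.client("s3")
    for chunk in batched(keys, 500):
        s3.delete_objects(Bucket=bucket, Delete=build_delete_payload(chunk))
```

The "Quiet": True flag asks S3 to report only errors instead of echoing every deleted key back.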

Puuuh, 500,000 files and 1.6 TB less, in 20 minutes.

29 Oct 2020 3:31am GMT

28 Oct 2020

Planet Debian

Daniel Lange: Git shared hosting quirk

Show https://github.com/torvalds/linux/blob/b4061a10fc29010a610ff2b5b20160d7335e69bf/drivers/hid/hid-samsung.c#L113-L118 to a friend.

Oops, eh? Yep, Linux has been backdoored.

Well, or not.

Konstantin Ryabitsev explains it nicely in a cgit mailing list email:

It is common for git hosting environments to configure all forks of the same repo to use an "object storage" repository. For example, this is what allows git.kernel.org's 600+ forks of linux.git to take up only 10GB on disk as opposed to 800GB. One of the side-effects of this setup is that any object in the shared repository can be accessed from any of the forks, which periodically confuses people into believing that something terrible has happened.

The hack was discussed on Github in Dec 2018 when it was discovered. I forgot about it again but Konstantin's mail brought the memory back and I think it deserves more attention.

I'm sure putting some illegal content into a fork and sending a made-up "blob" URL to law enforcement would go quite far. Good luck explaining the issue. "Yes, this is my repo" but "no, no, that's not my data" ... "yes, it is my repo but not my data" ... "no, we don't want that data either, really" ... "but, but there is nothing we can do, we host on github..." [1]


  [1] Actually, there is something you can do: making a repo private takes it out of the shared "object storage". You can make it public again afterwards. This seems to work, at least for now.

28 Oct 2020 9:30pm GMT

08 Nov 2011

fosdem - Google Blog Search

papupapu39 (papupapu39)'s status on Tuesday, 08-Nov-11 00:28 ...

papupapu39 · http://identi.ca/url/56409795 #fosdem #freeknowledge #usamabinladen · about a day ago from web.

08 Nov 2011 12:28am GMT

05 Nov 2011

fosdem - Google Blog Search

Write and Submit your first Linux kernel Patch | HowLinux.Tk ...

FOSDEM (Free and Open Source Development European Meeting) is a European event centered around Free and Open Source software development. It is aimed at developers and all interested in the Free and Open Source news in the world. ...

05 Nov 2011 1:19am GMT

03 Nov 2011

fosdem - Google Blog Search

Silicon Valley Linux Users Group – Kernel Walkthrough | Digital Tux

FOSDEM (Free and Open Source Development European Meeting) is a European event centered around Free and Open Source software development. It is aimed at developers and all interested in the Free and Open Source news in the ...

03 Nov 2011 3:45pm GMT

26 Jul 2008

FOSDEM - Free and Open Source Software Developers' European Meeting

Update your RSS link

If you see this message in your RSS reader, please correct your RSS link to the following URL: http://fosdem.org/rss.xml.

26 Jul 2008 5:55am GMT

25 Jul 2008

FOSDEM - Free and Open Source Software Developers' European Meeting

Archive of FOSDEM 2008

These pages have been archived.
For information about the latest FOSDEM edition please check this url: http://fosdem.org

25 Jul 2008 4:43pm GMT

09 Mar 2008

FOSDEM - Free and Open Source Software Developers' European Meeting

Slides and videos online

Two weeks after FOSDEM, we are proud to publish most of the slides and videos from this year's edition.

All of the material from the Lightning Talks has been put online. We are still missing some slides and videos from the Main Tracks but we are working hard on getting those completed too.

We would like to thank our mirrors: HEAnet (IE) and Unixheads (US) for hosting our videos, and NamurLUG for quick recording and encoding.

The videos from the Janson room were live-streamed during the event and are also online on the Linux Magazin site.

We are having some synchronisation issues with Belnet (BE) at the moment. We're working to sort these out.

09 Mar 2008 3:12pm GMT