19 Oct 2017

Planet Debian

Clint Adams: Scuttlebutt at the precinct

So who's getting patchwork into Debian?

Posted on 2017-10-19
Tags: bamamba

19 Oct 2017 10:10pm GMT

Lior Kaplan: Hacktoberfest 2017 @ Tel Aviv

I gave my "Midburn - creating an open source community" talk in Hacktoberfest 2017 @ Tel Aviv. This is the local version of an initiative by DigitalOcean and GitHub.

I was surprised by the vibe that both the location and the organizers gave the event, and by the fact that they could easily arrange t-shirts, pizzas and beer.

Looking at the GitHub search, it seems the worldwide event brings many new contributions. I think we have something to learn about how to run a mass event like that, especially when it crosses both project and country boundaries (i.e. not a local bug squashing party).


Filed under: Open source businesses

19 Oct 2017 9:22pm GMT

Steinar H. Gunderson: Introducing Narabu, part 2: Meet the GPU

Narabu is a new intraframe video codec. You may or may not want to read part 1 first.

The GPU, despite being far more flexible than it was fifteen years ago, is still a very different beast from your CPU, and not all problems map well to it performance-wise. Thus, before designing a codec, it's useful to know what our platform looks like.

A GPU has lots of special functionality for graphics (well, duh), but we'll be concentrating on the compute shader subset in this context, i.e., we won't be drawing any polygons. Roughly, a GPU (as I understand it!) is built up about as follows:

A GPU contains 1-20 cores; NVIDIA calls them SMs (streaming multiprocessors), Intel calls them subslices. (Trivia: A typical mid-range Intel GPU contains two cores, and thus is designated GT2.) One such core usually runs the same program, although on different data; there are exceptions, but typically, if your program can't fill an entire core with parallelism, you're wasting energy. Each core, in addition to tons (thousands!) of registers, also has some "shared memory" (sometimes also called "local memory", although that term is overloaded), typically 32-64 kB, which you can think of in two ways: either as a sort-of explicit L1 cache, or as a way to communicate internally within a core. Shared memory is a limited, precious resource in many algorithms.

Each core/SM/subslice contains about 8 execution units (Intel calls them EUs; NVIDIA and AMD call them something else) and some memory access logic. These multiplex a bunch of threads (say, 32) and run them in a round-robin-ish fashion. This means that a GPU can handle memory stalls much better than a typical CPU, since it has so many streams to pick from; even though each thread runs in-order, it can just kick off an operation and then go to the next thread while the previous one is working.

Each execution unit has a bunch of ALUs (typically 16) and executes code in a SIMD fashion. NVIDIA calls these ALUs "CUDA cores", AMD calls them "stream processors". Unlike on a CPU, this SIMD has full scatter/gather support (although sequential access, especially in certain patterns, is much more efficient than random access), lane enable/disable so it can work with conditional code, etc. The typically fastest operation is a 32-bit float muladd; usually that's single-cycle. GPUs love 32-bit FP code. (In fact, in some GPU languages, you won't even have 8-, 16- or 64-bit types. This is annoying, but not the end of the world.)

The vectorization is not exposed to the user in typical code (GLSL has some vector types, but they're usually just broken up into scalars, so that's a red herring), although in some programming languages you can swizzle the SIMD lanes internally to take advantage of it (there are also schemes for broadcasting bits by "voting", etc.). However, it is crucially important to performance: if you have divergence within a warp, the GPU needs to execute both sides of the if. So less divergent code is good.
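To make the cost of divergence concrete, here is a toy Python model (my own illustration, not any vendor's actual hardware behavior; `run_warp` and the branch are invented for the example): a warp issues the "then" side if any lane takes it and the "else" side if any lane skips it, so a divergent warp pays for both.

```python
# Toy model of SIMD lane masking: a warp steps through every branch side
# that at least one lane needs, with a mask deciding which lanes commit.
def run_warp(values):
    """Run `x = x * 2 if x is odd else x + 1` over one warp.

    Returns the results plus how many instruction slots the warp issued."""
    mask = [v % 2 == 1 for v in values]  # lanes taking the 'then' side
    out = list(values)
    issued = 0
    if any(mask):                        # 'then' side runs if any lane needs it
        issued += 1
        out = [v * 2 if m else v for v, m in zip(out, mask)]
    if not all(mask):                    # 'else' side runs if any lane skipped
        issued += 1
        out = [v if m else v + 1 for v, m in zip(out, mask)]
    return out, issued

_, uniform_cost = run_warp([1] * 32)           # all lanes agree
_, divergent_cost = run_warp(list(range(32)))  # lanes disagree
print(uniform_cost, divergent_cost)  # 1 2
```

Real GPUs do this with hardware execution masks rather than Python lists, but the accounting is the same: a fully divergent branch roughly doubles the instruction count for that warp.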

Such a SIMD group is called a warp by NVIDIA (I don't know if the others have names for it). NVIDIA's SIMD/warp width is always 32; AMD's equivalent, the wavefront, is 64. Intel supports 4-32 (the compiler will autoselect based on a bunch of factors), although 16 is the most common.

The upshot of all of this is that you need massive amounts of parallelism to get useful performance out of a GPU. A rule of thumb is that if you could have launched about a thousand threads for your problem on a CPU, it's a good fit for a GPU, although this is of course just a guideline.

There are tons of APIs available for writing compute shaders. There's CUDA (NVIDIA-only, but the dominant player), D3D compute (Windows-only, but multi-vendor), OpenCL (multi-vendor, but highly variable implementation quality), OpenGL compute shaders (all platforms except macOS, which has too-old drivers), Metal (Apple-only) and probably some that I forgot. I've chosen to go for OpenGL compute shaders since I already use OpenGL shaders a lot, and this saves on interop issues. CUDA is probably more mature, but my laptop is Intel. :-) No matter which one you choose, the programming model looks very roughly like this pseudocode:

for (size_t workgroup_idx = 0; workgroup_idx < NUM_WORKGROUPS; ++workgroup_idx) {   // in parallel over cores
        char shared_mem[REQUESTED_SHARED_MEM];  // private for each workgroup
        for (size_t local_idx = 0; local_idx < WORKGROUP_SIZE; ++local_idx) {  // in parallel on each core
                main(workgroup_idx, local_idx, shared_mem);
        }
}

except in reality, the indices will be split in x/y/z for your convenience (you control all six dimensions, of course), and if you haven't asked for too much shared memory, the driver can silently make larger workgroups if it helps increase parallelism (this is totally transparent to you). main() doesn't return anything, but you can do reads and writes as you wish; GPUs have large amounts of memory these days, and staggering amounts of memory bandwidth.

Now for the bad part: Generally, you will have no debuggers, no way of logging, and no real profilers (if you're lucky, you can get to know how long each compute shader invocation takes, but not what takes time within the shader itself). Especially the latter is maddening; the only real recourse you have is some timers, and then placing timer probes or commenting out sections of your code to see if something goes faster. If you don't get the answers you're looking for, forget printf; you need to set up a separate buffer, write some numbers into it and pull that buffer back to the CPU. Profilers are an essential part of optimization, and I had really hoped the world would be more mature here by now. Even CUDA doesn't give you all that much insight; sometimes I wonder if all of this is because GPU drivers and architectures are meant to be shrouded in mystery for competitiveness reasons, but I'm honestly not sure.

So that's it for a crash course in GPU architecture. Next time, we'll start looking at the Narabu codec itself.

19 Oct 2017 5:45pm GMT

Planet Grep

Xavier Mertens: Hack.lu 2017 Wrap-Up Day 3

Hack.lu is already over and I'm currently waiting for my connecting flight in Munich, so this is the perfect opportunity to write my wrap-up. This one is shorter because I had to leave early to catch my flight to Hacktivity and missed some talks scheduled in the afternoon. Thanks to Lufthansa for rebooking my flight so early in the afternoon… Anyway, the day started early again (8AM) and John Bambenek opened it with a talk called "How I've Broken Every Threat Intel Platform I've Ever Had (And Settled on MISP)". The title was well chosen because John is a big fan of OSINT. He collects a lot of data and provides it for free via feeds (available here). He started extracting useful information from malware samples because the main problem today is the flood of samples that are constantly discovered. But how to find the relevant information? He explained some of the datasets he is generating. The first one is DGA, or "Domain Generation Algorithm", data. DNS is a key indicator and is used everywhere. Checking a domain name may also reveal interesting information via the Whois databases. Even if the data are fake, they can help link different campaigns or malware families together and provide more intelligence about the attacker. If you can reverse the algorithm, you can predict the upcoming domains, prepare yourself better and also start takedown operations. The second dataset was malware configurations. Yes, malware is configurable (examples: kill-switch domains, Bitcoin wallets, C2 servers, campaign IDs, etc). Like DGAs, mutexes can be useful to correlate malware from different campaigns. John is also working on a new dataset based on the tool Yalda. In the second part of his presentation, he explained why most solutions he tested to handle this amount of data failed (CIF, CRITS, ThreatConnect, STIX, TAXII). The problem with XML (and an advantage at the same time): XML can be very verbose when describing events. Finally, he explained how he's now using MISP.
If you're interested in OSINT, John is definitely a key person to follow, and he is also a SANS ISC handler.
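To illustrate why reversing a DGA pays off, here is a toy date-seeded generator in Python (completely made up for this post, not modeled on any real malware family): anyone who knows the algorithm can compute tomorrow's domains today and sinkhole or pre-register them.

```python
# Toy DGA: a date-derived seed drives a simple LCG that spells out domains.
# Purely illustrative; real families use different (often nastier) schemes.
import datetime

def toy_dga(date, count=5, tld=".com"):
    domains = []
    seed = date.year * 10000 + date.month * 100 + date.day
    for i in range(count):
        s = seed + i
        name = ""
        for _ in range(12):
            s = (s * 1103515245 + 12345) % (2 ** 31)  # LCG step
            name += chr(ord("a") + s % 26)
        domains.append(name + tld)
    return domains

# The malware and the defender compute the exact same list:
today = toy_dga(datetime.date(2017, 10, 19))
tomorrow = toy_dga(datetime.date(2017, 10, 20))  # predictable in advance
print(today[0], tomorrow[0])
```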

The next talk was "Automation Attacks at Scale" by Will Glazier & Mayank Dhiman. Databases of stolen credentials are a goldmine for bad guys, and they are available everywhere on the Internet. For example, just by crawling Pastebin, it is possible to collect ~20K passwords per day (note: most of them are duplicates). It is tempting to test them, but that requires a lot of resources. A valid password has a value on the black market, but to test credentials, attackers must spend some bucks to buy resources (when those aren't available for free or can't be abused). Will and Mayank explained how attackers work to make a profit. They need tools to test credentials and collect information (ex: Sentry MBA, Hydra, PhantomJS, Curl, Wget, …), fresh meat (credentials), IP addresses (to rotate and avoid blacklists) and of course CPU resources. For IP rotation, they often use big cloud service providers (Amazon, Azure) because those big players on the Internet will almost never be blacklisted. They can also use compromised servers or IoT botnets. In the second part of the talk, some pieces of advice were given to help detect them (ex: most of these tools can be fingerprinted just via the User-Agent they use). It is also good advice to keep an eye on your API logs to spot ongoing malicious activity (brute-force attacks).
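A minimal sketch of that detection advice (the thresholds, marker strings and log format are my own assumptions, not the speakers' tooling): flag clients whose User-Agent matches common automation tools, and keep per-IP counters on the login endpoint.

```python
# Naive credential-stuffing detector: fingerprint obvious automation
# User-Agents and flag IPs hammering the login API.
from collections import Counter

AUTOMATION_UA_MARKERS = ("curl", "wget", "python-requests", "phantomjs")
MAX_ATTEMPTS_PER_IP = 20  # illustrative threshold

def suspicious_ips(log):
    """log: iterable of (ip, user_agent) tuples for login attempts."""
    flagged = set()
    attempts = Counter()
    for ip, ua in log:
        attempts[ip] += 1
        if any(marker in ua.lower() for marker in AUTOMATION_UA_MARKERS):
            flagged.add(ip)          # tool fingerprint
        if attempts[ip] > MAX_ATTEMPTS_PER_IP:
            flagged.add(ip)          # brute-force volume
    return flagged

log = [("10.0.0.1", "curl/7.52.1"), ("10.0.0.2", "Mozilla/5.0")]
print(suspicious_ips(log))  # {'10.0.0.1'}
```

Of course, serious attackers spoof browser User-Agents, which is exactly why the talk also stressed IP rotation; per-IP volume then has to be replaced by smarter signals.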

Then we switched to a pure hardware session with Obiwan666, who presented "Front door Nightmares. When smart is not secure". The research started from a broken lock he found. The talk did not cover "connected" locks that can be managed with a smartphone, but real security locks found in many enterprises and restricted environments. Such locks are useful because key management is easier: no need to replace the lock if a key is lost, as the access rights just have to be adapted on the lock. It is also possible to play with time constraints. They offer multiple ways to interact with the user: something you have (an RFID token), something you are (biometrics) or something you know (a PIN code). Obiwan666 explained in detail how such locks are built and, thanks to his job and background in electronics, he has access to plenty of nice devices to analyze the target. He showed X-ray pictures of the lock (an X-ray scanner isn't very common!). Then he explained different attack scenarios. The first one was trivial: sometimes, the lock is mounted the wrong way around and the inner part faces outside ("in the wild"). The second attack was a signal replay: locks use a serial protocol that can be sniffed and replayed (no protection). I liked the "brain implant" attack: you just buy a new lock (same model), program it to grant your access and replace the electronic part of the victim's lock with yours… Of course, traditional lock-picking can be tested. Also, a thermal camera can reveal the PIN code if the lock has a pinpad. I know some organizations which could be very interested in testing their locks against all these attacks! 🙂

After an expected coffee break, another awesome piece of research was presented by Aaron Kaplan and Éireann Leverett: "What is the max Reflected Distributed Denial of Service (rDDoS) potential of IPv4?". DDoS attacks based on UDP amplification are not new but remain quite effective. The four protocols in scope of the research were DNS, NTP, SSDP and SNMP. But in theory, what could be the effect of a massive DDoS over the whole IPv4 network? They started the talk with one simple number:

108.49Tb/s

The idea was to scan the Internet for vulnerable services and to classify them. Based on the location of each server, they were able to estimate the bandwidth available (e.g. per country) and to calculate the total amount of bandwidth that could be wasted by a massive attack. They showed nice statistics and findings. One of them was a relation between bandwidth increases and the risk of affecting other people on the Internet.
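The arithmetic behind such an estimate is simple enough to sketch. The amplification factors below are the commonly cited US-CERT figures for these four protocols; the reflector counts and per-reflector query bandwidth are invented placeholders, not the numbers from the talk.

```python
# Back-of-the-envelope reflected-DDoS estimate: traffic out = traffic in
# times the protocol's amplification factor, summed over all reflectors.
AMPLIFICATION = {"DNS": 28.7, "NTP": 556.9, "SSDP": 30.8, "SNMP": 6.3}

def reflected_gbps(reflectors, query_gbps_per_reflector=0.001):
    """Total reflected bandwidth if each reflector is fed 1 Mb/s of
    spoofed queries (an assumption, tune to taste)."""
    return sum(count * query_gbps_per_reflector * AMPLIFICATION[proto]
               for proto, count in reflectors.items())

# Hypothetical reflector population:
population = {"DNS": 100_000, "NTP": 10_000, "SSDP": 50_000, "SNMP": 20_000}
print(round(reflected_gbps(population)), "Gb/s reflected")  # 10105 Gb/s reflected
```

Even with these modest made-up numbers you land in the ~10 Tb/s range, which makes the talk's 108.49 Tb/s upper bound for all of IPv4 feel plausible.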

Then, the first half-day ended with the third keynote, presented by Vladimir Kropotov and Fyodor Yarochkin: "Information Flows and Leaks in Social Media". Social media are used everywhere today… for good or bad. They demonstrated how social networks can react in case of a major event in the world (nothing related to computers). Some examples:

They mainly focused on Twitter. They have tools to analyze the traffic, the relations between people and the usage of specific hashtags. At the beginning of the keynote, many slides had content in Russian, not easy to read, but the second part was interesting, with the tracking of bots and how to detect them.

After the lunch break, there was again a lightning talk session, then Eleanor Saitta came to present "On Strategy"; I did not follow it. The last talk I attended was a cool one: "Digital Vengeance: Exploiting Notorious C&C Toolkits" by Waylon Grange. The idea of the research was to try to compromise the attackers by applying the principles of offensive security. Big disclaimer here: hacking back is often illegal and does not provide any gain, only risks of liability, reputation damage… Waylon focused on RATs ("Remote Access Tools") like Poison Ivy, Dark Comet or Xtreme RAT. Some of them already have known vulnerabilities. He demonstrated his findings and how he was able to compromise the computers of remote attackers. But what to do once you are "in"? Search for interesting IP addresses (via netstat), browse the filesystem, install persistence, a keylogger, steal credentials, pivot, etc.

Sorry for the last presentation, which I was unable to follow and report on here: I had to leave for Hacktivity in Budapest. I'll also miss the first edition of BSidesLuxembourg; any volunteer to write a wrap-up for me? So, to recap this edition of Hack.lu:

You can still expect more wrap-ups tomorrow but for another conference!

[The post Hack.lu 2017 Wrap-Up Day 3 has been first published on /dev/random]

19 Oct 2017 5:00pm GMT

18 Oct 2017

Planet Grep

Xavier Mertens: Hack.lu 2017 Wrap-Up Day 2

As said yesterday, the second day started very (too?) early… The winner of the first slot was Aaron Zauner, who talked about pseudo-random number generators. The complete title of the talk was "Because 'Use urandom' isn't everything: a deep dive into CSPRNGs in Operating Systems & Programming Languages". He started with an overview of random number generators and why we need them. They are used almost everywhere, even in the Bash shell, where you can use ${RANDOM}. A CSPRNG is a cryptographically secure pseudo-random number generator. It is implemented at the operating system level, via /dev/urandom on Linux or RtlGenRandom() on Windows, but also in programming languages. And sometimes with security issues, like CVE-2017-11671 (GCC generates incorrect code for RDRAND/RDSEED). The /dev/random & /dev/urandom devices use really old code (from the mid-90s)! According to Aaron, it was pure luck that no major incident arose in the last years. And today? Aaron explained what changed with kernel 4.2. Then he switched to the different languages and how they implement random number generators. He covered Ruby, Node.js and Erlang. None of them implemented proper random number generators initially, but he also explained what changed to improve this. I was a little bit afraid of a crypto talk at 8AM, but it was nice and easy to understand.
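The talk covered Ruby, Node.js and Erlang; the same point in Python (my illustration, not from the talk) is the split between the `random` module, a Mersenne Twister that is fine for simulations but predictable, and the `secrets` module, which draws from the OS CSPRNG (/dev/urandom or its platform equivalent):

```python
# Non-cryptographic vs cryptographic randomness in Python.
import random
import secrets

dice_roll = random.randint(1, 6)       # Mersenne Twister: simulations only,
                                       # never keys, tokens or passwords
session_token = secrets.token_hex(16)  # OS CSPRNG: safe for secrets

print(dice_roll, len(session_token))
```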

The next talk was "Keynterceptor: Press any key to continue" by Niels van Dijkhuizen. Attacks via HID USB devices are not new. Niels reviewed a timeline of the well-known attacks, from the KeyGhost USB logger in 2005 until the BashBunny in 2017. The main problems with those attacks: they need an unlocked computer, some social engineering skills and, most of the time, an Internet connection. There are products to protect against these attacks; basically, they act as a USB firewall: USBProxy, USBGuest, GoodDog, DuckHunt, etc. Those products are Windows tools; for Linux, have a look at grsecurity. Then Niels explained his own solution, which gets rid of all the previous constraints: his implant sits inline between the keyboard and the host. It must also have notions of real-time. The rogue device clones itself as a classic HID device ("HP Elite USB Keyboard") and also adds random delays to fake a real human typing on a keyboard, which allows it to bypass the DuckHunt tool. Niels gave a demonstration of his tool. It comes with another device called the "Companion", which has a 3G/4G module and connects to the Keynterceptor via a 433MHz link. A nice demo was shown, and his devices were available during the coffee break. This is a very nice tool for red teams…
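The random-delay trick is easy to model (a toy sketch with a made-up detector and thresholds, not DuckHunt's actual logic): a detector that flags machine-speed typing never sees the jittered implant.

```python
# Naive injected-keystroke detector vs. an implant adding human-like jitter.
import random

def looks_injected(intervals_ms, min_human_ms=30):
    """Flag a keystroke stream if any inter-key gap is implausibly short."""
    return any(t < min_human_ms for t in intervals_ms)

badusb_gaps = [1.0] * 20                                     # classic BadUSB: ~1 ms apart
implant_gaps = [random.uniform(80, 250) for _ in range(20)]  # jittered typing

print(looks_injected(badusb_gaps), looks_injected(implant_gaps))  # True False
```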

Then, Clement Rouault and Thomas Imbert presented a view into ALPC-RPC. The idea of the talk: how to abuse the UAC feature in Microsoft Windows. They were curious about this feature. How to trigger the UAC manually? Via RPC! A very nice tool to investigate RPC interfaces is RpcView. Then they switched to ALPC: what it is and how it works. It is a client/server solution: clients connect to a port and exchange messages that have two parts, the PORT_MESSAGE header and ALPC_MESSAGE_ATTRIBUTES. They explained in detail how they reverse-engineered the messages and, of course, they discovered vulnerabilities. They were able to build a full RPC client in Python and, with the help of fuzzing techniques, they found bugs: NULL dereferences, out-of-bounds accesses, logic bugs, etc. Based on their research, one CVE was created: CVE-2017-11783.

After the coffee break, a very special talk was scheduled: "The untold stories of Hackers in Detention". Two hackers came on stage to tell how they were arrested and put in jail. It was a very interesting talk: they explained their personal stories, how they were arrested and how it all happened (interviews, etc.). They also gave some advice: how to behave in jail, what to do and not to do, the administrative tasks, etc. This was not recorded and, out of respect for them, no further details will be provided.

The second keynote was given by Ange Albertini: "Infosec and failure". Ange's presentations are always a good surprise; you never know how he will design his slides. As he said, his talk is not about "funny" failures. Infosec is typically about winning. The keynote was a series of facts showing that we usually fail to provide good infosec services and advice, also in the way we communicate with other people. Ange likes retro-gaming and made several comparisons between the gaming and infosec industries. According to him, we should have some "retropwning" events to play with and learn from old exploits. An infosec crash is coming, like the video game industry crash of 1983, and a new cycle will follow. It was a great keynote with plenty of real facts that we should take care of! Learn, improve, share, don't be shy, be proactive.

After the lunch, I skipped the second session of lightning talks and got back for "Sigma - Generic Signatures for Log Events" by Thomas Patzke. Let's talk about logs… When the talk started, my first feeling was "What? Another talk about logs?" but, in fact, it was interesting. The idea behind Sigma is that everybody is looking for a nice way to detect threats, but all solutions have different features and syntaxes. Some examples of threats are:

The problem we are facing: there is a lack of a standardised format. Here comes Sigma. The goal of this tool is to write use cases in YAML files that contain all the details needed to detect a security issue. Thomas gave some examples, like detecting Mimikatz or webshells.

Sigma comes with a generator tool that can produce queries for multiple tools: Splunk, Elasticsearch or Logpoint. This is more complex than expected because field names differ across products, log formats are inconsistent, etc. In my opinion, Sigma could be useful for writing use cases in total independence of any SIEM solution. It is still an ongoing project and, good news, recent versions of MISP can integrate Sigma: a field has been added, and a tool exists to generate Sigma rules from MISP data.
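The core idea is small enough to sketch (a rough approximation in Python: the rule below is a hand-written dict rather than real Sigma YAML, and the two backends are invented for illustration): one vendor-neutral detection description, rendered into each tool's query language by a backend.

```python
# One neutral rule description, two tool-specific renderings.
rule = {
    "title": "Mimikatz execution",
    "detection": {"Image": "mimikatz.exe", "EventID": 1},
}

def to_splunk(rule):
    """Render the detection as a Splunk-style search fragment."""
    return " ".join(f'{field}="{value}"'
                    for field, value in rule["detection"].items())

def to_elasticsearch(rule):
    """Render the detection as an Elasticsearch bool query."""
    must = [{"match": {field: value}}
            for field, value in rule["detection"].items()]
    return {"query": {"bool": {"must": must}}}

print(to_splunk(rule))  # Image="mimikatz.exe" EventID="1"
```

The real project's converters also have to translate field names per product, which, as Thomas noted, is where much of the complexity lives.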

The next talk was "SMT Solvers in IT Security - deobfuscating binary code with logic" by Thaís Moreira Hamasaki. She started with an introduction to CLP, or "Constraint Logic Programming", and its applications in infosec, such as malware de-obfuscation. Thaís explained how to perform malware analysis using CLP. I did not follow much more of this talk, which was too theoretical for me.

Then, we came back to more practical stuff with Omar Eissa, who presented "Network Automation is not your Safe Haven: Protocol Analysis and Vulnerabilities of Autonomic Network". Omar works for ERNW, and they always provide good content. This time they tested the protocol used by Cisco to provision new routers. The goal is to make a router ready for use in a few minutes without any configuration: the Cisco Autonomic Network. It's a proprietary protocol developed by Cisco. Omar explained how this protocol works and then how to abuse it. They found several vulnerabilities.

The talk had many demos that demonstrated the vulnerabilities above. A very nice talk.

The next speaker was Frank Denis, who presented "API design for cryptography". The idea of the talk started with a simple Google query: "How to encrypt stuff in C". Frank found plenty of awful replies, with many examples that you should never use. Crypto is hard to design but also hard to use: developers don't read the documentation, they just expect some pieces of code that they can copy/paste. He reviewed several issues in current crypto libraries, then presented libhydrogen, a library he developed to solve the issues introduced by the other libraries. You can find the source code here.

Then, Okhin came on stage to give an overview of encryption vs. the law in France. The title of his talk was "WTFRance". He explained the status of French law against encryption and crypto tools. Basically, many politicians would like to get rid of encryption to better fight crime. It was interesting to learn that France leads the fight against crypto and then pushes its ideas at the EU level. Note that he also mentioned several politicians by name who are "against" encryption.

The next talk was my favourite of this second day: "In Soviet Russia, Vulnerability Finds You", presented by Inbar Raz. Inbar is a regular speaker at hack.lu and always delivers entertaining presentations! This time he came with several examples of interesting things he found "by mistake": indeed, sometimes you find interesting stuff by accident. Inbar gave several examples, like an issue on a taxi reservation website, the security of an airport in Poland, or fighting against bots via the Tinder application. For each example, a disclosure status was given. It's sad to see that some of them remained unresolved for a while! An excellent talk, I liked it!

The last slot was assigned to Jelena Milosevic. Jelena is a nurse, but she also has a passion for infosec. Through her job, she learns interesting stuff about healthcare environments. Her talk was a compilation of mistakes, facts and advice for hospitals and health-related services. We all know that those environments are usually very badly protected; it was, once again, proven by Jelena.

The day ended with the social event and the classic PowerPoint karaoke. Tomorrow it will start again at 8AM with a very interesting talk…

[The post Hack.lu 2017 Wrap-Up Day 2 has been first published on /dev/random]

18 Oct 2017 11:17pm GMT

Philip Van Hoof: Zucht ..

Four more political posts before I write anything technical. Pff. What a whiner I am!

I promise that I will soon write a "How it's made" about an AbstractFutureCommand. That is a Command in MVVM where a QFuture is returned by an executeAsync() method.

First, though, I'm still busy with my client and her developers pushing a few things through, so that the future of CNC control software will be really good.

We are working towards one day introducing at EMO a pile of software that is awesome.

18 Oct 2017 9:32pm GMT

08 Nov 2011

fosdem - Google Blog Search

papupapu39 (papupapu39)'s status on Tuesday, 08-Nov-11 00:28 ...

papupapu39 · http://identi.ca/url/56409795 #fosdem #freeknowledge #usamabinladen · about a day ago from web.

08 Nov 2011 12:28am GMT

05 Nov 2011

fosdem - Google Blog Search

Write and Submit your first Linux kernel Patch | HowLinux.Tk ...

FOSDEM (Free and Open Source Development European Meeting) is a European event centered around Free and Open Source software development. It is aimed at developers and all interested in the Free and Open Source news in the world. ...

05 Nov 2011 1:19am GMT

03 Nov 2011

fosdem - Google Blog Search

Silicon Valley Linux Users Group – Kernel Walkthrough | Digital Tux

FOSDEM (Free and Open Source Development European Meeting) is a European event centered around Free and Open Source software development. It is aimed at developers and all interested in the Free and Open Source news in the ...

03 Nov 2011 3:45pm GMT

26 Jul 2008

FOSDEM - Free and Open Source Software Developers' European Meeting

Update your RSS link

If you see this message in your RSS reader, please correct your RSS link to the following URL: http://fosdem.org/rss.xml.

26 Jul 2008 5:55am GMT

25 Jul 2008

FOSDEM - Free and Open Source Software Developers' European Meeting

Archive of FOSDEM 2008

These pages have been archived.
For information about the latest FOSDEM edition please check this url: http://fosdem.org

25 Jul 2008 4:43pm GMT

09 Mar 2008

FOSDEM - Free and Open Source Software Developers' European Meeting

Slides and videos online

Two weeks after FOSDEM and we are proud to publish most of the slides and videos from this year's edition.

All of the material from the Lightning Talks has been put online. We are still missing some slides and videos from the Main Tracks but we are working hard on getting those completed too.

We would like to thank our mirrors: HEAnet (IE) and Unixheads (US) for hosting our videos, and NamurLUG for quick recording and encoding.

The videos from the Janson room were live-streamed during the event and are also online on the Linux Magazin site.

We are having some synchronisation issues with Belnet (BE) at the moment. We're working to sort these out.

09 Mar 2008 3:12pm GMT