25 Jun 2017

Planet Debian

Andreas Bombe: PDP-8/e Replicated — Introduction

I am creating a replica of the DEC PDP-8/e architecture in an FPGA from schematics of the original hardware. So how did I end up with a project like this?

The story begins with me wanting to have a computer with one of those front panels that have many, many lights where you can really see, in real time, what the computer is doing while it is executing code. Not because I am nostalgic for a prior experience with any of those - I was born a bit too late for that and my first computer as a kid was a Commodore 64.

Now, the front panel era ended around 40 years ago with the advent of microprocessors, and computers of that age and older that are complete and working are hard to find and not cheap. And even if you find one, there's the issue of weight, size (complete systems with peripherals fill at least a rack) and power consumption. So what to do - build myself a small one with modern technology, of course.

While there are many computer architectures of that era to choose from, the various PDP machines by DEC are significant and well known (and documented) due to their large numbers. The most important are probably the 12 bit PDP-8, the 16 bit PDP-11 and the 36 bit PDP-10. While the PDP-11 is enticing because of the possibility to run UNIX, I wanted to start with something simpler, so I chose the PDP-8.

My implementation on display next to a real PDP-8/e at VCFe 18.0

The Original

DEC started the PDP-8 line of computers (Programmed Data Processors) in 1965, designed as low cost machines. It is a quite minimalist 12 bit architecture based on the earlier PDP-5, and by minimalist I mean seriously minimal. If you are familiar with early 8 bit microprocessors like the 6502 or 8080, you will find them luxuriously equipped in comparison.

The PDP-8 base architecture has a program counter (PC) and an accumulator (AC)1. That's it. There are no pointer or index registers2. There is no stack. It has addition and AND instructions, but subtraction and OR operations have to be manually coded. The optional Extended Arithmetic Element adds the MQ register, but that's really it for visible registers. The Wikipedia page on the PDP-8 has a good detailed description.
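
To make that concrete, here is a minimal Python sketch (my illustration of the usual PDP-8 programming conventions, not anything from the original schematics) of how subtraction and OR are synthesized from instructions the machine does have: TAD (two's complement add), AND, CMA (complement accumulator) and IAC (increment accumulator).

```python
MASK = 0o7777  # a PDP-8 word is 12 bits

def tad(ac, mem):   # TAD: two's complement add memory to AC
    return (ac + mem) & MASK

def cma(ac):        # CMA: complement the accumulator
    return ac ^ MASK

def iac(ac):        # IAC: increment the accumulator
    return (ac + 1) & MASK

def subtract(a, b):
    # A - B: negate B (CMA then IAC), then TAD A into it
    return tad(iac(cma(b)), a)

def inclusive_or(a, b):
    # A OR B via De Morgan: NOT(NOT A AND NOT B). On real hardware,
    # ~B would first be deposited in memory so AND can reference it.
    return cma(cma(a) & cma(b))

assert subtract(5, 3) == 2
assert inclusive_or(0o7000, 0o0007) == 0o7007
```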

Regarding technology, the PDP-8 series has been in production long enough to get the whole range of implementations from discrete transistor logic to microprocessors. The 8/e which I target was right in the middle, implemented in TTL logic where each IC contains multiple logic elements. This allowed the CPU itself (including timing generator) to fit on three large circuit boards plugged into a backplane. Complete systems would have at least another board for the front panel and multiple boards for the core memory, then additional boards for whatever options and peripherals were desired.

Design Choices and Comparisons

I'm not the only one who has had the idea to build something like this, of course. Among the other modern PDP-8 implementations with a front panel, probably the most prominent project is the Spare Time Gizmos SBC6120, a PDP-8 single board computer built around the Harris/Intersil HD-6120 microprocessor (which implements the PDP-8 architecture), combined with a nice front panel. Another is the PiDP-8/I, a nice front panel (modeled after the 8/i, which has even more lights) driven by the simh simulator running under Linux on a Raspberry Pi.

My goal is to get front panel lights that appear exactly like the real ones in operation. This necessitates driving the lights at full speed, as they change with every instruction, or even within instructions for some display selections. For example, if you run a tight loop that does nothing but increment AC while displaying that register, all lights will appear to be lit at equal but less than full brightness. The reason is that the loop runs at such a high speed that even the most significant bit, which blinks the slowest, is too fast to see flickering. Hence the bits are all effectively on 50% of the time, just at different frequencies, and appear to be constantly lit at the same brightness.
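
A quick simulation makes this effect obvious (a back-of-the-envelope check of my own, not part of the project):

```python
# Increment a 12-bit accumulator in a tight loop and measure how often
# each bit is lit, i.e. the duty cycle a front panel lamp would show.
N = 4096 * 16                 # an exact multiple of full 12-bit wraps
on_time = [0] * 12
ac = 0
for _ in range(N):
    for bit in range(12):
        if ac & (1 << bit):
            on_time[bit] += 1
    ac = (ac + 1) & 0o7777

print([t / N for t in on_time])  # 0.5 for every bit, fast or slow alike
```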

This is where the other projects lack what I am looking for. The PiDP-8/I is a multiplexed display which updates at something like 30 Hz or 60 Hz, taking whatever value is current in the simulation software at the time. All the states the lights took in between are lost, and consequently there is flickering where there shouldn't be. On the SBC6120, at least the address lines appear to update at full speed, as these are the actual RAM address lines. However, the 6120 microprocessor used does not make the data required for the indicator display available externally. Instead, the SBC6120 runs an interrupt at 30 Hz to trap into its firmware/monitor program, which then reads the current state and writes it to the front panel display, which is essentially just another peripheral. Another considerable problem with the SBC6120 is its use of the 6100 microprocessor family ICs, which are themselves long out of production and not trivial (or cheap) to come by.

Given that the way to go is to drive all lights in step with every cycle3, this can be done either by software running on a dedicated microcontroller - which is how I started - or by implementing a real CPU with all the needed outputs in an FPGA - which is the project I am writing about.

In the next post I will give an overview of the hardware I have built so far and some of the features that are yet to be implemented.


  1. With an associated link bit which is a little different from a carry bit in that it is treated as a thirteenth bit, i.e. it will be flipped rather than set when a carry occurs. [return]
  2. Although there are 8 specially treated memory addresses that will pre-increment when used in indirect addressing. [return]
  3. Basic cycles on the PDP-8/e are 1.4 µs for memory modifying cycles and fast cycles of 1.2 µs for everything else. Instructions can be one to three cycles long. [return]

25 Jun 2017 6:58pm GMT

Steinar H. Gunderson: Frame queue management in Nageru 1.6.1

Nageru 1.6.1 is on its way, and what was intended to be only a release centered around monitoring improvements (more specifically, a full set of native Prometheus metrics) actually ended up including a fairly substantial change to how Nageru manages its frame queues. To understand what's changing and why, it's useful to first understand the history of Nageru's queue management. Nageru 1.0.0 started out with a fairly simple scheme, but with some basics that are still relevant today: One of the input cards was deemed the master card, and whenever it delivers a frame, the master clock ticks and an output frame is produced. (There are some subtleties about dropped frames and/or the master card changing frame rates, but I'm going to ignore them, since they're not important to the discussion.)

To this end, every card keeps a preallocated frame queue; when a card delivers a frame, it's put into the queue, and when the master clock ticks, Nageru tries picking out one frame from each of the other cards' queues to mix together. Note that "mix" here could be as simple as picking one input and throwing all the other ones away; the queueing algorithm doesn't care, it just feeds all of them to the theme and lets that run whatever GPU code it needs to match the user's preferences.
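
In rough pseudocode terms, the scheme looks something like this (a toy sketch of my own, not actual Nageru code):

```python
from collections import deque

queues = {"master": deque(), "cam1": deque(), "cam2": deque()}

def card_delivered(card, frame):
    # every card appends to its preallocated queue...
    queues[card].append(frame)
    if card == "master":
        master_tick()  # ...but only the master card drives the clock

def master_tick():
    inputs = {"master": queues["master"].popleft()}
    for card, q in queues.items():
        if card != "master":
            # oldest queued frame from each other card; None marks an underrun
            inputs[card] = q.popleft() if q else None
    mix(inputs)  # hand everything to the theme for GPU mixing

def mix(inputs):
    pass  # stand-in for the actual mixing
```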

The only thing that really keeps the queues bounded is that the frames in them are preallocated (in GPU memory), so if one queue gets longer than 16 frames, Nageru starts dropping frames from it. But is 16 the right number? There are two conflicting demands here, ignoring memory usage: the queue needs to be long enough to absorb delivery jitter without underruns, but every frame sitting in it adds latency, so it should also be as short as possible.

The 1.0.0 scheme does about as well as one could possibly hope in never dropping frames, but unfortunately, it can be pretty poor at latency. For instance, if your master card runs at 50 Hz and you have a 60 Hz card, the latter will eventually build up a delay of 16 * 16.7 ms = 266.7 ms, clearly unacceptable, and rather unneeded.

You could ask the user to specify a queue length, but the user probably doesn't know and also shouldn't really have to care; more knobs to twiddle are a bad thing, and even more so knobs the user is expected to twiddle. Thus, Nageru 1.2.0 introduced queue autotuning: it keeps a running estimate of how big the queue needs to be to avoid underruns, simply based on experience. If we've been dropping frames on a queue and then there's an underrun, the "safe queue length" is increased by one, and if the queue has been having excess frames for more than a thousand successive master clock ticks, we reduce it by one again. Whenever the queue has more than this "safe" number, we drop frames.
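
As I read that description, the autotuner boils down to something like the following sketch (my simplification; the real code certainly has more subtleties):

```python
safe_len = 1       # current estimate of the needed queue length
excess_ticks = 0   # consecutive ticks where the queue had frames to spare

def on_master_tick(queue):
    """Called once per master clock tick; returns a frame to mix, or None."""
    global safe_len, excess_ticks
    if not queue:                       # underrun: the estimate was too low
        safe_len += 1
        excess_ticks = 0
        return None
    if len(queue) > safe_len:
        excess_ticks += 1
        if excess_ticks >= 1000:        # persistently long: shrink again
            safe_len = max(1, safe_len - 1)
            excess_ticks = 0
        del queue[:len(queue) - safe_len]  # drop frames beyond the safe length
    else:
        excess_ticks = 0
    return queue.pop(0)
```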

This was simple, effective and largely fixed the problem. However, when adding metrics, I noticed a peculiar effect: Not all of my devices have equally good clocks. In particular, when setting up for 1080p50, my output card's internal clock (which assumes the role of the master clock when using HDMI/SDI output) seems to tick at about 49.9998 Hz, and my simple home camcorder delivers frames at about 49.9995 Hz. That difference of 0.0003 Hz means the two clocks drift apart by roughly one frame per hour (1/0.0003 ≈ 3300 seconds), which of course eventually has to be reconciled by dropping a frame. Having an SDI setup with synchronized clocks (blackburst/tri-level) would of course fix this problem, but most people are not so lucky with their cameras, not to mention the price of PC graphics cards with SDI outputs!

However, this drift happens very slowly, which means that for a significant amount of time, the two clocks will be very nearly in sync, and thus racing. Which one ticks first is determined largely by luck in the jitter (normally maybe 1 ms, but occasionally you'll see delayed delivery of as much as 10 ms), and this means that the "1000 ticks" estimate is likely to be thrown off, and the result is hundreds of dropped frames and underruns in that period. Once the clocks have diverged enough again, you're off the hook, but again, this isn't a good place to be.

Thus, Nageru 1.6.1 changes the algorithm around yet again, by incorporating more data to build an explicit jitter model. 1.5.0 was already timestamping each frame to be able to measure end-to-end latency precisely (now also exposed in the Prometheus metrics), but from 1.6.1, those timestamps are actually used in the queueing algorithm. I ran several eight- to twelve-hour tests, simply storing all the event arrivals to a file, and then simulated a few different algorithms (including the old one) to see how they fared on measures such as latency and number of drops/underruns.

I won't go into the full details of the new queueing algorithm (see the commit if you're interested), but the gist is: Based on the last 5000 frames, it tries to estimate the maximum possible jitter for each input (i.e., how late a frame could possibly be). Based on this, as well as the clock offsets, it determines whether it's really sure that there will be an input frame available on the next master tick even if it drops the queue, and then trims the queue to fit.
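
In code-sketch form, the idea is roughly this (my paraphrase; see the actual commit for the real algorithm):

```python
from collections import deque

class JitterHistory:
    """Track how late frames arrive relative to their expected times."""
    def __init__(self, window=5000):
        self.lateness = deque(maxlen=window)

    def observe(self, expected_ts, actual_ts):
        self.lateness.append(actual_ts - expected_ts)

    def max_jitter(self):
        # worst lateness seen in the window: how late could the next one be?
        return max(self.lateness, default=0.0)

def can_trim_queue(history, next_expected_ts, next_master_tick_ts):
    # Only drop queued frames if, even with worst-case lateness, a fresh
    # frame will still have arrived by the time the master clock ticks.
    return next_expected_ts + history.max_jitter() < next_master_tick_ts
```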

The result is pretty satisfying; here's the end-to-end latency of my camera being sent through to the SDI output:

As you can see, the latency goes up, up, up until Nageru figures it's now safe to drop a frame, and then does it in one clean drop event; no more hundreds of drops involved. There are a few very late frame arrivals in this run (two extra frame drops, to be precise), but the algorithm simply determines immediately that they are outliers, and drops them without letting them linger in the queue. (Immediate dropping is usually preferable to sticking around for a bit and then dropping later, as it means you only get one disturbance event in your stream instead of two. Of course, you can only do it if you're reasonably sure it won't lead to more underruns later.)

Nageru 1.6.1 will ship before Solskogen, as I intend to run it there :-) And there will probably be lovely premade Grafana dashboards from the Prometheus data. Although it would have been a lot nicer if Grafana were more packaging-friendly, so I could pick it up from stock Debian and run it on armhf. Hrmf. :-)

25 Jun 2017 3:45pm GMT

Lars Wirzenius: Obnam 1.22 released (backup application)

I've just released version 1.22 of Obnam, my backup application. It is the first release for this year. Packages are available on code.liw.fi/debian and in Debian unstable, and source is in git. A summary of the user-visible changes is below.

For those interested in living dangerously and accidentally on purpose deleting all their data, the link below shows the status and roadmap for FORMAT GREEN ALBATROSS: http://distix.obnam.org/obnam-dev/182bd772889544d5867e1a0ce4e76652.html

Version 1.22, released 2017-06-25

25 Jun 2017 12:41pm GMT

24 Jun 2017

Planet Grep

Xavier Mertens: BSides Athens 2017 Wrap-Up

The second edition of BSides Athens took place this Saturday. I already attended the first edition (my wrap-up is here) and I was happy to be accepted as a speaker for the second time! This edition moved to a new location, which was great: good wireless, air conditioning and food. The day was organized in three tracks: the first two for regular talks and the third one for the CTF and workshops. The "boss", Grigorios Fragkos, introduced the 2nd edition. This one gave more attention to a charity program called "The Smile of the Child", which helps Greek kids to stay in touch with new technologies. A specific project, called "ODYSSEAS", is based on a truck that travels across Greece to introduce kids to technologies like mobile phones, social networks, … BSides Athens donated to this project. A very nice initiative, presented by Stefanos Alevizos, who received a slot of a few minutes to describe the program (content in Greek only).


The keynote was assigned to Dave Lewis, who presented "The Unbearable Lightness of Failure". The main point made by Dave is that we fail but… we learn from our mistakes! In other words, "failure is an acceptable teaching tool". The keynote was built around examples like signs: we receive signs everywhere and we must understand how to interpret them. Or the famous Friedrich Nietzsche quote: "That which does not kill us makes us stronger". We are facing failures all the time. The latest good example is the WannaCry story, which should never have happened, but… you know the story! Another important message is that we don't have to be afraid to fail. We also have to share as much as possible, not only the good stories but also the bad ones. Sharing is key! Participate in blogs, social networks, podcasts. Break out of your silo! Dave is a renowned speaker and delivered a really good keynote!

Then the talks were split across the two main rooms. For the first slot, I decided to attend Thanassis Diogos's presentation about "Operation Grand Mars". In January 2017, Trustwave published an article describing this attack, and Thanassis came back on this story with more details. After a quick recap of what incident management is, he reviewed all the facts related to the operation and gave some tips to detect abnormal activities on your network. It started with an alert generated by a workstation and, three days later, the same message came from a domain controller. Definitely not good! The entry point was infected via a malicious Word document / JavaScript. Then a payload was downloaded from Google Docs, which is, for most organizations, a trustworthy service. He then explained how persistence was achieved (via autorun and scheduled tasks) as well as lateral movements, using the pass-the-hash attack. Another tip from Thanassis: if you see local admin accounts used for network logons, this is definitely suspicious! A good review of the attack with some good tips for blue teams.

My next choice was to move to the second track to follow Konstantinos Kosmidis's talk about machine learning (a hot topic in many conferences today!). I'm not a big fan of these technologies, but I was interested in the abstract. The talk was a classic one: after an introduction to machine learning (which we already use every day in technologies like Google's face recognition, self-driving cars or voice recognition), why not apply the technique to malware detection? The goal is to detect and classify but, more importantly, to improve the algorithm! After reviewing some pros & cons, Konstantinos explained the technique he used in his research to convert malware samples into images (the general idea is sketched below). But, more interestingly, he explained a technique based on steganography to attack this algorithm. The speaker was a little bit stressed, but the idea looks interesting. If you're interested, have a look at his GitHub repository.
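
For the curious, the malware-as-image idea commonly looks something like this (my sketch of the general approach; the speaker's actual pipeline may well differ):

```python
import numpy as np
from PIL import Image

def sample_to_image(path, width=256):
    """Render the raw bytes of a file as a grayscale image."""
    data = np.frombuffer(open(path, "rb").read(), dtype=np.uint8)
    rows = len(data) // width
    pixels = data[:rows * width].reshape(rows, width)  # drop the ragged tail
    return Image.fromarray(pixels, mode="L")

# sample_to_image("sample.bin").save("sample.png")
# Related samples tend to produce visually similar textures, which is
# what an image classifier is then trained to pick up on.
```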

Back to the first track to follow Professor Andrew Blyth with "The Role of Professionalism and Standards in Penetration Testing". The penetration testing landscape has changed considerably over the last years: we moved from script kiddies searching for juicy vulnerabilities to professional services. The problem is that today some pentest projects are commissioned not to detect security issues and improve, but just for… compliance requirements. You know the "checkbox" syndrome. Also, the business evolves and is requesting more assurance. The coming European GDPR regulation will increase the demand for penetration tests. But a real pentest is not a Nessus scan with a new logo, as Andrew explained! We need professionalism. In the second part of the talk, Andrew reviewed some standards that involve pentests: iCAST, CBEST, PCI, OWASP, OSSTMM.

After a nice lunch with Greek food, back to the talks with the one by Andreas Ntakas and Emmanouil Gavriil about "Detecting and Deceiving the Unknown with Illicium". They work for one of the sponsors and presented the tool developed by their company: Illicium. After the introduction, my feeling was that it's a new honeypot with extended features. There is interesting stuff, but, IMHO, it was a commercial presentation and I'd have expected a demo. Also, the tool looks nice but is dedicated to organizations that have already reached a mature security level. Indeed, before defeating the attacker, the first step is to properly implement basic controls like… patching! Something some organizations still don't do today!

The next presentation was "I Thought I Saw a |-|4><0.-" by Thomas V. Fisher. Many interesting tips were provided by Thomas, like:

The model presented by Thomas was based on 4 A's: Assess, Analyze, Articulate and Adapt! A very nice talk with plenty of tips!

The next slot was assigned to Ioannis Stais, who presented his framework called LightBulb. The idea is to build a framework to help in bypassing common WAFs (web application firewalls). Ioannis first explained how common WAFs work and why they can be bypassed. Instead of testing all possible combinations (brute force), LightBulb relies on the following process:

Note that LightBulb is also available as a BurpSuite extension! The code is available here.

Then, Anna Stylianou presented "Car hacking - a real security threat or a media hype?". The last events that I attended also had talks about cars, but they focused more on abusing the remote control to open doors. This talk focused on the ECUs ("Electronic Control Units") present in modern cars. Today a car might have >100 ECUs and >100 million lines of code, which makes for a great attack surface! There are many tools available to attack a car via its CAN bus; even the Metasploit framework can be used to pentest cars today! The talk was not dedicated to a specific attack or tool but was more a recap of the risks that car manufacturers are facing today. Indeed, threats have changed:

Some infosec gurus also predict that autonomous cars will be used as lethal weapons! As cars can be seen as computers on wheels, the potential attacks are the same: spoofing, tampering, repudiation, disclosure, DoS or privilege escalation issues.

The next slot was assigned to me. I presented "Unity Makes Strength" and explained how to improve the interconnections between our security tools/applications. The last talk was given by Theo Papadopoulos: A "Shortcut" to Red Teaming. He explained how .LNK files can be a nice way to compromise your victim's computer. I like the "love equation": Word + PowerShell = Love. Step by step, Theo explained how to build a malicious document with a link file, how to avoid mistakes and how to increase the chances of getting the victim infected. I liked the persistence method based on assigning a popular hot-key (like CTRL-V) to a shortcut on the desktop: Windows will trigger the malicious script attached to the shortcut and then… perform the expected action (in this case, paste the clipboard content). Evil!
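
To illustrate the mechanism (a benign sketch of my own using the standard Windows Script Host COM API via pywin32, not Theo's code), this is all it takes to attach a hot-key to a desktop shortcut:

```python
import os
import win32com.client  # pip install pywin32

shell = win32com.client.Dispatch("WScript.Shell")
desktop = shell.SpecialFolders("Desktop")
shortcut = shell.CreateShortCut(os.path.join(desktop, "demo.lnk"))
shortcut.TargetPath = r"C:\Windows\System32\notepad.exe"  # benign target here
shortcut.Hotkey = "Ctrl+Alt+N"  # an attacker would hijack something like CTRL+V
shortcut.Save()
# Pressing the hot-key now launches the shortcut's target. Hot-keys only
# work for shortcuts on the desktop or Start Menu, which is why the
# desktop placement matters for this persistence trick.
```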

The day ended with the announcement of the CTF winners and a lot of information about the next edition of BSides Athens. They already have plenty of ideas! It's now time for some days off across Greece with the family…

[The post BSides Athens 2017 Wrap-Up has been first published on /dev/random]

24 Jun 2017 8:10pm GMT

22 Jun 2017

Planet Grep

Xavier Mertens: [SANS ISC] Obfuscating without XOR

I published the following diary on isc.sans.org: "Obfuscating without XOR".

Malicious files are generated and spread over the wild Internet daily (read: "hourly"). The goal of the attackers is to use files that are:

That's why many obfuscation techniques exist to lure automated tools and security analysts… [Read more]
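
The diary has the actual samples; purely as an illustration of the general idea (my own toy example, not necessarily the technique from the diary), obfuscation can be as simple as byte-wise addition modulo 256 with a rolling key, which XOR-centric scanners won't undo:

```python
def obfuscate(data: bytes, key: bytes) -> bytes:
    # add (rather than XOR) a repeating key to every byte, modulo 256
    return bytes((b + key[i % len(key)]) % 256 for i, b in enumerate(data))

def deobfuscate(data: bytes, key: bytes) -> bytes:
    return bytes((b - key[i % len(key)]) % 256 for i, b in enumerate(data))

payload = b"calc.exe"
assert deobfuscate(obfuscate(payload, b"k3y"), b"k3y") == payload
```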

[The post [SANS ISC] Obfuscating without XOR has been first published on /dev/random]

22 Jun 2017 10:11am GMT

21 Jun 2017

Planet Grep

Frank Goossens: Quick heads-up: Autoptimize 2.1.2 and 2.2.1 release, includes security fix

[Updated 23/06 to reflect newer versions 2.1.2 and 2.2.1]

Heads-up: Autoptimize 2.2 has just been released with a slew of new features (see the changelog) and an important security fix. Do upgrade as soon as possible.

If you prefer not to upgrade to 2.2 (because you prefer the stability of 2.1.0), you can instead download 2.1.2, which is identical to 2.1.0 except that the security fix has been backported.

I'll follow up on the new features and on the security issue in more detail later today/tomorrow.


21 Jun 2017 7:23am GMT


05 Nov 2011

fosdem - Google Blog Search

Write and Submit your first Linux kernel Patch | HowLinux.Tk ...

FOSDEM (Free and Open Source Development European Meeting) is a European event centered around Free and Open Source software development. It is aimed at developers and all interested in the Free and Open Source news in the world. ...

05 Nov 2011 1:19am GMT

03 Nov 2011

fosdem - Google Blog Search

Silicon Valley Linux Users Group – Kernel Walkthrough | Digital Tux

FOSDEM (Free and Open Source Development European Meeting) is a European event centered around Free and Open Source software development. It is aimed at developers and all interested in the Free and Open Source news in the ...

03 Nov 2011 3:45pm GMT

26 Jul 2008

FOSDEM - Free and Open Source Software Developers' European Meeting

Update your RSS link

If you see this message in your RSS reader, please correct your RSS link to the following URL: http://fosdem.org/rss.xml.

26 Jul 2008 5:55am GMT

25 Jul 2008

FOSDEM - Free and Open Source Software Developers' European Meeting

Archive of FOSDEM 2008

These pages have been archived. For information about the latest FOSDEM edition, please check this URL: http://fosdem.org

25 Jul 2008 4:43pm GMT

09 Mar 2008

FOSDEM - Free and Open Source Software Developers' European Meeting

Slides and videos online

Two weeks after FOSDEM, we are proud to publish most of the slides and videos from this year's edition.

All of the material from the Lightning Talks has been put online. We are still missing some slides and videos from the Main Tracks, but we are working hard on getting those completed too.

We would like to thank our mirrors: HEAnet (IE) and Unixheads (US) for hosting our videos, and NamurLUG for quick recording and encoding.

The videos from the Janson room were live-streamed during the event and are also online on the Linux Magazin site.

We are having some synchronisation issues with Belnet (BE) at the moment. We're working to sort these out.

09 Mar 2008 3:12pm GMT