15 Jun 2021


Koen Vervloesem: How to build AVR code for the Digispark with PlatformIO

PlatformIO supports the Digispark USB development board, a compact board with the ATtiny85 AVR microcontroller. The canonical example code that lets the built-in LED blink looks like this:

digispark_blink_platformio/main.c (Source)

/* Atmel AVR native blink example for the Digispark
 *
 * Copyright (C) 2021 Koen Vervloesem (koen@vervloesem.eu)
 *
 * SPDX-License-Identifier: MIT
 */
#include <avr/io.h>
#include <util/delay.h>

// Digispark built-in LED
// Note: on some models the LED is connected to PB0
#define PIN_LED PB1
#define DELAY_MS 1000

int main(void) {
  // Initialize LED pin as output
  DDRB |= (1 << PIN_LED);

  while (1) {
    PORTB ^= (1 << PIN_LED);
    _delay_ms(DELAY_MS);
  }

  return 0;
}

Note

This doesn't use the Arduino framework, but directly uses the native AVR registers.

If you've bought your Digispark recently, it ships with a recent version of the Micronucleus bootloader that isn't supported by the older micronucleus command that PlatformIO uses.

If you've already upgraded the bootloader on your Digispark, you also have the newest version of the micronucleus command. So the only thing you need to do is make PlatformIO use this version when uploading your code. You can do this with the following platformio.ini:

[env:digispark-tiny]
platform = atmelavr
board = digispark-tiny
upload_protocol = custom
upload_command = micronucleus --run .pio/build/digispark-tiny/firmware.hex

The platform and board options are just the configuration options the PlatformIO documentation lists for the Digispark USB. By setting upload_protocol to custom, you can supply your own upload command; the micronucleus in that command refers to the binary you installed globally with sudo make install, which lives in /usr/local/bin/micronucleus.

After this, you can just build the code and upload it:

koan@tux:~/digispark_blink_platformio$ pio run -t upload
Processing digispark-tiny (platform: atmelavr; board: digispark-tiny)
---------------------------------------------------------------------------------------------------------
Verbose mode can be enabled via `-v, --verbose` option
CONFIGURATION: https://docs.platformio.org/page/boards/atmelavr/digispark-tiny.html
PLATFORM: Atmel AVR (3.1.0) > Digispark USB
HARDWARE: ATTINY85 16MHz, 512B RAM, 5.87KB Flash
DEBUG: Current (simavr) On-board (simavr)
PACKAGES:
 - tool-avrdude 1.60300.200527 (6.3.0)
 - toolchain-atmelavr 1.50400.190710 (5.4.0)
LDF: Library Dependency Finder -> http://bit.ly/configure-pio-ldf
LDF Modes: Finder ~ chain, Compatibility ~ soft
Found 0 compatible libraries
Scanning dependencies...
No dependencies
Building in release mode
Checking size .pio/build/digispark-tiny/firmware.elf
Advanced Memory Usage is available via "PlatformIO Home > Project Inspect"
RAM:   [          ]   0.0% (used 0 bytes from 512 bytes)
Flash: [          ]   1.4% (used 82 bytes from 6012 bytes)
Configuring upload protocol...
AVAILABLE: custom
CURRENT: upload_protocol = custom
Uploading .pio/build/digispark-tiny/firmware.hex
> Please plug in the device ...
> Device is found!
connecting: 33% complete
> Device has firmware version 2.5
> Device signature: 0x1e930b
> Available space for user applications: 6650 bytes
> Suggested sleep time between sending pages: 7ms
> Whole page count: 104  page size: 64
> Erase function sleep duration: 728ms
parsing: 50% complete
> Erasing the memory ...
erasing: 66% complete
> Starting to upload ...
writing: 83% complete
> Starting the user app ...
running: 100% complete
>> Micronucleus done. Thank you!
====================================== [SUCCESS] Took 3.99 seconds ======================================

I've created a GitHub project with this configuration, the example code, a Makefile and a GitHub Action to automatically check and build the code: koenvervloesem/digispark_blink_platformio. This can be used as a template for your own AVR code for the Digispark with PlatformIO.

15 Jun 2021 11:41am GMT

14 Jun 2021


Koen Vervloesem: How to upgrade the micronucleus bootloader on the Digispark

The Digispark USB development board has an ATtiny85 microcontroller. (source: Digistump)

Recently I have been playing with the Digispark development board with the ATtiny85 AVR microcontroller. The cool thing is that it's only 2 by 2 cm big and it plugs right into a USB port.

But when I flashed the canonical blink example to the microcontroller, I noticed that the program didn't blink continuously with the fixed interval I had set up. For the first five seconds after plugging in the Digispark, the LED blinked in a fast heartbeat pattern (which seems to be normal: this is the bootloader staying in programming mode for five seconds), then my program blinked slowly for a few seconds, then there were five seconds of the heartbeat pattern again, then slower blinks again, and so on. It seemed the microcontroller was stuck in a continuous boot loop.

Thinking about the previous time I solved a microcontroller problem by upgrading the bootloader, I decided to try the same approach here. The Digispark is sold with the Micronucleus bootloader, but the boards I bought 1 apparently had an older version. Upgrading the bootloader is easy on Ubuntu.

First build the micronucleus command-line tool:

git clone https://github.com/micronucleus/micronucleus.git
cd micronucleus/commandline
sudo apt install libusb-dev
make
sudo make install

The Micronucleus project offers firmware files to upgrade older versions, so upgrading the existing firmware was as easy as:

cd ..
micronucleus --run firmware/upgrades/upgrade-t85_default.hex

Then insert the Digispark to write the firmware.

After flashing the blink example, I didn't see the boot loop this time. And the Micronucleus developers apparently got rid of the annoying heartbeat pattern in the newer version.

1

Only after I received the 5-pack I realized that I had bought a Chinese clone. So it's possible that the original product sold by Digistump doesn't have the issue I'm describing here.

14 Jun 2021 5:08pm GMT

Sven Vermeulen: An IT services overview

My current role within the company I work for is "domain architect", part of the team of enterprise architects. The domain I am accountable for is "infrastructure", which can be seen as a very broad one. Now, I had already been maintaining an overview of our IT services before I reached that role, mainly out of a keen interest in the subject, as well as to further optimize my efficiency.

Becoming a domain architect allows me to use the insights I've gathered to try and give appropriate advice, but it also requires me to maintain a domain architecture. This overview is going to be the starting point of that, although it is not the be-all and end-all of what I would consider a domain architecture.

A single picture doesn't say it all

To start off my overview, I needed to structure the hundreds of technology services that I want to keep an eye on in a way that lets me quickly find them again, as well as present to other stakeholders what infrastructure services are about. This structure, while not perfect, currently looks like the figure below. Occasionally, I move one or more service groups left or right, but the main intention is just to have a structure available.

Overview of the IT services

Figures like these often come about in mapping exercises, or capability models. A capability model that I recently skimmed through is the IF4IT Enterprise Capability Model. I stumbled upon this model after searching for some reference architectures on approaching IT services, including a paper titled IT Services Reference Catalog by Nelson Gama, Maria do Mar Rosa, and Miguel Mira da Silva.

Capability models, or service overviews like the one I presented, do not fit each and every organization well. When comparing the view I maintain with others (or the different capability and service references out there), I notice two main distinctions: grouping, and granularity.

In the figure I maintain, the grouping is based both on the size of a group (if a group contains far too many services, I might want to see if I can split it up) and on historical and organizational choices. For instance, if the organization has a clear split between network-oriented teams and server-oriented teams, then the overview will most likely convey the same message, as we want the overview to be interpretable by many stakeholders - and those stakeholders are often aware of the organizational distinctions.

Services versus solutions

I try to keep track of the evolutions of services and solutions within this overview. Now, the definition of a "service" versus "solution" does warrant a bit more explanation, as it can have multiple meanings. I even use "service" for different purposes depending on the context.

For domain architecture, I consider an "infrastructure service" as a product that realizes or supports an IT capability. It is strongly product oriented (even when it is internally developed, or a cloud service, or an appliance) and makes a distinction between products that are often very closely related. For instance, Oracle DB is an infrastructure service, as is the Oracle Enterprise Manager. The Oracle DB is a product that realizes a "relational database" capability, whereas OEM realizes a "central infrastructure management" capability.

The reason I create distinct notes for these is that they have different life cycles, might have different responsible teams within the organization, different setups, etc. Components (parts of products), on the other hand, I generally do not consider separately, although there are definitely examples where it makes sense to consider certain components separate from the products in which they are provided.

The several hundred infrastructure services that the company has are all documented under this overview.

Alongside these infrastructure services I also maintain a solution overview. The grouping is exactly the same as for the infrastructure services, but solutions are approached more from the point of view of a full internal offering.

Within solution architectures, I tend to focus on the choices the company made and the design that follows from them. Many solutions are considered 'platforms' on top of which internal development, third-party hosting or other capabilities are provided. Within the solution, I describe how the various infrastructure services interact and work together to make the solution a reality.

A good example here is the mainframe platform. Whereas the mainframe itself is definitely an infrastructure service, how we internally organize the workload and the runtimes (such as the LPARs), how it integrates with the security services, database services, enterprise job scheduling, etc. is documented in the solution.

Not all my domain though

Not all services and solutions that I track are part of 'my' domain though. For instance, at my company, we make a distinction between the infrastructure-that-is-mostly-hosting, and infrastructure-that-is-mostly-workplace. My focus is on the 'mostly hosting' orientation, whereas I have a colleague domain architect responsible for the 'mostly workplace' side of things.

But that's just about accountability. Not knowing how the other solutions and services function, how they are set up, etc. would make my job harder, so tracking their progress and architecture is effort that pays off.

In a later post, when I have some time to spend, I'll explain what I document about services and solutions and how I do it.

14 Jun 2021 3:30pm GMT

11 Jun 2021


Frank Goossens: Autoptimize 2.9 final beta, testers wanted!

I just upped Autoptimize 2.9 beta to version 4, which is likely to be the last version before the official 2.9 release (ETA end of June / early July).

Main new features:

You can download the beta from GitHub (do disable 2.8.x before activating 2.9-beta-4) and you can log any issues/bugs over at https://github.com/futtta/autoptimize/issues

Looking forward to your feedback!


11 Jun 2021 10:14am GMT

Xavier Mertens: [SANS ISC] Keeping an Eye on Dangerous Python Modules

I published the following diary on isc.sans.edu: "Keeping an Eye on Dangerous Python Modules":

With Python getting more and more popular, especially on Microsoft Operating systems, it's common to find malicious Python scripts today. I already covered some of them in previous diaries. I like this language because it is very powerful: You can automate boring tasks in a few lines. It can be used for offensive as well as defensive purposes, and… it has a lot of 3rd party "modules" or libraries that extend its capabilities. For example, if you would like to use Python for forensics purposes, you can easily access the registry and extract data… [Read more]

The post [SANS ISC] Keeping an Eye on Dangerous Python Modules appeared first on /dev/random.

11 Jun 2021 10:07am GMT

09 Jun 2021


Sven Vermeulen: The three additional layers in the OSI model

At my workplace, I jokingly refer to three extra layers on top of the OSI network model as a way to describe the difficulties of discussions or cases. These three additional layers are the Financial Layer, the Politics Layer and the Religion Layer, and the idea is that the higher up you go, the more challenging the discussions become.

09 Jun 2021 9:10am GMT

08 Jun 2021


Paul Cobbaut: Useless Top 10

Fastly had some issues today.

Here is a top 10, by number of complaints on Downdetector, of some random internet services...

10. github 199
9. spotify 264
8. stackoverflow 351
7. twitter 368
6. discord 906
5. nytimes 2069
4. hbomax 2245
3. cnn 3432
2. twitch 5438
1. reddit 18628

Does this measure popularity? Probably not.

Does it measure anything useful? Nope.

08 Jun 2021 12:34pm GMT

06 Jun 2021


Koen Vervloesem: Change the uplink interval of a Dragino LoRaWAN sensor

I have a couple of Dragino's LoRaWAN temperature sensors in my garden. 1 The LSN50 that I bought came with a default uplink interval of 30 seconds. Because the temperature outside doesn't change that fast and I want to have a long battery life, I wanted to change the uplink interval to 10 minutes.

Dragino's firmware accepts commands as AT commands over a USB to TTL serial line (you need to physically connect to the device's UART pins for that) or as LoRaWAN downlink messages (which you can send over the air via the LoRaWAN gateway). All Dragino LoRaWAN products support a common set of commands explained in the document End Device AT Commands and Downlink Commands.

The Change Uplink Interval command starts with the command code 0x01 followed by three bytes with the interval time in seconds. So to set the uplink interval to 10 minutes, I first convert 600 seconds to its hexadecimal representation:

$ echo 'ibase=10;obase=16;600' | bc
258

So I have to send the command 0x01000258.
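As a minimal sketch (my own illustration, not something from Dragino's documentation), this is how those four payload bytes are laid out for an arbitrary interval in seconds:

/* Build the Dragino "Change Uplink Interval" downlink payload:
 * command code 0x01 followed by the interval in seconds as three
 * big-endian bytes, e.g. 600 s -> 01 00 02 58. */
#include <stdint.h>
#include <stdio.h>

int main(void) {
  uint32_t interval_s = 600; /* desired uplink interval in seconds */
  uint8_t payload[4];

  payload[0] = 0x01;                      /* command code */
  payload[1] = (interval_s >> 16) & 0xFF; /* interval, most significant byte first */
  payload[2] = (interval_s >> 8) & 0xFF;
  payload[3] = interval_s & 0xFF;

  for (int i = 0; i < 4; i++)
    printf("%02X", payload[i]);
  printf("\n"); /* prints 01000258 */

  return 0;
}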

My LoRaWAN gateway is connected to The Things Network, a global open LoRaWAN network. You can send a downlink message on the device's page in The Things Network's console (which is the easiest way), but you can also do it with MQTT (which is a bit harder). The Things Network's MQTT API documentation shows you how to do the latter. 2 The MQTT topic should be of the form <AppID>/devices/<DevID>/down and the message looks like this:

{
  "port": 1,
  "confirmed": false,
  "payload_raw": "AQACWA=="
}

Note that you should supply the payload in Base64 format. You can easily convert the hexadecimal payload to Base64 on the command line:

$ echo 01000258|xxd -r -p|base64
AQACWA==

Enter this as the value for the payload_raw field.

So then the full command to change the uplink interval of the device to 10 minutes becomes:

$ mosquitto_pub -h eu.thethings.network -p 8883 --cafile ttn-ca.pem -u AppID -P AppKey -d -t 'AppID/devices/DevID/down' -m '{"port":1,"payload_raw":"AQACWA=="}'

This downlink message will be scheduled by The Things Network's application server to be sent after the next uplink of the device. After it has been received, the device changes its uplink interval.

1

I highly recommend Dragino. Their products are open hardware and use open source firmware, and their manuals are extremely detailed, covering countless application scenarios.

2

Note that I'm still using The Things Network v2, as my The Things Indoor Gateway can't be migrated yet to The Things Stack (v3). Consult the MQTT documentation of The Things Stack if you're already using v3.

06 Jun 2021 10:41am GMT

04 Jun 2021


Xavier Mertens: [SANS ISC] Russian Dolls VBS Obfuscation

I published the following diary on isc.sans.edu: "Russian Dolls VBS Obfuscation":

We received an interesting sample from one of our readers (thanks Henry!) and we like this. If you find something interesting, we are always looking for fresh meat! Henry's sample was delivered in a password-protected ZIP archive and the file was a VBS script called "presentation_37142.vbs" (SHA256:2def8f350b1e7fc9a45669bc5f2c6e0679e901aac233eac63550268034942d9f). I uploaded a copy of the file on MalwareBazaar… [Read more]

The post [SANS ISC] Russian Dolls VBS Obfuscation appeared first on /dev/random.

04 Jun 2021 10:09am GMT

03 Jun 2021


Sven Vermeulen: Virtualization vs abstraction

When an organization has an extensively large and heterogeneous infrastructure, infrastructure architects will attempt to make it less complex and chaotic by introducing and maintaining a certain degree of standardization. While many might consider standardization to be rationalization (standardizing on a single database technology, a single vendor for hardware, etc.), rationalization is only one of the many ways in which standards can reduce such complexity.

In this post, I'd like to point out two other, very common ways to standardize the IT environment, without really considering a rationalization: abstraction and virtualization.

03 Jun 2021 8:10am GMT

01 Jun 2021


Koen Vervloesem (NL): Visualize data from your Bluetooth sensors

Bluetooth Low Energy (BLE) is a rewarding protocol to work with as a hobby programmer. The specifications are all available on the Bluetooth SIG website, and just about every development platform has one or more BLE libraries.

In an article for Computer!Totaal I explain how to read data from BLE sensors in your own Arduino code running on an ESP32 microcontroller, and how to visualize the result on the screen of an M5Stack Core.

Want to know more about Bluetooth Low Energy?

Earlier this year I wrote a background article for Tweakers, Bluetooth Low Energy voor domotica: lage drempel en breed ondersteund, in which I go deeper into how Bluetooth Low Energy works and how you can get started with it yourself.

A BLE device can communicate in two ways: by broadcasting data to everyone nearby, or by making a one-to-one connection with another device and sending data to it. In the article I work out an example program for both ways of communicating.

Broadcasting is often done by environmental sensors. The RuuviTag is such a sensor, detecting temperature, humidity, air pressure and motion. 1 The sensor broadcasts its data every second over BLE as manufacturer data. The protocol is fully documented. In the Computer!Totaal article I explain how to decode the temperature from this data and show it on the screen of the M5Stack Core.
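As a minimal sketch of that decoding step (my own illustration, not the code from the article), assuming the RuuviTag sends data format 5 (RAWv2), where byte 0 of the manufacturer data is the format identifier and bytes 1-2 hold the temperature as a signed 16-bit big-endian value in steps of 0.005 °C:

/* Decode the temperature from a RuuviTag data format 5 (RAWv2)
 * manufacturer data payload. Illustration only: assumes format 5,
 * where data[0] is the format id and data[1..2] the temperature. */
#include <stdint.h>
#include <stdio.h>

float ruuvi_temperature(const uint8_t *data, int len) {
  if (len < 3 || data[0] != 5)
    return -1000.0f; /* not a RAWv2 payload */
  int16_t raw = (int16_t)((data[1] << 8) | data[2]);
  return raw * 0.005f;
}

int main(void) {
  /* made-up example payload: format 5, temperature 24.30 °C */
  uint8_t sample[] = {0x05, 0x12, 0xFC};
  printf("%.2f C\n", ruuvi_temperature(sample, sizeof(sample)));
  return 0;
}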

Other devices use connections. The Bluetooth SIG has defined quite a number of services for connections, whose specifications you can simply consult. A popular one is the Heart Rate service. In the article I describe how to write the code to connect to a heart rate sensor using the NimBLE-Arduino library and show your heart rate in real time on the screen of the M5Stack Core. Ideal for during your workout! 2
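As a minimal sketch of the parsing step for such a heart rate notification (an illustration based on the Bluetooth SIG's Heart Rate Measurement characteristic, 0x2A37, not the NimBLE-Arduino code from the article): bit 0 of the flags byte tells whether the heart rate follows as an 8-bit or a 16-bit little-endian value.

/* Parse the heart rate (in bpm) from a Heart Rate Measurement
 * (0x2A37) characteristic value. Returns -1 on a malformed value. */
#include <stdint.h>
#include <stdio.h>

int heart_rate_bpm(const uint8_t *value, int len) {
  if (len < 2)
    return -1;
  if (value[0] & 0x01) { /* flags bit 0 set: 16-bit heart rate value */
    if (len < 3)
      return -1;
    return value[1] | (value[2] << 8);
  }
  return value[1]; /* 8-bit heart rate value */
}

int main(void) {
  uint8_t notification[] = {0x00, 72}; /* flags, 72 bpm (made-up sample) */
  printf("%d bpm\n", heart_rate_bpm(notification, sizeof(notification)));
  return 0;
}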

[Image: heart rate display on the M5Stack Core]
1

The nRF52832-based RuuviTag is open hardware and its firmware is open source too.

2

In the meantime I have also written a version of the code for the newer M5Stack Core2: M5Core2 Heart Rate Display.

01 Jun 2021 11:54am GMT

30 May 2021


Philip Van Hoof: Slander and defamation to the correctional court

I think that moving the handling of slander and defamation away from its current court to the correctional courts is a good thing.

I understand it, that the fact that, the court of assizes was the court that handled this, acted as a kind of buffer to safeguard the freedom of expression. The commas are there because they are moments to think about.

But that is, and was, really an abuse of a kind of government incompetence (or rather, court incompetence): namely, that the court of assizes had too little capacity to judge this kind of offence and, when (and only when) it crossed the line, to convict.

In practice it meant that almost anything was possible. Whereas that is no longer a good idea now. Not everything is allowed. Your freedom to, for example, tell a lie is bounded where that freedom causes harm. And that is the case even when what you say is not a lie, and especially when what is said has no added value for society.

So it is then up to a judge to stop you. Here in Belgium we live under Belgian law. Not under your absolute freedom. Because our modest freedom depends on our Belgian law. In other words, however controversial it may be (and let me be clear that I know very well it is controversial), your imagined right to harm another is not always equal to your right to express it. Not here in Belgium. Not always. Almost always, but not always.

Context matters (so humor can rightly go very far). But not always.

I am quite worried about this, though. But that doesn't mean this may not or cannot be dealt with. It hits hard at the philosophy of the Enlightenment, which says that almost everything must be allowed to be said. But to be able to say everything we must also, certainly with the rise of new media, accept that an abundance of lies will irreparably damage the reputation of honourable people. So the Enlightenment as a philosophy was sometimes naive (although it held that this normally corrects itself).

Still, we must not forget the foundation of free expression. In my opinion.

30 May 2021 11:23pm GMT

28 May 2021


Xavier Mertens: [SANS ISC] Malicious PowerShell Hosted on script.google.com

I published the following diary on isc.sans.edu: "Malicious PowerShell Hosted on script.google.com":

Google has an incredible portfolio of services. Besides the classic ones, there are less known services and… they could be very useful for attackers too. One of them is Google Apps Script. Google describes it like this:

"Apps Script is a rapid application development platform that makes it fast and easy to create business applications that integrate with G Suite."

Just a quick introduction to this development platform to give you an idea about the capabilities. If, as the description says, it is used to extend the G Suite, it can of course perform basic actions like… hosting and delivering some (malicious) content… [Read more]

The post [SANS ISC] Malicious PowerShell Hosted on script.google.com appeared first on /dev/random.

28 May 2021 10:03am GMT

27 May 2021


Wouter Verhelst: SReview and pandemics

The pandemic was a bit of a mess for most FLOSS conferences. The two conferences that I help organize -- FOSDEM and DebConf -- are no exception. In both conferences, I do essentially the same work: as a member of both video teams, I manage the postprocessing of the video recordings of all the talks that happened at the respective conference(s). I do this by way of SReview, the online video review and transcode system that I wrote, which essentially crowdsources the manual work that needs to be done, and automates as much as possible of the workflow.

The original version of SReview consisted of a database, a (very basic) Mojolicious-based webinterface, and a bunch of perl scripts which would build and execute ffmpeg command lines using string interpolation. As a quick hack that I needed to get working while writing it in my spare time in half a year, that approach was workable and resulted in successful postprocessing after FOSDEM 2017, and a significant improvement in time from the previous years. However, I did not end development with that, and since then I've replaced the string interpolation by an object oriented API for generating ffmpeg command lines, as well as modularized the webinterface. Additionally, I've had help reworking the user interface into a system that is somewhat easier to use than my original interface, and have slowly but surely added more features to the system so as to make it more flexible, as well as support more types of environments for the system to run in.

One of the major issues that still remains with SReview is that the administrator's interface is pretty terrible. I had been planning on revamping that for 2020, but then massive amounts of people got sick, travel was banned, and both of the conferences that I work on were converted to online-only conferences. These have some very specific requirements; e.g., both conferences allowed people to upload a prerecorded version of their talk, rather than doing the talk live; since preprocessing a video is, technically, very similar to postprocessing it, I adapted SReview to allow people to upload a video file that it would then validate (in terms of length, codec, and apparent resolution). This seems easy to do, but I decided to implement this functionality so that it would also allow future use at in-person conferences, where occasionally a speaker requests modifications to the video file that SReview is unable to make. This made it marginally more involved, but it at least means that a feature which I had planned to implement some years down the line is now already implemented. The new feature works quite well, and I'm happy I've implemented it in the way that I have.

In order for the "upload" processing and the "post-event" processing to not be confused, however, I decided to import the conference schedules twice: once as the conference itself, and once as a shadow version of that conference for the prerecordings. That way, I could track the progress through the system of the prerecording completely separately from the progress of the postprocessing of the video (which adds opening/closing credits, and transcodes to multiple variants of the same video). Schedule parsing was something that had not been implemented in a generic way yet, however; since that made doubling the schedule in that way rather complex, I decided to bite the bullet and (finally) implement schedule parsing in a generic way. Currently, schedule parsers exist for two formats (Pentabarf XML and the Wafer variant of that same format which is almost, but not quite, entirely the same). The API for that is quite flexible, and I'm happy with the way things have been implemented there. I've also implemented a set of "virtual" parsers, which allow mangling the schedule in various ways (by either filtering out talks that we don't want, or by generating the shadow version of the schedule that I talked about earlier).

While the SReview settings have reasonable defaults, occasionally the output of SReview is not entirely acceptable, due to more complicated matters that then result in encoding artifacts. As a result, the DebConf video team has been doing a final review step, completely outside of SReview, to ensure that such encoding artifacts don't exist. That seemed suboptimal, so recently I've been working on integrating that into SReview as well. First tests have been run, and seem to be acceptable, but there's still a few loose ends to be finalized.

As part of this, I've also reworked the way comments could be entered into the system. Previously the presence of a comment would signal that the video has some problems that an administrator needed to look at. Unfortunately, that was causing some confusion, with some people even thinking it's a good place to enter a "thank you for your work" style of comment... which it obviously isn't. Turning it into a "comment log" system instead fixes that, and also allows for better two-way communication between administrators and reviewers. Hopefully that'll improve things in that area as well.

Finally, the audio normalization in SReview -- for which I've long used bs1770gain -- is having problems. First of all, bs1770gain will sometimes alter the timing of the video or audio file that it's passed, which is very problematic if I want to process it further. There is an ffmpeg loudnorm filter which implements the same algorithm, so that should make things easier to use. Secondly, the author of bs1770gain is a strange character that I'd rather not be involved with. Before I knew about the loudnorm filter I didn't really have a choice, but now I can just rip bs1770gain out and replace it by the loudnorm filter. That will fix various other bugs in SReview, too, because SReview relies on behaviour that isn't actually there (but which I didn't know at the time when I wrote it).

All in all, the past year-and-a-bit has seen a lot of development for SReview, with multiple features being added and a number of long-standing problems being fixed.

Now if only the pandemic would subside, allowing the whole "let's do everything online only" wave to cool down a bit, so that I can finally make time to implement the admin interface...

27 May 2021 3:14pm GMT

Wouter Verhelst: Freenode

Bye, Freenode

I have been on Freenode for about 20 years, since my earliest involvement with Debian in about 2001. When Debian moved to OFTC for its IRC presence way back in 2006, I hung around on Freenode somewhat since FOSDEM's IRC channels were still there, as well as for a number of other channels that I was on at the time (not anymore though).

This is now over and done with. What's happening with Freenode is a shitstorm -- one that could easily have been fixed if one particular person were to step down a few days ago, but by now is a lost cause.

At any rate, I'm now lurking, mostly for FOSDEM channels, on libera.chat, under my usual nick, as well as on OFTC.

27 May 2021 9:23am GMT

Toshaan Bharvani: Starting the LibreBMC project

Today is the first meeting of the LibreBMC project. This project is an initiative of the OpenPOWER Foundation. We are starting this project to build an open source hardware BMC. It will use the POWER ISA as a soft-core running on an FPGA, like the Lattice ECP5 or the Xilinx Artix7. Our goal is to design a board that is OCP DC-SCM compatible and that can easily be built on current FPGAs. While we are starting with these two FPGAs in mind and will be focussing on one or two soft-cores, we want the design to be modular, so that it can support different soft-cores and run at least LSB and OpenBMC. You can read more in the OPF announcement: https://openpowerfoundation.org/openpower-foundation-announces-librebmc-a-power-based-fully-open-source-bmc/

I am very happy to be part of this initiative and will be contributing as much as I can. This initiative is from the OpenPOWER Foundation; however, it runs as a public SIG, meaning anybody can help out. If you want to contribute, participate or observe, feel free to follow any updates on https://discuss.openpower.foundation/c/sig/librebmc/11. We also have a project page available, where we'll update the git repo links, discussion links and any other information: https://openpower.foundation/groups/librebmc/

A small anecdote, which isn't mentioned in many articles, is how the name came to be. Initial discussions started at the beginning of this year, 2021, when we, meaning the OpenPOWER Foundation steering committee, were gauging interest and looking for founding members. In our discussions it quickly became very confusing, as we were talking about the BMC, using the term OpenBMC, and then switched to using OpenBMC hardware, or OpenBMC software, which led to more confusion. I started using the name LibreBMC for the hardware project we wanted to start and OpenBMC for the existing BMC software stack. It was quickly adopted by the rest of the team and made it out into the wild. While we, engineers, often struggle to come up with good names for our projects, this one was easy, inspired by LibreOffice, Libre-SOC, and many other projects that are open/libre, which is what we're also doing.

27 May 2021 12:00am GMT