28 May 2020

feedPlanet Ubuntu

Podcast Ubuntu Portugal: Ep 92 – taciturnidade

In episode 92 we find out what Tiago has been up to, plus news from the UBports world, including the Volla Phone and the PineTab, and we also cover the MS Build Conference, which was full of news related to WSL2.

You know the drill: listen, subscribe and share!


This episode was produced by Diogo Constantino and Tiago Carrondo and edited by Alexandre Carrapiço, the Senhor Podcast.

You can support the podcast by using the Humble Bundle affiliate links: when you use those links to make a purchase, part of what you pay goes to Podcast Ubuntu Portugal.
You can get everything for 15 dollars, or different parts of the bundle depending on whether you pay 1 or 8.
We think this is worth well over 15 dollars, so if you can, pay a little more, since you have the option to pay as much as you want.

If you are interested in other bundles not listed in the show notes, use the link https://www.humblebundle.com/?partner=PUP and you will also be supporting us.

This week we recommend the bundle: Learn You Some Python by No Starch Press, via the affiliate link:


Attribution and licenses

The theme music is: "Won't see it comin' (Feat Aequality & N'sorte d'autruche)" by Alpha Hydrae, licensed under the [CC0 1.0 Universal License](https://creativecommons.org/publicdomain/zero/1.0/).

This episode and the image used are licensed under the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) license, whose full text can be read here. We are open to licensing for other kinds of use; contact us for validation and authorisation.

28 May 2020 8:38pm GMT

Didier Roche: ZFS focus on Ubuntu 20.04 LTS: ZSys general principle on state management

ZFS focus on Ubuntu 20.04 LTS: ZSys state management

After our previous general presentation of ZSys, it's time to dive deep into one of its main features: state management!

Why call this "state" and not simply ZFS snapshots?

A little technical detour first, as this question will necessarily arise, especially from those familiar with ZFS concepts.

What is a state, and what's the difference with snapshots?

We purposely chose the "state" terminology to prevent system administrators, and in general all those familiar with ZFS, from confusing it with snapshot datasets.

Basically, a state is a set of datasets, all frozen in time (apart from the current state), which grouped together form a system "state" that you can choose to reboot into.

Those groups of datasets can be made of snapshot datasets (read-only), which is what most advanced ZFS users will expect, but they can also be filesystem datasets (read-write), cloned from the current state's datasets. You can boot into any of those.

Why not rely only on snapshots?

This choice was made to go beyond what a simple zfs rollback can do. The limitation of rolling back is that the current filesystem itself is reverted (remember, it's read-write, while snapshots are read-only), and any intermediate snapshot datasets are destroyed. So neither the current state nor any intermediate state could be recovered after a revert. We wanted to avoid being blocked by those limitations, and this is why navigating between ZSys states uses ZFS cloning (creating read-write filesystem datasets from all the read-only snapshot datasets composing a particular state). Consequently, we navigate between those ZFS filesystem datasets. Once a reverted state has booted correctly, we promote it so that the snapshots migrate over to it.
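To make the difference concrete, here is a sketch using plain zfs commands (the pool, dataset, and snapshot names are made up for the example; this is the general ZFS mechanism, not ZSys's exact internal sequence):

```shell
# Destructive: roll the root dataset back to a snapshot. With -r, any
# snapshots taken after that point are destroyed and cannot be recovered.
zfs rollback -r rpool/ROOT/ubuntu@autozsys_oldstate

# Non-destructive alternative (the approach described above): clone the
# snapshot into a new read-write filesystem dataset, leaving the current
# datasets and all intermediate snapshots intact.
zfs clone rpool/ROOT/ubuntu@autozsys_oldstate rpool/ROOT/ubuntu_reverted

# After booting successfully on the clone, promote it so that the
# snapshots it depends on migrate over to it.
zfs promote rpool/ROOT/ubuntu_reverted
```

Because the clone is a separate dataset, both the old and the new state remain bootable, which is what makes the revert reversible.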

Creating this non-destructive revert action allows crazy dreams: you can imagine reverting a revert! It also unlocks system bisection to find when a regression started!

Another consideration to take into account is that we separate system states from user states. Each system state has matching user states (one for each user on the system). However, we can have finer-grained user states (this is where most of your interesting work is done, after all!), while system states are limited to what is needed for a reliable boot and a working machine.

In addition, we associate properties with each system state (via ZFS user properties). For instance, we store the mountpoint properties as they were when the state save was requested, even if you change them afterwards on the parent dataset. Since we store these as user properties, we can later reapply the original filesystem dataset properties, ensuring better fidelity. Similarly, we will reboot with the exact same kernel you booted with when you saved the state, even if it wasn't the latest available version and even if it has since been purged from the current state.
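ZFS user properties (any property whose name contains a colon, in a reverse-DNS style namespace) are the generic mechanism behind this. As a sketch, with a made-up property name and dataset (not the actual names ZSys uses internally):

```shell
# Store an arbitrary user property on a dataset at save time...
zfs set org.example:saved-mountpoint=/ rpool/ROOT/ubuntu

# ...and read it back later, e.g. to reapply the original value even
# if the dataset's real mountpoint property changed in the meantime.
zfs get -H -o value org.example:saved-mountpoint rpool/ROOT/ubuntu
```

User properties are inherited and snapshotted like any other property, which is what lets a saved state carry its own metadata around.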

It is for all those reasons that we deliberately used the term "state" instead of snapshots. A state can be based on a collection of dataset snapshots (not just a single one) sharing the same name. However, not all states are (after a revert, for instance). We hope this sheds some light for advanced ZFS users on those particular decisions, and on what ZSys provides on top of ZFS's awesome core capabilities.

Reverting to a previous system state

And here's the meat of the matter! Managing states can be fun, but managing for the sake of managing has no real point! Let's exercise the whole goal of this state concept now: reverting!

Our GRUB bootloader (reachable by pressing [Escape] or [Shift] on boot) lets you revert system (and optionally user) states on demand! A "History" entry offers to boot into an older state of your system.

History option available

Entering it will show the different system states you can revert to:

History with ZSys

You can see multiple timestamps there, and also names! What are those?

Once you select the state you want to revert to, you are offered multiple options:

Revert option in history entry

Reverting the system will, as explained previously, restore (if it's a ZSys state save) the exact metadata on every dataset constituting the state. In addition, we will boot with the exact same kernel that last booted successfully on that state.

You can also revert only the system datasets. In that case the whole system is reverted, but not the non-persistent datasets (we have none by default on the desktop, but we will detail that in a later post), and the user data is kept current. This means your user data content will be the very latest data you created, but you take the risk that an older version of some software is not forward-compatible with your user data (a previous version of software A may not be able to read the newer format of its data). A technical note: if you select this option on a state made of filesystem datasets (basically, no snapshot), then user data will be considered "current".

The user state will then be considered shared between two system states, which has some implications in case of removal, as you will see below.

The second main option, "Revert system and user data", reverts the whole system, including user data. It means that the home directories of all users on the system will be reverted to the time this system state was last saved. You will still have ways to manually access the newly created content (see just below), but this default ensures your user data stays robust and compatible with the system software that is running.

Finally, there are the recovery options, which are similar to Ubuntu's "Advanced options" recovery mode, for each of these states.

And as we illustrated previously, this works out of the box, including reverting to a previous Ubuntu release with high fidelity, all without losing newer states (as this is not a destructive action)!

Having access to previous user states

We plan, in future cycles, to integrate this more deeply with the system (login manager, file manager…). For now, outside of reverting the whole system (alongside users) to a previous state, there are other ways to access previous revisions of files. In your dataset's mountpoint, you can navigate to the .zfs hidden directory (which is really hidden: neither showing all hidden directories nor ls -a will display it).

Typically, you can press [Ctrl]+L in Files and append /.zfs to the path. You will see a snapshot directory, with one folder for each snapshot.

Snapshot .zfs directory

You can then navigate and restore any file by copying and pasting it from there (those directories are read-only).

Restore user file from .zfs directory
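The same restore can be done from a terminal with plain cp (the snapshot and file names below are hypothetical examples):

```shell
# List the available snapshots of the dataset backing your home directory.
ls ~/.zfs/snapshot/

# Copy a file back from a frozen state into the live filesystem; the
# snapshot directories themselves are read-only, so this cannot damage
# the saved state.
cp ~/.zfs/snapshot/autozsys_k1mb3l/Documents/report.txt ~/Documents/
```

This works for any ZFS filesystem dataset that has snapshots, not just ZSys-managed ones.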

Note that with this method you can only access history user states made of snapshot datasets that migrated onto the current state. Accessing filesystem-based states, or snapshots of those filesystem-based states, involves manually mounting the datasets and navigating there. This is why we plan to make all of that easier in the future, letting you navigate based on time. :)

Boot success and failure

You may tell us that pressing [Escape] or [Shift] on boot to bring up GRUB isn't the most discoverable feature, and you'd be right.

However, if a boot fails, the next boot will show GRUB by default, and those "History entries" will be available to you! We define a successful boot as systemd having reached the default.target target. On the desktop, default.target is a symlink to graphical.target:

$ systemctl status default.target
● graphical.target - Graphical Interface
     Loaded: loaded (/lib/systemd/system/graphical.target; static; vendor preset
     Active: active since Thu 2020-04-30 08:39:17 CEST; 9h ago
       Docs: man:systemd.special(7)

avril 30 08:39:17 casanier systemd[1]: Reached target Graphical Interface.

This is how we set the LastUsed timestamp, record which kernel was used, and optionally rebuild the bootloader menu if needed (basically, if you reverted and the current state is different):

$ systemctl status zsys-commit.service 
● zsys-commit.service - Mark current ZSYS boot as successful
     Loaded: loaded (/lib/systemd/system/zsys-commit.service; enabled; vendor preset: enabled)
     Active: active (exited) since Thu 2020-04-30 08:39:23 CEST; 9h ago
    Process: 3249 ExecStart=/sbin/zsysctl boot commit (code=exited, status=0/SUCCESS)
   Main PID: 3249 (code=exited, status=0/SUCCESS)

avril 30 08:39:17 casanier systemd[1]: Starting Mark current ZSYS boot as successful...
avril 30 08:39:18 casanier zsysctl[3249]: INFO Updating GRUB menu
avril 30 08:39:23 casanier systemd[1]: Finished Mark current ZSYS boot as successful.

Editable grub menu

We know advanced users like to press "e" to edit the GRUB menu and tweak their settings for a given boot at their convenience.

We took great care to make it easily editable for them: our functions take a set of arguments with readable names.

Pressing "e" on a history state entry:

Edit one history entry

Then, on each revert entry, you can edit it as well and replace the variables if you prefer to edit them directly here:

Edit one revert entry

The use of functions with readable names should help this audience. In addition, we didn't use those functions for normal boots or advanced options, so as not to confuse users who will see entries without any variables.

As a reminder, these are the changes that allowed us to reduce grub.cfg from 7329 lines for 100 state saves (which grub really didn't like) to 728, with each new history entry adding 3 lines instead of 50, and loading the history entries went from 80 seconds to being instantaneous!

This was quite dense! We hope we made it easy from a pure user perspective. In the next article, we will detail how and when states are saved, how to manage them, and more nitty-gritty details about ZSys states! See you there :)

Meanwhile, join the discussion via the dedicated Ubuntu discourse thread.

28 May 2020 1:00pm GMT

27 May 2020

feedPlanet Ubuntu

Ubuntu Podcast from the UK LoCo: S13E08.5 – When a broken clock chimes

We announce the Ubuntu Podcast crowd-funder on Patreon and why, after 13 years, we are seeking your support.

It's Season 13 Episode 8.5 of the Ubuntu Podcast! Mark Johnson and Martin Wimpress are connected and speaking to your brain.

In this mini-show:

That's all for this week! You can listen to the Ubuntu Podcast back catalogue on YouTube. We are running a crowd funder to cover our audio production costs on Patreon. If there's a topic you'd like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Toot us or Comment on our Facebook page or comment on our sub-Reddit.

27 May 2020 11:30pm GMT

Kees Cook: security things in Linux v5.5

Previously: v5.4.

I got a bit behind on this blog post series! Let's get caught up. Here are a bunch of security things I found interesting in the Linux kernel v5.5 release:

restrict perf_event_open() from LSM
Given the recurring flaws in the perf subsystem, there has been a strong desire to be able to entirely disable the interface. While the kernel.perf_event_paranoid sysctl knob has existed for a while, attempts to extend its control to "block all perf_event_open() calls" have failed in the past. Distribution kernels have carried the rejected sysctl patch for many years, but now Joel Fernandes has implemented a solution that was deemed acceptable: instead of extending the sysctl, add LSM hooks so that LSMs (e.g. SELinux, Apparmor, etc) can make these choices as part of their overall system policy.

generic fast full refcount_t
Will Deacon took the recent refcount_t hardening work for both x86 and arm64 and distilled the implementations into a single architecture-agnostic C version. The result was almost as fast as the x86 assembly version, but it covered more cases (e.g. increment-from-zero), and is now available by default for all architectures. (There is no longer any Kconfig associated with refcount_t; the use of the primitive provides full coverage.)

linker script cleanup for exception tables
When Rick Edgecombe presented his work on building Execute-Only memory under a hypervisor, he noted a region of memory that the kernel was attempting to read directly (instead of execute). He rearranged things for his x86-only patch series to work around the issue. Since I'd just been working in this area, I realized the root cause of this problem was the location of the exception table (which is strictly a lookup table and is never executed) and built a fix for the issue and applied it to all architectures, since it turns out the exception tables for almost all architectures are just a data table. Hopefully this will help clear the path for more Execute-Only memory work on all architectures. In the process of this, I also updated the section fill bytes on x86 to be a trap (0xCC, int3), instead of a NOP instruction so functions would need to be targeted more precisely by attacks.

KASLR for 32-bit PowerPC
Joining many other architectures, Jason Yan added kernel text base-address offset randomization (KASLR) to 32-bit PowerPC.

seccomp for RISC-V
After a bit of a long road, David Abdurachmanov has added seccomp support to the RISC-V architecture. The series uncovered some more corner cases in the seccomp self-test code, which is always nice since then we get to make it more robust for the future!

seccomp USER_NOTIF continuation
When the seccomp SECCOMP_RET_USER_NOTIF interface was added, it seemed like it would only be used in very limited conditions, so the idea of needing to handle "normal" requests didn't seem very onerous. However, since then, it has become clear that the overhead of a monitor process needing to perform lots of "normal" open() calls on behalf of the monitored process started to look more and more slow and fragile. To deal with this, it became clear that there needed to be a way for the USER_NOTIF interface to indicate that seccomp should just continue as normal and allow the syscall without any special handling. Christian Brauner implemented SECCOMP_USER_NOTIF_FLAG_CONTINUE to get this done. It comes with a bit of a disclaimer due to the chance that monitors may use it in places where ToCToU is a risk, and for possible conflicts with SECCOMP_RET_TRACE. But overall, this is a net win for container monitoring tools.

Some EFI systems provide a Random Number Generator interface, which is useful for gaining some entropy in the kernel during very early boot. The arm64 boot stub has been using this for a while now, but Dominik Brodowski has now added support for x86 to do the same. This entropy is useful for kernel subsystems performing very early initialization where random numbers are needed (like randomizing aspects of the SLUB memory allocator).

As has been enabled on many other architectures, Dmitry Korotin got MIPS building with CONFIG_FORTIFY_SOURCE, so compile-time (and some run-time) buffer overflows during calls to the memcpy() and strcpy() families of functions will be detected.

limit copy_{to,from}_user() size to INT_MAX
As done for VFS, vsnprintf(), and strscpy(), I went ahead and limited the size of copy_to_user() and copy_from_user() calls to INT_MAX in order to catch any weird overflows in size calculations.

That's it for v5.5! Let me know if there's anything else that I should call out here. Next up: Linux v5.6.

© 2020, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

27 May 2020 8:04pm GMT

Ubuntu Blog: Ubuntu on WSL 2 Is Generally Available

Today Microsoft announced the general availability of Windows Subsystem for Linux 2 in the Windows 10 May 2020 update.

WSL 2 is based on a new architecture that provides full Linux binary application compatibility and improved performance. WSL 2 is powered by a real Linux kernel in a lightweight virtual machine that boots in under two seconds. WSL 2 is the best way to experience Ubuntu on WSL.

Ubuntu was the first Linux distribution for WSL and remains the most popular choice of WSL users. Ubuntu 20.04 LTS for WSL was released simultaneously with the general availability of Ubuntu 20.04 LTS in April.

Canonical supports Ubuntu on WSL in organizations through Ubuntu Advantage which includes Landscape for managing Ubuntu on WSL deployments, extended security, and e-mail and phone support.

Ubuntu is ready for WSL 2. All versions of Ubuntu can be upgraded to WSL 2. The latest version of Ubuntu, Ubuntu 20.04 LTS, can be installed on WSL directly from the Microsoft Store. For other versions of Ubuntu for WSL and other ways to install WSL see the WSL page on the Ubuntu Wiki.

Ubuntu on WSL supports powerful developer and system administrator tools, including microk8s, the simplest way to deploy a single node Kubernetes cluster for development and DevOps.

See our YouTube page for more WSL-related videos from WSLConf 2020.

Enable WSL 2

To enable WSL 2 on Windows 10 May 2020 update (build 19041 or higher) run the following in PowerShell as Administrator:

dism.exe /online /enable-feature /featurename:VirtualMachinePlatform /all /norestart

"WSL2 requires an update to its kernel component"

Some users upgrading from Insider builds of Windows 10 will encounter an error running the commands below. You will be directed to manually download and update the Linux kernel. Visit aka.ms/wsl2kernel to download a .msi package, install it, and then try again.

Convert Ubuntu on WSL 1 to WSL 2

To convert an existing WSL 1 distro to WSL 2 run the following in PowerShell:

wsl.exe --set-version Ubuntu 2

Set WSL 2 as the default

To set WSL 2 as the default for installing WSL distributions in the future run the following in PowerShell:

wsl.exe --set-default-version 2

Upgrade to Ubuntu 20.04 LTS on WSL

To upgrade to the latest version of Ubuntu on WSL run the following in Ubuntu:

sudo do-release-upgrade -d

Windows Terminal 1.0

The new open source Windows Terminal recently reached 1.0 and makes an excellent companion to Ubuntu on WSL 2. Windows Terminal can be downloaded from the Microsoft Store or GitHub and can be extensively customized.

Community Help with Ubuntu on WSL

Community support is available for users:

Enterprise Support for Ubuntu on WSL

Ubuntu on WSL is fully supported by Canonical for enterprise and organizations through Ubuntu Advantage.

For more information on Ubuntu on WSL, go to ubuntu.com/wsl.

To read more about the new features coming to WSL 2 announced at Microsoft Build, see our blog post.

27 May 2020 7:04pm GMT

Ubuntu Blog: Hybrid cloud and multi-cloud: what is the difference?

Hybrid cloud and multi-cloud are two distinct terms that are often confused. While the hybrid cloud is a model for extending private cloud infrastructure with one of the existing public clouds, multi-cloud refers to an environment where multiple clouds are used at the same time, regardless of their type. Thus, while the hybrid cloud represents a very specific use case, multi-cloud is a more generic term that usually better reflects reality.

Although both architectures are relatively simple to implement from the infrastructure point of view, the more important question is how to orchestrate workloads in such environments. In the following blog, I describe the differences between the hybrid cloud and the multi-cloud, and discuss the advantages of orchestrating workloads in a multi-cloud environment with Juju.

Understanding the difference between the hybrid cloud and the multi-cloud

Let's assume that you manage a transport company. The company owns ten cars which is sufficient in most cases. However, there are days when you really need more than ten. So how do you handle your customers during those traffic-heavy periods? Do you buy additional cars? No, you rent them instead. You rely on an external supplier who can lend you cars on-demand. As a result, you can continue to deliver your services. This is almost exactly how the hybrid cloud model works.

However, the reality is slightly different. First of all, companies never rely on a single supplier. You need at least two suppliers to ensure business continuity in case the first one cannot provide their services. Moreover, not all cars are the same. What if you need a really big one which none of your existing suppliers can provide? You would probably rent it from another supplier, even if this only happens once a year, wouldn't you? And finally, it is quite possible that even though you are in need of buses, due to lack of experience with them you are not really willing to own and maintain them, even if in the long term that would have been a more cost-efficient option.

The second case represents the multi-cloud model and is closer to what we observe in modern organisations. While the hybrid cloud concept was developed as a solution for offloading a private cloud during computationally-intensive periods, orchestrating workloads in multi-cloud environments is what most organisations are really struggling with nowadays. This is because it is the multi-cloud, not the hybrid cloud, that is part of their daily reality.

Hybrid cloud and multi-cloud: architectural differences

A hybrid cloud is an IT infrastructure that consists of a private cloud and one of the available public clouds. Both are connected with a persistent virtual private network (VPN). Both use a single identity management (IdM) system and unified logging, monitoring, and alerting (LMA) stack. Even their internal networks are integrated, so in fact, the public cloud becomes an extension of the private cloud. As a result, both behave as a single environment which is fully transparent from the workloads' point of view.

The goal behind such an architecture is to use the public cloud only if the private cloud can no longer handle workloads. As the private cloud is always a cheaper option, an orchestration platform always launches workloads in the private cloud first. However, once the resources of the private cloud become exhausted, an orchestration platform moves some of the workloads to the public cloud and starts using it by default when launching new workloads. Once the peak period is over, the workloads are moved back to the private cloud, which becomes the default platform once again.
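The placement logic described above can be caricatured in a few lines of shell (a deliberately simplified sketch; real orchestrators track far more than a single instance count, and the function name here is made up for illustration):

```shell
# Decide where a new workload goes: prefer the cheaper private cloud
# while it still has free capacity, and overflow to the public cloud
# once it is exhausted.
place_workload() {
    private_used=$1   # instances currently running on the private cloud
    private_cap=$2    # total private cloud capacity
    if [ "$private_used" -lt "$private_cap" ]; then
        echo "private"
    else
        echo "public"
    fi
}

place_workload 7 10    # prints "private": spare capacity remains
place_workload 10 10   # prints "public": private cloud is full
```

The reverse migration at the end of a peak period is the same comparison run in the other direction: once private capacity frees up, new (and moved-back) workloads land there again.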

In turn, multi-cloud simply refers to using multiple clouds at the same time, regardless of their type. There is no dedicated infrastructure that facilitates it. There is no dedicated link, single IdM system, unified LMA stack or an integrated network. Just instead of a single cloud, an organisation uses at least two clouds at the same time.

The goal behind the multi-cloud approach is to reduce the risk of relying on a single cloud service provider. Workloads can be distributed across multiple clouds which improves independence and helps to avoid 'vendor lock-in'. Furthermore, as the multi-cloud is usually a geographically-distributed environment, this helps to improve high availability of applications and their resiliency against failures. Finally, the multi-cloud approach combines the best advantages of various cloud platforms. For example, running databases on virtual machines (VMs) while hosting frontend applications inside of containers. Thus, workload orchestration remains the most prominent challenge in this case.

Orchestrating workloads in a multi-cloud environment

When running workloads in a multi-cloud environment, having a tool that can orchestrate them becomes essential. This tool has to be able to provision cloud resources (VMs, containers, block storage, etc.), deploy applications on top of them, and configure those applications so that they can communicate with each other. For example, frontend applications have to know the IP address of the database whose data they consume. Moreover, as the resources are distributed across various cloud types, the entire platform has to be substrate-agnostic, and the whole process needs to be fully transparent from the end user's perspective.

One of the tools providing this kind of functionality is Juju. It supports leading public cloud providers as well as most of those used for private cloud implementations: VMware vSphere, OpenStack, Kubernetes, etc. Juju allows modelling and deployment of distributed applications, providing multi-cloud software-as-a-service (SaaS) experience. As a result, users can focus on shaping their applications, while the entire complexity behind the multi-cloud setup becomes fully abstracted.
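As a sketch of what this looks like in practice (the model name and charm choices are illustrative, and this assumes a Juju controller has already been bootstrapped on your cloud of choice):

```shell
# Create a workspace for the deployment.
juju add-model demo

# Deploy a database and a frontend application from charms.
juju deploy mysql
juju deploy wordpress

# Relate them: Juju exchanges the connection details (such as the
# database address and credentials) between the applications, so
# neither has to be configured by hand.
juju add-relation wordpress mysql

# Watch the deployment converge.
juju status
```

The same commands work unchanged whether the model lives on a public cloud, OpenStack, VMware vSphere, or Kubernetes, which is the substrate-agnosticism discussed above.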


Although looking similar at first sight, hybrid cloud and multi-cloud are two different concepts. While hybrid clouds focus on offloading private clouds, the multi-cloud approach attempts to address the challenges associated with using multiple clouds at the same time. The biggest challenge with multi-cloud is not the infrastructure setup. It's workload orchestration. Juju solves this problem by providing a multi-cloud SaaS experience.

Learn more

Canonical provides all the necessary components and services for building a modern private cloud infrastructure. Those include OpenStack and Kubernetes, as well as Juju, software for workload orchestration in multi-cloud environments.

Get in touch with us or watch our webinar - "Open source infrastructure: from bare metal to microservices" to learn more.

27 May 2020 8:00am GMT

26 May 2020

feedPlanet Ubuntu

Stuart Langridge: Browsers are not rendering engines

An interesting writeup by Brian Kardell on web engine diversity and ecosystem health, in which he puts forward a thesis that we currently have the most healthy and open web ecosystem ever, because we've got three major rendering engines (WebKit, Blink, and Gecko), they're all cross-platform, and they're all open source. This is, I think, true. Brian's argument is that this paints a better picture of the web than a lot of the doom-saying we get about how there are only a few large companies in control of the web. This is… well, I think there's truth to both sides of that. Brian's right, and what he says is often overlooked. But I don't think it's the whole story.

You see, diversity of rendering engines isn't actually in itself the point. What's really important is diversity of influence: who has the ability to make decisions which shape the web in particular ways, and do they make those decisions for good reasons or not so good? Historically, when each company had one browser, and each browser had its own rendering engine, these three layers were good proxies for one another: if one company's browser achieved a lot of dominance, then that automatically meant dominance for that browser's rendering engine, and also for that browser's creator. Each was isolated; a separate codebase with separate developers and separate strategic priorities. Now, though, as Brian says, that's not the case. Basically every device that can see the web and isn't a desktop computer and isn't explicitly running Chrome is a WebKit browser; it's not just "iOS Safari's engine". A whole bunch of long-tail browsers are essentially a rethemed Chrome and thus Blink: Brave and Edge are high up among them.

However, engines being open source doesn't change who can influence the direction; it just allows others to contribute to the implementation. Pick something uncontroversial which seems like a good idea: say, AVIF image format support, which at time of writing (May 2020) has no support in browsers yet. (Firefox has an in-progress implementation.) I don't think anyone particularly objects to this format; it's just not at the top of anyone's list yet. So, if you were mad keen on AVIF support being in browsers everywhere, then you're in a really good position to make that happen right now, and this is exactly the benefit of having an open ecosystem. You could build that support for Gecko, WebKit, and Blink, contribute it upstream, and (assuming you didn't do anything weird), it'd get accepted. If you can't build that yourself then you ring up a firm, such as Igalia, whose raison d'etre is doing exactly this sort of thing and they write it for you in exchange for payment of some kind. Hooray! We've basically never been in this position before: currently, for the first time in the history of the web, a dedicated outsider can contribute to essentially every browser available. How good is that? Very good, is how good it is.

Obviously, this only applies to things that everyone agrees on. If you show up with a patchset that provides support for the <stuart> element, you will be told: go away and get this standardised first. And that's absolutely correct.

But it doesn't let you influence the strategic direction, and this is where the notions of diversity in rendering engines and diversity in influence begins to break down. If you show up to the Blink repository with a patchset that wires an adblocker directly into the rendering engine, it is, frankly, not gonna show up in Chrome. If you go to WebKit with a complete implementation of service worker support, or web payments, it's not gonna show up in iOS Safari. The companies who make the browsers maintain private forks of the open codebase, into which they add proprietary things and from which they remove open source things they don't want. It's not actually clear to me whether such changes would even be accepted into the open source codebases or whether they'd be blocked by the companies who are the primary sponsors of those open source codebases, but leave that to one side. The key point here is that the open ecosystem is only actually open to non-controversial change. The ability to make, or to refuse, controversial changes is reserved to the major browser vendors alone: they can make changes and don't have to ask your permission, and you're not in the same position. And sure, that's how the world works, and there's an awful lot of ingratitude out there from people who demand that large companies dedicate billions of pounds to a project and then have limited say over what it's spent on, which is pretty galling from time to time.

Brian references Jeremy Keith's Unity in which Jeremy says: "But then I think of situations where complete unity isn't necessarily a good thing. Take political systems, for example. If you have hundreds of different political parties, that's not ideal. But if you only have one political party, that's very bad indeed!" This is true, but again the nuance is different, because what this is about is influence. If one party wins a large majority, then it doesn't matter whether they're opposed by one other party or fifty, because they don't have to listen to the opposition. (And Jeremy makes this point.) This was the problem with Internet Explorer: it was dominant enough that MS didn't have to give a damn what anyone else thought, and so they didn't. Now, this problem does eventually correct itself in both browsers and political systems, but it takes an awfully long time; a dominant thing has a lot of inertia, and explaining to a peasant in 250AD that the Roman Empire will go away eventually is about as useful as explaining to a web developer in 2000AD that CSS is coming soon, i.e., cold comfort at best and double-plus-frustrating at worst.

So, a qualified hooray, I suppose. I concur with Brian that "things are better and healthier because we continue to find better ways to work together. And when we do, everyone does better." There is a bunch of stuff that is uncontroversial, and does make the web better, and it is wonderful that we're not limited to begging browser vendors to care about it to get it. But I think that definition excludes a bunch of "things" that we're not allowed, for reasons we can only speculate about.

26 May 2020 1:52pm GMT

Didier Roche: ZFS focus on Ubuntu 20.04 LTS: ZSys general presentation

ZFS focus on Ubuntu 20.04 LTS: ZSys general presentation

In our previous blog post, we presented some enhancements and differences between Ubuntu 19.10 and Ubuntu 20.04 LTS in terms of ZFS support. We only alluded to ZSys, our ZFS system helper, which is now installed by default when selecting a ZFS-on-root installation on the Ubuntu Desktop.

It's now time to shed some light on it and explain what exactly ZSys brings to you.

What is ZSys?

We call ZSys a ZFS system helper (hence its name). It can first be seen as a boot environment manager, a concept popular in the OpenZFS community, helping you boot into previous revisions of your system (basically snapshots) in a coherent manner. However, ZSys goes beyond that by providing regular user snapshots, system garbage collection and much more, as we will detail below!

System state saving

We will go into more detail later about what a state is and how it behaves, but as we want to be exhaustive about ZSys features in this post, here is a little introduction.

Each time you install, remove or upgrade packages, a state save is automatically taken by the system, hooked into apt transactions. Note that Ubuntu has some specificities with background updates (via unattended-upgrades), which split an upgrade into multiple apt transactions; we were able to group those into a single system save. The process is split into two parts (saving the state and rebuilding the bootloader menu), as you can see when running an apt command manually:

$ apt install foo
INFO Requesting to save current system state      
Successfully saved as "autozsys_ip60to"
[installing/remove/upgrading package(s)]
INFO Updating GRUB menu
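The hook into apt shown above relies on APT's standard configuration hooks (`DPkg::Pre-Invoke` / `DPkg::Post-Invoke` snippets under /etc/apt/apt.conf.d/). A minimal sketch of what such a hook could look like — the file name and exact commands ZSys actually ships may differ:

```
// Hypothetical sketch of an APT hook snippet; illustrative only,
// not the exact file the zsys package installs.
DPkg::Pre-Invoke { "zsysctl save --auto || true"; };
DPkg::Post-Invoke { "zsysctl boot update-menu || true"; };
```

The `|| true` keeps a ZSys hiccup from blocking package installation, which is the usual defensive pattern for apt hooks.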

You can now find them stored on the system:

$ zsysctl show
Name:           rpool/ROOT/ubuntu_e2wti1
ZSys:           true
Last Used:      current
  - Name:       rpool/ROOT/ubuntu_e2wti1@autozsys_cgym7c
    Created on: 2020-04-28 12:23:12
  - Name:       rpool/ROOT/ubuntu_e2wti1@autozsys_kho1px
    Created on: 2020-04-28 12:16:34
  - Name:       rpool/ROOT/ubuntu_e2wti1@autozsys_w2kfiv
    Created on: 2020-04-28 11:52:46
  - Name:       rpool/ROOT/ubuntu_e2wti1@autozsys_ixfcpk
    Created on: 2020-04-28 11:52:24
  - Name:       rpool/ROOT/ubuntu_e2wti1@autozsys_ip60to
    Created on: 2020-04-28 11:50:06
  - Name:       rpool/ROOT/ubuntu_e2wti1@autozsys_08865s
    Created on: 2020-04-28 09:27:31
  - Name:       rpool/ROOT/ubuntu_e2wti1@autozsys_nqq08r
    Created on: 2020-04-28 09:07:42
  - Name:       rpool/ROOT/ubuntu_e2wti1@autozsys_258qec
    Created on: 2020-04-27 18:11:12
  - Name:       rpool/ROOT/ubuntu_e2wti1@autozsys_yldeob
    Created on: 2020-04-27 18:10:22
  - Name:       rpool/ROOT/ubuntu_e2wti1@autozsys_r66le8
    Created on: 2020-04-24 17:51:42

Our grub bootloader (available by pressing [Escape] or [Shift] on boot) allows you to revert system (and optionally user) states on demand! A history entry will offer to boot into an older state of your system.

This goes beyond OpenZFS rollbacks, as we made revert a non-destructive action: current and intermediate states aren't destroyed, and you can even imagine reverting the revert (or doing system bisection!). We also store key dataset properties exactly as they were when the state was saved. A revert will then reapply the original base filesystem dataset properties, ensuring better fidelity. For instance, changing a filesystem mountpoint won't apply to the ZSys snapshots inheriting from it: we will remount it at the correct place. Similarly, we will reboot with the exact same kernel you booted with in the saved state, even if it wasn't the latest available version.

Note: simply put, if you are unfamiliar with ZFS technology and terminology, you can think of a dataset as a directory that you can control independently of the rest of your system: each dataset can be saved and restored separately, and has its own history, properties, quotas…

History with ZSys

In a nutshell, we try to ensure high fidelity when you revert your system, so that you can confidently and safely boot an older state of your system.

Commit successful boot

You will tell us that pressing [Escape] or [Shift] on boot to bring up grub isn't the most discoverable feature, and you are right.

However, if a boot fails, the next boot will show grub by default and those "History entries" will be available to you!

Similarly, we save and commit every time you successfully boot your system (we will define what a successful boot is in the next blog post, about states). This can trigger a grub menu update if needed (new states to add, for instance). It means that if a boot fails and you revert, simply rebooting will keep the latest successful state, as the default grub entry didn't change!

This is just a small taste of what we do on our path towards a robust and bullet-proof Ubuntu desktop. We will explain all of this in greater detail in the next blog post.

User integration

ZSys deeply integrates users with ZFS. Each non-system user created manually or automatically (via gnome-control-center, adduser or useradd) gets its own separate space (dataset). We handle home directory renaming with usermod, but still need to do more work on userdel, as we shouldn't delete the user's data immediately (what if you revert to a state that still had this user, for instance?).

This allows us to take automatic hourly user state saves… However, we only do so if the user is connected (to a GUI or CLI session)!

You can see those automated user state saves below. The hourly ones are time-based user snapshots, and you can see others linked to system state changes.

$ zsysctl show
  - Name:    didrocks
     - rpool/USERDATA/didrocks_wbdgr3@autozsys_setmsc (2020-04-28 15:36:25)
     - rpool/USERDATA/didrocks_wbdgr3@autozsys_8wdamc (2020-04-28 14:35:25)
     - rpool/USERDATA/didrocks_wbdgr3@autozsys_y5tsor (2020-04-28 13:34:25)
     - rpool/USERDATA/didrocks_wbdgr3@autozsys_yysp1w (2020-04-28 12:33:25)
     - rpool/USERDATA/didrocks_wbdgr3@autozsys_cgym7c (2020-04-28 12:23:12)
     - rpool/USERDATA/didrocks_wbdgr3@autozsys_kho1px (2020-04-28 12:16:34)
     - rpool/USERDATA/didrocks_wbdgr3@autozsys_w2kfiv (2020-04-28 11:52:46)
     - rpool/USERDATA/didrocks_wbdgr3@autozsys_ixfcpk (2020-04-28 11:52:24)
     - rpool/USERDATA/didrocks_wbdgr3@autozsys_ip60to (2020-04-28 11:50:06)
     - rpool/USERDATA/didrocks_wbdgr3@autozsys_891br7 (2020-04-28 11:32:25)
     - rpool/USERDATA/didrocks_wbdgr3@autozsys_yl9fuu (2020-04-28 10:31:25)
     - rpool/USERDATA/didrocks_wbdgr3@autozsys_vg70mw (2020-04-28 09:30:25)
     - rpool/USERDATA/didrocks_wbdgr3@autozsys_08865s (2020-04-28 09:27:31)
     - rpool/USERDATA/didrocks_wbdgr3@autozsys_nqq08r (2020-04-28 09:07:42)
     - rpool/USERDATA/didrocks_wbdgr3@autozsys_xgntka (2020-04-28 08:29:45)
     - rpool/USERDATA/didrocks_wbdgr3@autozsys_0zisgn (2020-04-27 19:41:36)
     - rpool/USERDATA/didrocks_wbdgr3@autozsys_44635k (2020-04-27 18:40:36)

User snapshots are basically free when you don't change any files (you can think of them as storing only the difference between current and previous states). This will allow us, in the near future, to offer easy restoration of previous versions of your user data. "Oh, I removed that file and don't have any backups" will soon be a thing of the past :)
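The "basically free" claim follows from copy-on-write: a snapshot pins old blocks only once they are overwritten. Here is a toy Python model of that accounting — not ZFS code, just an illustration of why an unchanged snapshot costs nothing:

```python
# Toy copy-on-write model: a "filesystem" maps block numbers to
# contents; a snapshot stores nothing at creation time and only
# accumulates the old contents of blocks that later change.

class ToyCowFs:
    def __init__(self):
        self.blocks = {}       # live blocks: number -> content
        self.snapshots = {}    # name -> {block number -> old content}

    def snapshot(self, name):
        # Taking a snapshot stores no data at all initially.
        self.snapshots[name] = {}

    def write(self, block, content):
        # Before overwriting, preserve the old content in every
        # snapshot that doesn't already hold a copy of this block.
        if block in self.blocks:
            for held in self.snapshots.values():
                held.setdefault(block, self.blocks[block])
        self.blocks[block] = content

    def snapshot_size(self, name):
        # Space a snapshot "costs": only the blocks that diverged.
        return len(self.snapshots[name])

fs = ToyCowFs()
fs.write(0, "kernel")
fs.write(1, "home")
fs.snapshot("autozsys_demo")
print(fs.snapshot_size("autozsys_demo"))  # 0: nothing changed yet
fs.write(1, "home-v2")
print(fs.snapshot_size("autozsys_demo"))  # 1: one block diverged
```

Real ZFS does this at the block-pointer level with reference counting, but the space behaviour is the same: hourly snapshots of an idle home directory cost essentially zero.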

Note that this isn't a real backup, as all your data is in the same physical place (your disk!), so it shouldn't replace backups. We have plans to integrate this with some backup tools in the future.

Garbage collection on state saving

As you can infer from the above, we will accumulate a lot of state saves. While we allow the user to manually save and remove states, most of them are taken automatically, so handling them manually on a daily basis would be complicated and counter-productive. Add to this that some states depend on other states being purged first (more on that in… you guessed it, the next blog post, about states!), and you can understand the complexity here.

The GC will also have its dedicated post, but in summary, we prune states as time passes to ensure you keep a number of relevant states to revert to. The general idea is that the more time passes, the less granularity you need; this helps save disk space. You will have very fine-grained states to revert to for the previous day, a little less for the previous weeks, fewer still for previous months… You get it, I think. :)
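The actual ZSys policy is configurable and more involved, but the "less granularity as time passes" idea can be sketched in a few lines of Python. The bucket widths below are purely illustrative assumptions, not ZSys's real defaults:

```python
from datetime import datetime, timedelta

# Hypothetical retention buckets: once a state's age falls inside a
# bucket, keep at most one state per bucket width.
# (Illustrative values only; ZSys's real, configurable policy differs.)
BUCKETS = [
    (timedelta(days=1),   timedelta(hours=1)),  # last day: hourly
    (timedelta(weeks=1),  timedelta(days=1)),   # last week: daily
    (timedelta(days=365), timedelta(weeks=1)),  # last year: weekly
]

def prune(states, now):
    """Return the subset of state timestamps to keep, newest first."""
    keep, last_kept = [], {}
    for ts in sorted(states, reverse=True):
        age = now - ts
        # Find the bucket width matching this state's age.
        width = next((w for limit, w in BUCKETS if age <= limit), None)
        if width is None:
            continue  # older than any bucket: drop
        slot = int(age / width)  # which slot of the bucket it lands in
        if last_kept.get(width) != slot:
            keep.append(ts)      # first state seen in this slot wins
            last_kept[width] = slot
    return keep

now = datetime(2020, 4, 28, 16, 0)
states = [now - timedelta(minutes=30 * i) for i in range(200)]
kept = prune(states, now)
print(len(kept) < len(states))  # True: older states are thinned out
```

Running this over 200 half-hourly saves keeps roughly one per hour for the last day and one per day beyond that, which is the disk-space trade-off described above.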

Multiple machines

Something that isn't really well supported nowadays is multiple OpenZFS installations on the same machine. This isn't fully complete yet (you can't have 2 pools with the same name, like rpool or bpool), but if you manually install a secondary system on the same pools or on different ones (with different names), it is handled by ZSys and our grub menu!

Multiple machine support with ZSys

Each will have its own separate history of states, and ZSys will manage both! User home data can be shared or unshared between the machines.

You can easily see all machines present on your system, and whether or not they are managed by ZSys:

$ zsysctl machine list
ID                        ZSys   Last Used
--                        ----   ---------
rpool/ROOT/ubuntu_e2wti1  true   current
rpool/ROOT/ubuntu_l33t42  true   23/04/20 18:04
rpool/ROOT/ubuntu_manual  false  20/03/20 14:17

Principles of ZSys

We built ZSys on multiple principles.

ZSys architecture


The first principle of ZSys is to be as lightweight as possible: ZSys only runs on demand. It's made of a command-line tool (zsysctl) which connects to a service (zsysd), which is socket-activated. This means that after a while, if zsysd has nothing left to do, it will shut down and not take up any additional memory on your system.
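Socket activation here is plain systemd machinery: systemd owns the socket and starts the daemon on the first connection. A sketch of what such a unit pair could look like — the unit names, socket path and binary location are assumptions for illustration, not necessarily what Ubuntu ships:

```ini
# zsysd.socket -- hypothetical sketch; the actual units may differ.
[Socket]
ListenStream=/run/zsysd.sock

[Install]
WantedBy=sockets.target

# zsysd.service -- started on the first zsysctl connection and
# exiting when idle, so no daemon lingers between invocations.
[Service]
ExecStart=/sbin/zsysd
```

The client never needs to know whether the daemon is running: connecting to the socket is enough, and systemd does the rest.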

Similarly, we don't want ZSys to slow down or interfere with boot. This is why we hooked it into the upstream OpenZFS systemd generator, and it only runs when reverting to a previous state.

Everything is stored on ZFS pools

Secondly, we don't want to maintain a separate database of the ZFS dataset layout and structure. That approach would have been dangerous, as it would be really easy to get out of sync with what is actually on disk and to be unaware of manual user interactions with ZFS. This way, we let system administrators familiar with ZFS handle manual operations while remaining compatible with ZSys. Any additional properties we need are stored as user properties directly on the ZFS datasets. Basically, everything is in the ZFS pool and no additional data is needed (you can migrate your data from one disk to another).

Permission mediation

Thirdly, permission handling is mediated via polkit, a mechanism familiar to administrators and compatible with company-wide policies. If any privilege escalation is needed, the system will ask you for it.

Polkit request for performing system write operation

We will develop this topic with more details in future posts.

Ease of use

This is the core of the command-line experience: familiarity and discoverability. zsysctl has a lot of commands, subcommands and options. We use advanced shell completion, which completes on [Tab][Tab] in both bash and zsh environments.

$ zsysctl state [Tab][Tab]
remove  save

We try to keep the number of required arguments to a minimum. However, if you try to complete on - or --, we will present any available matching options (some global, others local to your subcommand):

$ zsysctl state save -[Tab][Tab]
--auto                -s                    -u                    --user=               --verbose
--no-update-bootmenu  --system              --user                -v

Similarly, a help command is available for every command and subcommand, and completes as expected:

$ zsysctl help s[Tab][Tab]
save     service  show     state
$ zsysctl help state
Machine state management

  zsysctl state COMMAND [flags]
  zsysctl state [command]

Available Commands:
  remove      Remove the current state of the machine. By default it removes only the user state if not linked to any system state.
  save        Saves the current state of the machine. By default it saves only the user state. state_id is generated if not provided.

  -h, --help   help for state

Global Flags:
  -v, --verbose count   issue INFO (-v) and DEBUG (-vv) output

Use "zsysctl state [command] --help" for more information about a command.

We made aliases for popular commands (or what we think are going to be popular :)). For instance zsysctl show is an alias for zsysctl machine show. Less typing for the win! :)

Also, all those commands and subcommands are backed by man pages. For now these are the equivalent of --help, but if you have the desire to enhance any of them, that is a simple but very valuable contribution which we strongly welcome as pull requests on our project!

The GitHub repo README has a dedicated section with the details of all commands.

The best thing is that all of those are autogenerated at build time from the source code (completion, man pages and README!). That means the help and completions will never be out of sync with the released version. It also means that ZSys developers can (via the zsysctl completion command) already experience and get a feel for command interactions without installing the tip of the tree on their system.

Finally, typos are tolerated: we try to match commands that are close enough to what you typed:

$ zsysctl sae
  zsysctl COMMAND [flags]
  zsysctl [command]

Available Commands:
  completion  Generates bash completion scripts
  help        Help about any command
  list        List all the machines and basic information.
  machine     Machine management
  save        Saves the current state of the machine. By default it saves only the user state. state_id is generated if not provided.
  service     Service management
  show        Shows the status of the machine.
  state       Machine state management
  version     Returns version of client and server

  -h, --help            help for zsysctl
  -v, --verbose count   issue INFO (-v) and DEBUG (-vv) output

Use "zsysctl [command] --help" for more information about a command.

ERROR zsysctl requires a valid subcommand. Did you mean this?
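The "Did you mean this?" behaviour boils down to fuzzy matching against the known command names. ZSys itself is written in Go and gets this via cobra's suggestion support, but the idea fits in a few lines of Python using the standard library:

```python
import difflib

# The visible zsysctl top-level commands from the listing above.
COMMANDS = ["completion", "help", "list", "machine", "save",
            "service", "show", "state", "version"]

def suggest(typed, commands=COMMANDS):
    """Return known commands close enough to what the user typed."""
    # cutoff controls how tolerant the matching is to typos.
    return difflib.get_close_matches(typed, commands, n=3, cutoff=0.6)

print(suggest("sae"))  # a typo of "save"
```

Anything below the similarity cutoff is ignored, so gibberish produces no suggestion rather than a misleading one.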

For those attentive to details: you may have noticed that some commands and subcommands in our README aren't available through completion, or that we have man pages for commands that aren't shown there. There are indeed system-oriented hidden commands, and this is why they are not proposed by default (like the boot command: zsysctl bo[Tab][Tab] won't display anything). However, we are fond of completion for testing, and if you type the command in full, you are back in enjoyable completion land:

$ zsysctl boot [Tab][Tab]
commit       prepare      update-menu

If you want to implement something similar in your own (Go) CLI program: we proposed our changes to the upstream cobra repository, so you can get advanced shell completion as well!


We didn't want to force a strict ZSys layout upon users. First, we will have different default ZFS dataset layouts between server and desktop (this will be detailed in a future blog post). We are also very aware that there are excited and passionate ZFS system administrators and hobbyists who want to have full control over their system. This is why:

* Any system can be untagged to prevent ZSys from controlling it. The system will then boot and behave like any manual ZFS-on-root installation. Of course, any additional features we provide through ZSys will then be unavailable.
* The dataset layout can be a mix between what we provide by default and what system administrators are used to or are aiming for. For instance, bpool isn't mandatory, any child dataset can be deleted, some persistent datasets can be created… We will explain more about dataset layouts and the types of datasets ZSys handles in more advanced parts of this blog post series.

Strongly tested

We put a strong emphasis on testing. ZSys itself is currently covered by more than 680 tests (from unit to integration tests). They can exercise a real ZFS system, but we have also built an in-memory mock (which can run tests in parallel, for instance). We can thus validate against the real system with the same set of tests!

We also have more than 400 integration tests for grub menu building, covering a wide variety of dataset layout configurations.

Detailed bug reporting

I hope you can appreciate that we put a lot of thought and care into ZSys and how it integrates with the ZFS system.

Of course, bugs can and will occur. This is why our default bug report template asks you to run ubuntu-bug ZSys: we collect a bunch of non-private information about your system, which helps us understand what configuration you are running and what actually happened in the various parts of the system (OpenZFS, grub or ZSys itself)!

Again, most of these features work completely under the hood, transparently to the user! I hope this gives you a sneak preview of what ZSys is capable of. I teased a lot about further developments and explanations I couldn't include here (this is already long enough). This is why we will dive right into state management next (which will include revert, bisection and more)! See you there :)

Meanwhile, join the discussion via the dedicated Ubuntu discourse thread.

26 May 2020 7:22am GMT

25 May 2020

feedPlanet Ubuntu

The Fridge: Ubuntu Weekly Newsletter Issue 632

Welcome to the Ubuntu Weekly Newsletter, Issue 632 for the week of May 17 - 23, 2020. The full version of this issue is available here.

In this issue we cover:

The Ubuntu Weekly Newsletter is brought to you by:

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License

25 May 2020 10:37pm GMT

David Tomaschik: Book Review: Operator Handbook

When Netmux first released the Operator Handbook, I had to check it out. I had some initial impressions, but wanted to take some time to refine my thoughts on it before putting together a full review of the book. The book review will be a bit short, but that's because this is a rather straightforward book.

Operator Handbook

I think the first thing to know is that this book is strictly a reference. There's nothing to read and learn from in a cohesive way. It would be like reading a dictionary or a thesaurus - while you might learn things reading it, it won't be in any meaningful way. There are lots of things you can learn about a particular, very narrow topic, but it is mostly organized for use "in the moment", not as a "learn in advance" kind of thing.

The second thing to know is that unless you're regularly in environments that don't allow you to bring electronics in (e.g., heavily restricted customer sites), you really want this book in electronic format for quick searching and copy/paste. In fact, the tagline on the cover is "SEARCH.COPY.PASTE.L33T:)". This is obviously a lot easier from the digital version. (Though I have to admit, I love the cover of the physical book - it's got a robust feel and a cool "find it quick" yellow color.)

I rather suspect this book is inspired by books like the Red Team Field Manual, the Blue Team Field Manual, and Netmux's own Hash Crack: Password Cracking Manual. When you crack it open, you'll immediately see the similarities - very task focused, intended to get something done quickly, rather than a focus on the underlying theory or background.

I've actually referred to the book a couple of times while doing operations. Some of the things in it would be easily obtained elsewhere (e.g., a quick Google search for "nmap cheatsheet" gets you much the same information), but many other things would require distillation of the information into a more consumable format, and Netmux has already done that.

Many of the items in the book are also presented with a security mindset - e.g., interacting with cloud platforms like AWS or GCP. Rather than trying to provide the information necessary to operate those platforms, the book focuses on the aspects relevant to security practitioners. The book also contains links to additional references, which is yet another reason you want to have this in a digital format. Some kind of URL-shortener links would have been a nice touch for the print version.

One thing that I really want to applaud in this book is that it includes a reference for mental health. Whether or not the information security industry has a particular predisposition for mental health issues, I absolutely love the normalization of discussing them.

While there is content for both Red and Blue teamers, like so many resources, it seems to tend to the Red. Maybe it's only my perception as a Red Teamer, maybe some of the contents I perceive as "Red" are also useful to Blue teamers. I'd love to hear from someone on the Blue side as to how they find the book contents for their role - any takers?

Overall, I think this is a useful book. A lot of effort clearly went into curating the content and covering the wide variety of topics included in its 123 references. There's probably nothing ground-breaking in it, but it's just presented so well that it's totally worth having.

25 May 2020 7:00am GMT

23 May 2020

feedPlanet Ubuntu

Raphaël Hertzog: Freexian’s report about Debian Long Term Support, April 2020

A Debian LTS logo Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In April, 284.5 work hours have been dispatched among 14 paid contributors. Their reports are available:

Evolution of the situation

In April we dispatched more hours than ever, and something else was new too: we had our first (virtual) contributors meeting on IRC! Logs and minutes are available, and we plan to continue doing IRC meetings every other month.
Sadly, one contributor, Hugo Lefeuvre, decided to go inactive in April.
Finally, we would like to remind you that the end of Jessie LTS is coming in less than two months!
In case you missed it (or missed acting on it), please read this post about keeping Debian 8 Jessie alive for longer than 5 years. If you expect to have Debian 8 servers/devices running after June 30th 2020, and would like to have security updates for them, please get in touch with Freexian.

The security tracker currently lists 4 packages with a known CVE and the dla-needed.txt file has 25 packages needing an update.

Thanks to our sponsors

New sponsors are in bold.


23 May 2020 4:10pm GMT

Full Circle Magazine: Full Circle Weekly News #172

Debian Leader Says "One Year Will Do"
Debian 8 Adds Longer Support
Debian 11 Package Freeze Scheduled
Gnome 3.36 "Gresik"
The Linux Foundation Open Sources Project OWL
FreeNAS and TrueNAS are Merging
There's a Vulnerability in Timeshift
Linux Kernel 5.6 rc6 Out

Zorin OS 15.2 Out

Wine 5.4 Out

Red Hat's Ceph Storage 4 Out

AWS' Bottle Rocket Out

Tails 4.4 Out

Basilisk Browser Out

LibreELEC 9.2.1 Out

KDE Plasma 5.18.3 Out

SDL (or Simple DirectMedia Layer) 2 Out

Splice Machine 3.0 Out

4M Linux 32.0 Out

Ubuntu "Complete" sound: Canonical

Theme Music: From The Dust - Stardust

23 May 2020 10:19am GMT

22 May 2020

feedPlanet Ubuntu

Daniel Holbach: Gitops Days - Day 2 playlist

GitOps Days are over now - what a blast I had. Even though it was long hours, it was so much fun supporting the event: such a friendly and engaged audience (loads of great questions and discussion on Slack), excellent - very experienced and fun - speakers, and a super well-organised team! Thanks to everyone who made these two days as special as they were! 💓

It has happened before: people were picking a DJ name for me. The list of names on Slack and Twitter was long and gave me a laugh. Looks like as my GitOps DJ name, DJ Desired State was winning. You are all hilarious. 😂

As a follow-up to my blog post yesterday, here is the playlist from Day 2:

Mas Que Nada (UFe remix)
Morphy - Ragga Spindle
Acie - Sexymama
Jorge Ben - Take It Easy My Brother Charles
Marina Gallardo - Golden Ears (M.RUX Edit)
Cypress Hill - Insane In The Brain - Kasabian Cover (Matija & Richard Elcox Edit)
Siriusmo - EGO

Lokke - Song Nº 1

Quantic - Atlantic Oscillations
Khen - Manginot
The Chemical Brothers - Go (Claude VonStroke Remix)

RSL - Wesley Music
Adome Nyueto - Yta Jourias (Sopp's Party Edit)

The Silver Thunders - Fresales eternos
Alexander - Truth

Black Milk - Detroit's New Dance Show
Zeds Dead -  Rumble In The Jungle
Kalemba - Wegue Wegue (Krafty Kuts Remix)

Fdel - Let The Beat Kick
The Living Graham Bond - Werk
Pizeta - Nina Papa (Andy Kohlmann Remix)
Format:B - Gospel (Super Flu's Antichrist Remix)
Taisun - Senorita (Remix)
Pleasurekraft - Carny

If you sign up at https://www.gitopsdays.com/, you can get links to the recordings and we'll send you a GitOps conversation kit as well.

22 May 2020 7:54am GMT

David Tomaschik: Everyone in InfoSec Should Know How to Program

Okay, I'm not going to lie, the title was a bit of clickbait. I don't believe that everyone in InfoSec really needs to know how to program, just almost everyone. Now, before my fellow practitioners jump on me, saying they can do their job just fine without programming, I'd appreciate you hearing me out.

So, how'd I get on this? Well, a thread on a private Slack discussing whether Red Team operators should know how to program, followed by people on Reddit asking if they should know how to program. I thought I'd share my views in a concrete (and longer) format here.

Computers are Useless without Programs

I realize that it sounds axiomatic, but computers don't do anything without programs. Programs are what give a computer the ability to, well, be useful. So I think we can all agree that information security, as an industry, is built entirely around software.

I submit that knowing how to program makes most roles more effective merely by having a better understanding of how software works. Understanding I/O, network connectivity, etc., at the application layer will help professionals do a better job of understanding how software affects their role.

That being said, this is probably not reason enough to learn to program.

Learning to Program Opens Doors

I suppose this point can be summarized as "more skills make you more employable", which is (again) probably axiomatic, but it's worth considering. There are roles and organizations that will expect you to be able to program as part of the core expectations.

For example, if you currently work in the SOC, and you want to work on building/refining the tools used in the SOC, you'll need to program.

Alternatively, if you want to move laterally to certain roles, those roles will require programming - application security, tool development, etc.

You Will Be More Efficient

There are so many times where I could have done something manually, but ended up writing a program of some sort to do it instead. Maybe you have a range of IPs and need to check which of them are running a particular webserver, or you want to combine several CSVs based on one or two fields on them. Maybe you just want to automate some daily task.
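As a concrete (and entirely hypothetical) illustration of the CSV case, here is roughly what such a throwaway script looks like in Python, combining two exports on a shared field:

```python
import csv
import io

# Two hypothetical exports sharing a "host" column.
inventory = """host,owner
10.0.0.1,alice
10.0.0.2,bob
"""
findings = """host,finding
10.0.0.2,open-telnet
"""

def merge_on(field, *csv_texts):
    """Combine rows from several CSVs, keyed on a shared field."""
    merged = {}
    for text in csv_texts:
        for row in csv.DictReader(io.StringIO(text)):
            # Rows with the same key accumulate columns from each file.
            merged.setdefault(row[field], {}).update(row)
    return list(merged.values())

for row in merge_on("host", inventory, findings):
    print(row)
```

Ten minutes of scripting like this routinely replaces an afternoon of spreadsheet wrangling, which is the efficiency argument in a nutshell.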

As a Red Teamer, I often write scripts to accomplish a variety of tasks:

On the blue side, I know people who write programs to:

How much do you need to know?

Well, technically none, depending on your role. But if you've read this far, I hope you're convinced of the benefits. I'm not suggesting everyone needs to be a full-on software engineer or be coding every day, but knowing something about programming is useful.

I suggest learning a language like Python or Ruby, since they have REPLs ("read-eval-print loops"). These provide an interactive prompt where you can run statements and see the results immediately. Python seems to be more commonly used for InfoSec tooling, but both are good options to get things done.

I would focus on file and network operations, and not so much on complicated algorithms or data structures. While those can be useful, standard libraries tend to have common algorithms (searching, sorting, etc.) well-covered. Having a sensible data structure makes code more readable, but there's not often a need for "low level" structures in a high level language.

Have I Convinced You?

Hopefully I've convinced you. If you want to learn programming with a security-specific slant, I can highly recommend some books from No Starch Press:

22 May 2020 7:00am GMT

21 May 2020

feedPlanet Ubuntu

Podcast Ubuntu Portugal: Ep 91 – O Mundo é dos Desktops

An episode in which, as usual, we talked about our weekly adventures, but we also covered news about the Pine64 PineTab with Ubuntu Touch, MS Office on Linux, Ubuntu 20.04 certification for the Raspberry Pi, the expanding Ubuntu-verse, and the radical change in Ubuntu Studio.
Also learn how even a 10-year-old child can make a difference in Ubuntu, and how you too can become community heroes.
You know the drill: listen, subscribe and share!

In this episode we discussed smart plugs, power strips, cable management and power-consumption monitoring, networks and networking equipment, and webcams; we also revisited the ImageMagick-in-Nextcloud question and Ubuntu Server images.

And we still found time to talk about UBports' OTA-12 and OTA-13, the PinePhone and the Volla Phone.

You know the drill: listen, subscribe and share!


This episode was produced by Diogo Constantino and Tiago Carrondo, and edited by Alexandre Carrapiço, the Senhor Podcast.

You can support the podcast using the Humble Bundle affiliate links: when you use those links to make a purchase, part of what you pay goes to the Podcast Ubuntu Portugal.
You can get all of this for 15 dollars, or different parts of it depending on whether you pay 1, or 8.
We think it's worth well more than 15 dollars, so if you can, pay a little extra, since you have the option to pay as much as you want.

If you're interested in other bundles not listed in the show notes, use the link https://www.humblebundle.com/?partner=PUP and you'll be supporting us as well.

This week we recommend the bundle: Software Development by O'Reilly, via the affiliate link https://www.humblebundle.com/books/software-development-oreilly-books?partner=PUP

Attribution and licenses

The theme music is "Won't see it comin' (Feat Aequality & N'sorte d'autruche)" by Alpha Hydrae, licensed under the [CC0 1.0 Universal License](https://creativecommons.org/publicdomain/zero/1.0/).

This episode and the image used in it are licensed under the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) license, the full text of which can be read here. We are open to licensing for other types of use; contact us for validation and authorization.

21 May 2020 10:04pm GMT

Daniel Holbach: Gitops Days - Day 1 playlist

What a brilliant day 1 of GitOps Days it was. Weeks of hard work from a great team went into this, as was quite apparent. Minor glitches, some last-minute shuffling of speakers, but apart from that very, very seamless. (You can still sign up and get links to the recordings.)

I had a bit of an unusual role: I was DJing at an online event.

Some questions online were about the setup and why I wasn't changing records while playing (well spotted!). So here's what I used during the event:

I've used this setup for a long time and it's rock-solid. Having played real vinyl for years, I just never updated my muscle memory to use a CDJ or any other fancy new controller. At some stage I just had to move on from buying records every weekend or two to digital - there's just so much more stuff out there, and you don't suffer as much from tracks (and records gathering dust) that turn out to be quite short-lived.

On the transmission end, I was very lucky that Lucijan (one of my besties) gave me his Windows laptop (in use for making Windows builds of Sitala - their free drum plugin and standalone app). I needed it because we used Zoom (as in zoom.us) for transmission and only the Windows or Mac versions of Zoom offer the "Original Sound" option where your sound is sent "as-is", which you very much want when you're playing live music. I also used his Zoom H4n recorder as an external soundcard.

To get a better view of what I'm doing, I mounted an external webcam on a USB extension on a broomstick (yeah I know, very professional!). The webcam was the piece of equipment that most urgently would need replacement. It didn't really cope well when it got darker here in Berlin (it was 23:59 here when day 1 ended) and I had to add additional lighting. Also, the disco ball didn't come out quite as well as I wanted.

All in all, I had a great time supporting the event and look forward to day 2. The vibe was incredible - the audience was super friendly and had great conversations. I also learned that one of the speakers, Vuk Gojnic (Deutsche Telekom), used to be a DJ in the past, and we're loosely planning to play at an event near you when this pandemic is over. We also had somebody who used to VJ in the late nineties. I love interactions like these - such diverse interests apart from GitOps.

Today I'll also make sure to move and sit a bit more during the times when I'm not playing. I was standing most of yesterday.

One thing that was asked for as well was a playlist. Looking at this I'm quite surprised how much I managed to play on day 1.

The Rebirth - Evil Vibrations
Daniël Leseman - Ease The Pain (Extended Mix)
Yuksek & Bertrand Burgalat - Icare (Yuksek Remix)
Mooqee & Beatvandals - Player (2019 Disco Rework)

Afterclapp - BRZL
Claudia - Deixa Eu Dizer (iZem ReShape)
Dengue Dengue Dengue - Simiolo (Cumbia Cosmonauts Remix)
Twerking Class Heroes - Vanakkam
Romare - Down the line (it takes a number)

Jorge Ben - Ponta de Lança Africano
Edu Lobo - Viola Fora De Moda (Cau Lopez Remix)

Quantic & Nickodemus feat Tempo & The Candela Allstars - Mi Swing Es Tropical

Cocotaxi - Dejala Corre
Daniel Haaksman - Puerto Rico (Neki Stranac Moombahton Mix)
Cupcake Project - This Ain't No Boogie (Prosper & Adam Polo Remix)
Frajle - Pare Vole Me Extended Club (Gramophonedzie remix)

Fela Kuti and The Africa 70 - Shakara (Diamond Setter Edit)

Romare - The Blues (It Began in Africa)
Owiny Sigoma Band - Doyoi Nyajo Nam (Quantic Dub)
Zed Bias - Trouble in the Streets (feat. Mark Pritchard)
Gramophonedzie - Why Dont You

David Walters - Mama

Please join us for day 2 of GitOps Days - the schedule just looks great. I look forward to seeing you there! 💞

21 May 2020 7:21am GMT