Twenty years ago… I decided to start a blog to share my thoughts! That's why I called it "/dev/random". How was the Internet twenty years ago? Well, there were good things and bad ones…
Over the years, the blog content evolved, and I wrote a lot of technical stuff related to my job, experiences, tools, etc. Then I had the opportunity to attend a lot of security conferences and started to write wrap-ups. With COVID, fewer conferences and no more reviews. For the last few months, I've mainly been writing diaries for the Internet Storm Center; therefore, I publish less of my own stuff here and mostly relay the content published on the ISC website. If you have read my stuff for a long time (or even if you are a newcomer), thank you very much!
A few stats about the site:
2056 articles
20593 pictures
5538 unique visitors to the RSS feed in the last 30 days
85,000 hits/day on average (bots & attacks included?)
I know that these numbers might seem low for many of you, but I'm proud of them!
Today I wanted to program an ESP32 development board, the ESP-Pico-Kit v4, but when I connected it to my computer's USB port, the serial connection didn't appear in Linux. Suspecting a hardware issue, I tried another ESP32 board, the ESP32-DevKitC v4, but this didn't appear either, so then I tried another one, a NodeMCU ESP8266 board, which had the same problem. Time to investigate...
The dmesg output looked suspicious:
[14965.786079] usb 1-1: new full-speed USB device number 5 using xhci_hcd
[14965.939902] usb 1-1: New USB device found, idVendor=10c4, idProduct=ea60, bcdDevice= 1.00
[14965.939915] usb 1-1: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[14965.939920] usb 1-1: Product: CP2102 USB to UART Bridge Controller
[14965.939925] usb 1-1: Manufacturer: Silicon Labs
[14965.939929] usb 1-1: SerialNumber: 0001
[14966.023629] usbcore: registered new interface driver usbserial_generic
[14966.023646] usbserial: USB Serial support registered for generic
[14966.026835] usbcore: registered new interface driver cp210x
[14966.026849] usbserial: USB Serial support registered for cp210x
[14966.026881] cp210x 1-1:1.0: cp210x converter detected
[14966.031460] usb 1-1: cp210x converter now attached to ttyUSB0
[14966.090714] input: PC Speaker as /devices/platform/pcspkr/input/input18
[14966.613388] input: BRLTTY 6.4 Linux Screen Driver Keyboard as /devices/virtual/input/input19
[14966.752131] usb 1-1: usbfs: interface 0 claimed by cp210x while 'brltty' sets config #1
[14966.753382] cp210x ttyUSB0: cp210x converter now disconnected from ttyUSB0
[14966.754671] cp210x 1-1:1.0: device disconnected
So the ESP32 board, with its Silicon Labs CP2102 USB-to-UART controller chip, was recognized and attached to the /dev/ttyUSB0 device, as it normally should be. But then the brltty command suddenly intervened and disconnected the serial device.
I looked up what brltty does, and apparently it is a system daemon that provides access to the console for a blind person using a braille display. When looking into the contents of the package on my Ubuntu 22.04 system (with dpkg -L brltty), I saw a udev rules file, so I grepped for the product ID of my USB device in the file:
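Something along these lines (the exact rules-file path and the matched rule may differ slightly between releases):
grep ea60 /usr/lib/udev/rules.d/85-brltty.rules
ENV{PRODUCT}=="10c4/ea60/*", ENV{BRLTTY_BRAILLE_DRIVER}="sk", GOTO="brltty_usb_run"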
So apparently there's a Braille display with the same CP210x USB to UART controller as a lot of microcontroller development boards have. And because this udev rule claims the interface for the brltty daemon, UART communication with all these development boards isn't possible anymore.
As I'm not using these Braille displays, the fix for me was easy: just find the systemd unit that loads these rules, mask and stop it.
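On Ubuntu 22.04 that is roughly the following (the unit names are those shipped by the brltty package and may vary on other releases):
sudo systemctl stop brltty-udev.service
sudo systemctl mask brltty-udev.service
sudo systemctl stop brltty.service
sudo systemctl disable brltty.service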
The latest MySQL release was published on January 17th, 2023. MySQL 8.0.32 contains some new features and bug fixes. As usual, it also contains contributions from our great MySQL Community.
I would like to thank all contributors on behalf of the entire Oracle MySQL team!
MySQL 8.0.32 contains patches from Facebook/Meta, Alexander Reinert, Luke Weber, Vilnis Termanis, Naoki Someya, Maxim Masiutin, Casa Zhang from Tencent, Jared Lundell, Zhe Huang, Rahul Malik from Percona, Andrey Turbanov, Dimitry Kudryavtsev, Marcelo Altmann from Percona, Sander van de Graaf, Kamil Holubicki from Percona, Laurynas Biveinis, Seongman Yang, Yamasaki Tadashi, Octavio Valle, Zhao Rong, Henning Pöttker, Gabrielle Gervasi and Nico Pay.
Here is the list of the above contributions and related bugs. We can see that for this release our connectors received several contributions, which is always a good sign of their increasing popularity.
We can also notice the return of a major contributor: Laurynas Biveinis!
Connectors
Connector/NET
#74392 - Support to use (Memory-)Stream for bulk loading data - Alexander Reinert
Fedora now has frame pointers. I don't want to dwell on the how of this, it was a somewhat controversial decision and you can read all about it here. But I do want to say a bit about the why, and how it makes performance analysis so much easier.
FlameGraphs rely on the Linux tool perf being able to collect stack traces. The stack traces start in the kernel and go up through userspace often for dozens or even hundreds of frames. They must be collected quickly (my 1 minute long trace has nearly half a million samples) and accurately.
Perf (or actually I think it's some component of the kernel) has various methods to unwind the stack. It can use frame pointers, kernel ORC information or DWARF debug information. The thing is that DWARF unwinding (the only userspace option that doesn't use frame pointers) is really unreliable. In fact, it has such serious problems that it's not really usable at all.
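For reference, the unwinder is chosen at record time; a hypothetical one-minute, system-wide capture with each method would look something like this:
perf record -F 99 -a --call-graph fp -- sleep 60
perf record -F 99 -a --call-graph dwarf -- sleep 60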
For example, here is a broken stack trace from Fedora 37 (with full debuginfo installed):
Look at those beautiful towering peaks! What seems to be happening (I don't know why) is that stack traces start in the wrong place when you don't have frame pointers (note that FlameGraphs show stack traces upside down, with the starting point in the kernel shown at the top). Also if you look closely you'll notice missed frames in the first one, like the "direct" call to __libc_action which actually goes through an intermediate frame.
Before Fedora 38 the only way to get good stack traces was to recompile your software and all of its dependencies with frame pointers, a massive pain in the neck and a major barrier to entry when investigating performance problems.
With Fedora 38, it's simply a matter of using the regular libraries, installing debuginfo if you want (it does still add detail), and you can start using perf straight away by following Brendan Gregg's tutorials.
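As a quick illustration (assuming the stackcollapse-perf.pl and flamegraph.pl scripts from Brendan Gregg's FlameGraph repository are on your PATH), a record-and-render cycle looks something like:
perf record -F 99 -a -g -- sleep 60
perf script | stackcollapse-perf.pl | flamegraph.pl > flame.svg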
MLOps (short for machine learning operations) is slowly evolving into an independent approach to the machine learning lifecycle that includes all steps, from data gathering to governance and monitoring. It will become a standard as artificial intelligence moves towards being part of everyday business rather than a purely innovative activity.
Get an intro to MLOps on the 15th of February with Canonical's experts.
Register now
Over time, there have been different approaches used in MLOps. The most popular ones are model-driven and data-driven approaches. The split between them is defined by the main focus of the AI system: data or code. Which one should you choose? The decision challenges data scientists to choose which component will play a more important role in the development of a robust model. In this blog, we will evaluate both.
Model-driven development focuses, as the name suggests, on machine learning model performance. It uses different methods of experimentation in order to improve the performance of the model, without altering the data. The main goal of this approach is to work on the code and optimise it as much as possible. It includes code, model architecture and training processes as well.
If you look deeper into this development method, the model-driven approach is all about high-quality ML models. What it means, in reality, is that developers focus on using the best set of ML algorithms and AI platforms. The approach is also the basis for great advancements in the AI space, such as the development of specialised frameworks like TensorFlow or PyTorch.
Model-centric development has been around since the early days of the discipline, so it benefits from widespread adoption across a variety of AI applications. The reason for this can be traced back to the fact that AI was initially a research-focused area. Historically, this approach was designed for challenging problems and huge datasets, which ML specialists were meant to solve by optimising AI models. It has also been driven by the wide adoption of open source, which allows free access to various GitHub repositories. Model-driven development encourages developers to experiment with the latest bits of technology and try to get the best results by fine-tuning the model. From an organisational perspective, it is suited for enterprises which have enough data to train machine-learning models.
When it comes to pitfalls, the model-centric approach requires a lot of manual work at the various stages of the ML lifecycle. For example, data scientists have to spend a lot of time on data labelling, data validation or training the model. The approach may result in slower project delivery, higher costs and little return on investment. This is the main reason why practitioners considered trying to tackle this problem from a different perspective with data-centric development.
Data-centric development
As it is often mentioned, data is the heart of any AI initiative. The data-centric approach takes this statement seriously, by systematically interacting with the datasets in order to obtain better results and increase the accuracy of machine learning applications.
When compared to the model-centric approach, in this case, the ML model is fixed, and all improvements are related to the data. These enhancements range from better data labelling to using different data samples for training or increasing the size of the data set. This approach improves data handling as well, by creating a common understanding of the datasets.
The data-centric approach has a few essential guidelines, which cover:
Data labelling
Data augmentation
Error analysis
Data versioning
Data labelling for data-centric development
Data labelling assigns labels to data. The process provides information about the datasets that are then used by algorithms to learn. It emphasises both content and structure information, so it often includes various data types, measurement units, or time periods represented in the dataset. Having correct and consistent labels can define the success of an AI project.
Data-centric development often highlights the importance of correct labelling. There are various examples of how to approach it; the key goal is avoiding inconsistencies and ambiguities. Below you can find an image that Andrew Ng offers as an example of data labels in practice. In this case, the labels illustrate the two main pitfalls: inconsistency and ambiguity.
Data augmentation for data-centric development
Data augmentation is a process that consists of the generation of new data based on various means, such as interpolation or explorations. It is not always needed, but in some instances, there are models that require a larger amount of data at various stages of the ML lifecycle: training, validation, and data synthesis.
Whenever you perform this activity, checking data quality and ensuring the elimination of noise is also part of the guidelines.
Error analysis for data-centric development
Error analysis is a process performed once a model is trained. Its main goal is to identify a subset that can be used for improving the dataset. It is a task that requires diligence, as it needs to be performed repeatedly, in order to get gradual improvements in both data quality and model performance.
Data versioning for data-centric development
Data versioning tracks changes that happen within the datasets, in order to identify performance changes within the model. It enables collaboration, eases the data management process and speeds up the delivery of machine learning pipelines from experimentation to production.
When it comes to pitfalls, the data-centric method struggles mostly with data. On one hand, it can be hard to manage and control. On the other hand, it can be biased if it does not represent the actual population, leading to models that underperform in real life. Lastly, because of the data requirements, it can easily be expensive or suitable only for projects which have collected data for a longer period of time.
Model-centric and data-centric development with MLOps
In reality, both of these approaches are tightly linked to MLOps. Regardless of the option that data scientists choose, they need to follow MLOps guidelines and integrate their method within the tooling that they choose. Developers can use the same tool but have different approaches across different projects. The main difference could occur at the level of the ML lifecycle where changes are happening. It's important to note that the approach will affect how the model is optimised for the specific initiative, so choosing it with care is important to position your project for success.
Get an intro to MLOps on the 15th of February with Canonical's experts.
Register now
Charmed Kubeflow is an end-to-end MLOps tool designed for scaling machine learning models to production. Because of its features and integrations, it has the ability to support both model-centric and data-centric development. It is an open-source platform which encourages contributions and represents the foundation of the growing MLOps ecosystem that Canonical is moving towards, with integrations at various levels: hardware, tooling and AI frameworks.
The Fedora Project is participating in the upcoming round of Outreachy. We need more project ideas and mentors! The last day to propose a project or to apply as a general mentor is February 24, 2023, at 4pm UTC.
Outreachy provides a unique opportunity for underrepresented groups to gain valuable experience in open source and access to a supportive community of mentors and peers. By participating in this program, the Fedora community can help create a more diverse and inclusive tech community.
If you have a project idea for the upcoming round of Outreachy, please open a ticket in the mentored projects repository. You can also volunteer to be a mentor for a project that's not yours. As a supporting mentor, you will guide interns through the completion of the project.
A good project proposal makes all the difference. It saves time for both the mentors and the applicants.
What makes a good project proposal
Well-defined. The project has a well-defined scope.
Self-contained. Has few dependencies on uncompleted work. Does not require interacting with multiple open source communities who are not on board with interacting with an Outreachy intern.
Incremental. The project should produce several deliverables during the internship period, rather than having only one large deliverable. This allows the project goals to be modified if the intern completes tasks faster or slower than expected. If the project does have one large deliverable, it's recommended that the intern complete a design document. This allows the intern to hand off unfinished work to the next intern, or the community.
The Mentored Projects Coordinators will review your ideas and help you prep your project proposal to be submitted to Outreachy.
How to participate
Project Mentor
Signing up as a mentor is a commitment. Before signing up, please consider the following:
Do you have enough time to work on this with the intern during the entire timeline?
Committing to 5-10 hours a week during the six-week application period to review applicant contributions
Committing up to 5 hours a week during the three-month internship period to work with the Outreachy intern
It is harder to find success when you are completely certain of how an idea needs to be implemented: finding an intern with the skills and interest to implement one specific solution is a lot harder. Instead, the goal should be to focus on finding an intern with enough skills to respond to the needs of a use case.
Who can help you? Try to find a second mentor for the project. Not only can they bring a new perspective, but they can also serve as a backup in case you decide to go on vacation.
We are also looking for general mentors to help facilitate communication of feedback and evaluation with the interns working on the selected projects.
Submit your proposals
Please submit your project ideas and mentorship availability as soon as possible. The last date for project idea submission is February 24, 2023.
Mentoring can be a fulfilling pursuit. It is a great opportunity to contribute to the community and shape the future of Fedora by mentoring a talented intern who will work on your project. Don't miss out on this exciting opportunity to make a difference in the Fedora community and the tech industry as a whole. Together, we can make the open-source community even more diverse and inclusive.
The 2023 LPC PC is pleased to announce that we've begun exclusive negotiations with the Omni Hotel in Richmond, VA to host Plumbers 2023 from 13-15 November. Note: These dates are not yet final (nor is the location; we have had one failure at this stage of negotiations from all the Plumbers venues we've chosen). We will let you know when this preliminary location gets finalized (please don't book irrevocable travel until then).
The November dates were the only ones that currently work for the venue, but Richmond is on the same latitude as Seville in Spain, so it should still be nice and warm.
Ray, the popular open-source machine learning (ML) framework, has released its 2.2 version with improved performance and observability capabilities, as well as features that can help to enable reproducibility.
With the new release, Red Hat is integrating new capabilities to help improve security and compliance for OpenShift, as well as new deployment options on ARM-based architectures. The OpenShift 4.12 release comes as Red Hat continues to expand its footprint, announcing partnerships with Oracle and SAP this week.
In one week from now, Twitter will block free API access. This prevents anyone who has written interesting bot accounts, integrations, or tooling from accessing Twitter without paying for it. A whole number of fascinating accounts will cease functioning, people will no longer be able to use tools that interact with Twitter, and anyone using a free service to do things like find Twitter mutuals who have moved to Mastodon or to cross-post between Twitter and other services will be blocked.
There's a cynical interpretation to this, which is that despite firing 75% of the workforce Twitter is still not profitable and Elon is desperate to not have Twitter go bust and also not to have to tank even more of his Tesla stock to achieve that. But let's go with the less cynical interpretation, which is that API access to Twitter is something that enables bot accounts that make things worse for everyone. Except, well, why would a hostile bot account do that?
To interact with an API you generally need to present some sort of authentication token to the API to prove that you're allowed to access it. It's easy enough to restrict issuance of those tokens to people who pay for the service. But, uh, how do the apps work? They need to be able to communicate with the service to tell it to post tweets, retrieve them, and so on. And the simple answer to that is that they use some hardcoded authentication tokens. And while registering for an API token yourself identifies that you're not using an official client, using the tokens embedded in the clients makes it look like you are. If you want to make it look like you're a human, you're already using tokens ripped out of the official clients.
The Twitter client API keys are widely known. Anyone who's pretending to be a human is using those already and will be unaffected by the shutdown of the free API tier. Services like movetodon.org do get blocked. This isn't an anti-abuse choice. It's one that makes it harder to move to other services. It's one that blocks a bunch of the integrations and accounts that bring value to the platform. It's one that hurts people who follow the rules, without hurting the ones who don't. This isn't an anti-abuse choice, it's about trying to consolidate control of the platform.
When we reviewed termusic back in April 2022, we lamented that this music player was a strong candidate for anyone looking for a terminal-based music player, with one exception: the software lacked gapless playback.
While installing Python and Virtualenv on MacOS Ventura and Monterey can be done several ways, this tutorial will guide you through the process of configuring a stock Mac system into a solid Python development environment.
First steps
This guide assumes that you have already installed Homebrew. For details, please follow the steps in the MacOS Configuration Guide.
Python
We are going to install the latest version of Python via asdf and its Python plugin. Why bother, you ask, when Apple includes Python along with MacOS? Here are some reasons:
When using the bundled Python, MacOS updates can remove your Python packages, forcing you to re-install them.
As new versions of Python are released, the Python bundled with MacOS will become out-of-date. Building Python via asdf means you always have access to the most recent Python version.
Apple has made significant changes to its bundled Python, potentially resulting in hidden bugs.
Building Python via asdf includes the latest versions of Pip and Setuptools (Python package management tools).
Use the following command to install asdf and Python build dependencies via Homebrew:
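Something like the following should work (the exact list of build dependencies may vary with your setup):
brew install asdf openssl readline sqlite xz zlib tcl-tk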
Install the asdf Python plugin and the latest version of Python:
asdf plugin add python
asdf install python latest
Note the Python version number that was just installed. For the purpose of this guide, we will assume version 3.11.1, so replace that number below with the version number you actually just installed.
Set the default global Python version:
asdf global python 3.11.1
Confirm the Python version matches the latest version we just installed:
python --version
Pip
Let's say you want to install a Python package, such as the Virtualenv environment isolation tool. While many Python-related articles for MacOS tell the reader to install Virtualenv via sudo pip install virtualenv, the downsides of this method include:
installs with root permissions
installs into the system /Library
yields a less reliable environment when using Python built with asdf
As you might have guessed by now, we are going to use the asdf Python plugin to install the Python packages that we want to be globally available. When installing via python -m pip […], packages will be installed to: ~/.asdf/installs/python/{version}/lib/python{version}/site-packages/
First, let's ensure we are using the latest version of Pip and Setuptools:
python -m pip install --upgrade pip setuptools
In the next section, we'll use Pip to install our first globally-available Python package.
Virtualenv
Python packages installed via Pip are global in the sense that they are available across all of your projects. That can be convenient at times, but it can also create problems. For example, sometimes one project needs the latest version of Django, while another project needs an older Django version to retain compatibility with a critical third-party extension. This is one of many use cases that Virtualenv was designed to solve. On my systems, only a handful of general-purpose Python packages (including Virtualenv) are globally available - every other package is confined to virtual environments.
With that explanation behind us, let's install Virtualenv:
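This presumably goes through Pip into the asdf-managed global site-packages, along these lines:
python -m pip install virtualenv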
Now we have Virtualenv installed and ready to create new virtual environments, which we will store in ~/Virtualenvs. New virtual environments can be created via:
cd ~/Virtualenvs
virtualenv project-a
If you have both Python 3.10.x and 3.11.x installed and want to create a Python 3.10.9 virtual environment:
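One way to do this is to point Virtualenv at the asdf-managed interpreter explicitly (the path below assumes asdf's default install location, and project-b is just a placeholder name):
virtualenv -p ~/.asdf/installs/python/3.10.9/bin/python project-b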
What happens if we think we are working in an active virtual environment, but there actually is no virtual environment active, and we install something via python -m pip install foobar? Well, in that case the foobar package gets installed into our global site-packages, defeating the purpose of our virtual environment isolation.
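To guard against that, Pip can be told to refuse to install anything outside a virtual environment. A minimal sketch of the relevant configuration file (assuming the XDG location ~/.config/pip/pip.conf; on macOS, Pip may instead read ~/Library/Application Support/pip/pip.conf):
[global]
require-virtualenv = true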
Thankfully, Pip has an undocumented setting (source) that tells it to bail out if there is no active virtual environment, which is exactly what we want. In fact, we've already set that above, via the require-virtualenv = true directive in Pip's configuration file. For example, let's see what happens when we try to install a package in the absence of an activated virtual environment:
python -m pip install markdown
Could not find an activated virtualenv (required).
Perfect! But once that option is set, how do we install or upgrade a global package? We can temporarily turn off this restriction by defining a new function in ~/.zshrc:
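A sketch of such a function, consistent with the PIP_REQUIRE_VIRTUALENV override mentioned below:
gpip(){
  PIP_REQUIRE_VIRTUALENV="0" python -m pip "$@"
}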
(As usual, after adding the above you must run source ~/.zshrc for the change to take effect.)
If in the future we want to upgrade our global packages, the above function enables us to do so via:
gpip install --upgrade pip setuptools virtualenv
You could achieve the same effect via PIP_REQUIRE_VIRTUALENV="0" python -m pip install --upgrade […], but that's much more cumbersome to type every time.
Creating virtual environments
Let's create a virtual environment for Pelican, a Python-based static site generator:
cd ~/Virtualenvs
virtualenv pelican
Change to the new environment and activate it via:
cd pelican
source bin/activate
To install Pelican into the virtual environment, we'll use Pip:
python -m pip install pelican markdown
For more information about virtual environments, read the Virtualenv docs.
Dotfiles
These are obviously just the basic steps to getting a Python development environment configured. Feel free to also check out my dotfiles.
If you found this article to be useful, feel free to find me on Twitter.
This release has some particularly interesting features that we've been wanting to ship for a while now. We're excited to share them with you!
For those who aren't familiar with Multipass, it's software that streamlines every aspect of managing and working with virtual machines. We've found that development, particularly for cloud applications, can often involve a huge amount of tedious work setting up development and testing environments. Multipass aims to solve that by making the process of creating and destroying VMs as simple as a single command, and by integrating the VM into your host machine and your development flow as much as possible.
That principle of integration is one of the main focuses we had for the 1.11 release. There are two major features out today that make Multipass much more integrated with your host machine - native mounts and directory mapping.
Performance has always been in Multipass' DNA - we try to keep it as lightweight as we can so that nothing gets between developers and their work. With the 1.11 release, we've taken another big step forward.
With the new native mounts feature, Multipass is getting a major performance boost. This feature uses platform-optimized software to make filesystems shared between the host computer and the virtual machine much faster than before. In benchmarking, we've seen speed gains of around 10x! For people sharing data with Multipass from their host machine, this is a huge time saver.
Multipass is one of the few VM management tools available to developers on Apple silicon. Performance mounts make the M1 and M2 even faster platforms for Ubuntu. For those who don't remember, Multipass can launch VMs on the Apple M1 and M2 in less than 20 seconds.
User experience
Multipass' performance leveled up with this release, and the user experience did as well! Directory mapping is a new way to be more efficient than ever with Multipass. Multipass has supported command aliasing for some time now, but one drawback of aliasing alone is that it loses the context of where the command is executed in the filesystem. Commands like docker-compose, for example, are context sensitive. They may rely on certain files being present in the working directory, or give different results depending on where they are run.
Directory mapping maintains the context of an aliased command, meaning that an aliased command sent from the host will be executed in the same context on the VM. This feature has the potential to make it feel like you are running Linux programs natively in your Mac or Windows terminal.
In addition to directory mapping, Blueprints now allow for alias and workspace definitions, meaning you can now spin up a new VM and start using aliased (and context-sensitive) commands in a shared filespace with no additional configuration required. Look for some examples in the near future!
Some other notable upgrades include the `transfer` command and UEFI booting. The `transfer` command now allows for recursive file transfers. This should make it much easier to transfer entire directories as opposed to individual folders or files. Multipass now boots its instances via UEFI which means we are able to support Ubuntu Core 20 and 22 for our IoT developers.
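For example, copying a whole project directory into an instance might look like this (assuming an instance named primary; check multipass transfer --help for the exact flag spelling in your version):
multipass transfer --recursive ./my-project primary:my-project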
To get started with Multipass, head to our install page or check out our tutorials. We always love to hear feedback from our community, so please let us know what you're up to by posting in discourse, or dropping in for our office hours.
This is the story of the currently progressing changes to secure boot on Ubuntu and the history of how we got to where we are.
taking a step back: how does secure boot on Ubuntu work?
Booting on Ubuntu involves three components after the firmware:
shim
grub
linux
Each of these is a PE binary signed with a key. The shim is signed by Microsoft's 3rd party key and embeds a self-signed Canonical CA certificate, and optionally a vendor dbx (a list of revoked certificates or binaries). grub and linux (and fwupd) are then signed by a certificate issued by that CA.
In Ubuntu's case, the CA certificate is sharded: Multiple people each have a part of the key and they need to meet to be able to combine it and sign things, such as new code signing certificates.
BootHole
When BootHole happened in 2020, travel was suspended and we hence could not rotate to a new signing certificate. So when it came to updating our shim for the CVEs, we had to revoke all previously signed kernels, grubs, shims, fwupds by their hashes.
This generated a very large vendor dbx which caused lots of issues as shim exported them to a UEFI variable, and not everyone had enough space for such large variables. Sigh.
We decided we want to rotate our signing key next time.
This was also when upstream added SBAT metadata to shim and grub. This gives a simple versioning scheme for security updates and easy revocation using a simple EFI variable that shim writes to and reads from.
Spring 2022 CVEs
We still were not ready for travel in 2021, but during BootHole we developed the SBAT mechanism, so one could revoke a grub or shim by setting a single EFI variable.
We actually missed rotating the shim this cycle as a new vulnerability was reported immediately after it, and we decided to hold on to it.
2022 key rotation and the fall CVEs
This caused some problems when the 2nd CVE round came, as we did not have a shim with the latest SBAT level, and neither did a lot of others, so we ended up deciding upstream to not bump the shim SBAT requirements just yet. Sigh.
Anyway, in October we were meeting again for the first time at a Canonical sprint, and the shardholders got together and created three new signing keys: 2022v1, 2022v2, and 2022v3. It took us until January before they were installed into the signing service and PPAs were set up to sign with them.
We also submitted a shim 15.7 with the old keys revoked which came back at around the same time.
Now we were in a hurry. The 22.04.2 point release was scheduled for around the middle of February, and we had nothing signed with the new keys yet, but our new shim, which we need for the point release (so the point release media remains bootable after the next round of CVEs), required the new keys.
So how do we ensure that users have kernels, grubs, and fwupd signed with the new key before we install the new shim?
upgrade ordering
grub and fwupd are simple cases: For grub, we depend on the new version. We decided to backport grub 2.06 to all releases (which moved focal and bionic up from 2.04), and kept the versioning of the -signed packages the same across all releases, so we were able to simply bump the Depends for grub to specify the new minimum version. For fwupd-efi, we added Breaks.
(Actually, we also had a backport of the CVEs for 2.04 based grub, and we did publish that for 20.04 signed with the old keys before backporting 2.06 to it.)
Kernels are a different story: There are about 60 kernels out there. My initial idea was that we could just add Breaks for all of them. So, for our meta package linux-image-generic, which depends on linux-image-$(uname -r)-generic, we'd simply add Breaks: linux-image-generic (<< 5.19.0-31) and then adjust those Breaks for each series. This would have been super annoying, but ultimately I figured this would be the safest option. This, however, caused concern, because it could be that apt decides to remove the kernel metapackage.
I explored checking the kernels at runtime and aborting if we don't have a trusted kernel in preinst. This ensures that if you try to upgrade shim without having a kernel, it would fail to install. But this ultimately has a couple of issues:
It aborts the entire transaction at that point, so users will be unable to run apt upgrade until they have a recent kernel.
We cannot even guarantee that a kernel would be unpacked first. So even if you got a new kernel, apt/dpkg might attempt to unpack it first and then the preinst would fail because no kernel is present yet.
Ultimately we believed the danger to be too large given that no kernels had yet been released to users. If we had kernels pushed out for 1-2 months already, this would have been a viable choice.
So in the end, I ended up modifying the shim packaging to install both the latest shim and the previous one, and an update-alternatives alternative to select between the two:
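Roughly speaking, the packaging registers something like this (the paths match the alternatives shown later in this post; the priorities are the values described below):
update-alternatives --install /usr/lib/shim/shimx64.efi.signed shimx64.efi.signed /usr/lib/shim/shimx64.efi.signed.latest 100
update-alternatives --install /usr/lib/shim/shimx64.efi.signed shimx64.efi.signed /usr/lib/shim/shimx64.efi.signed.previous 50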
In its post-installation maintainer script, shim-signed checks whether all kernels with a version greater than or equal to the running one are not revoked, and if so, it will set up the latest alternative with priority 100 and the previous one with priority 50. If one or more of those kernels was signed with a revoked key, it will swap the priorities around, so that the previous version is preferred.
Now this is fairly static, and we do want you to switch to the latest shim eventually, so I also added hooks to the kernel install to trigger the shim-signed postinst script when a new kernel is being installed. It will then update the alternatives based on the current set of kernels, and if it now points to the latest shim, reinstall shim and grub to the ESP.
Ultimately this means that once you install your 2nd non-revoked kernel, or you install a non-revoked kernel and then reconfigure shim or the kernel, you will get the latest shim. When you install your first non-revoked kernel, your currently booted kernel is still revoked, so it's not upgraded immediately. This has a benefit in that you will most likely have two kernels you can boot without disabling secure boot.
regressions
Of course, the first version I uploaded still had some remaining hardcoded "shimx64" in the scripts and so failed to install on arm64, where "shimaa64" is used. And if that were not enough, I also forgot to include support for gzip-compressed kernels there. Sigh, I need better testing infrastructure to be able to easily run arm64 tests as well (I only tested the actual booting there, not the scripts).
shim-signed migrated to the release pocket in lunar fairly quickly, but this caused images to stop working, because the new shim was installed into images while no kernel was available yet, so we had to demote it to proposed and block migration. Despite all the work done for end users, we need to be careful when rolling this out for image building.
another grub update for OOM issues
We had two grubs to release: First there was the security update for the recent set of CVEs, then there also was an OOM issue for large initrds which was blocking critical OEM work.
We fixed the OOM issue by cherry-picking all 2.12 memory management patches, as well as the Red Hat patches to the loader that we take from there. This ended up being a fairly large patch set, and I was hesitant to tie the security update to it, so I ended up pushing the security update everywhere first and then pushed the OOM fixes this week.
With the OOM patches, you should be able to boot initrds of between 400 MB and 1 GB; it also depends on the memory layout of your machine, your screen resolution and background images. The OEM team had success testing 400 MB in real life, and I tested up to (I think) 1.2 GB in qemu; I ran out of FAT space then and stopped going higher :D
other features in this round
Intel TDX support in grub and shim
Kernels are now allocated as CODE rather than DATA, as per the upstream mm changes; this might fix boot on the X13s
am I using this yet?
The new signing keys are used in:
shim-signed 1.54 on 22.10+, 1.51.3 on 22.04, 1.40.9 on 20.04, 1.37~18.04.13 on 18.04
grub2-signed 1.187.2~ or newer (binary packages grub-efi-amd64-signed or grub-efi-arm64-signed), 1.192 on 23.04.
fwupd-signed 1.51~ or newer
various linux updates. Check apt changelog linux-image-unsigned-$(uname -r) to see whether "Revoke & rotate to new signing key (LP: #2002812)" is mentioned in there; if it is, the kernel is signed with the new key.
If you were able to install shim-signed, your grub and fwupd-efi will have the correct version as that is ensured by packaging. However your shim may still point to the old one. To check which shim will be used by grub-install, you can check the status of the shimx64.efi.signed or (on arm64) shimaa64.efi.signed alternative. The best link needs to point to the file ending in latest:
$ update-alternatives --display shimx64.efi.signed
shimx64.efi.signed - auto mode
link best version is /usr/lib/shim/shimx64.efi.signed.latest
link currently points to /usr/lib/shim/shimx64.efi.signed.latest
link shimx64.efi.signed is /usr/lib/shim/shimx64.efi.signed
/usr/lib/shim/shimx64.efi.signed.latest - priority 100
/usr/lib/shim/shimx64.efi.signed.previous - priority 50
If it does not, but you have installed a new kernel compatible with the new shim, you can switch immediately to the new shim after rebooting into the kernel by running dpkg-reconfigure shim-signed. You'll see in the output if the shim was updated, or you can check the output of update-alternatives as you did above after the reconfiguration has finished.
For the out of memory issues in grub, you need grub2-signed 1.187.3~ (same binaries as above).
how do I test this (while it's in proposed)?
upgrade your kernel to proposed and reboot into that
upgrade your grub-efi-amd64-signed, shim-signed, fwupd-signed to proposed.
If you already upgraded your shim before your kernel, don't worry:
upgrade your kernel and reboot
run dpkg-reconfigure shim-signed
And you'll be all good to go.
deep dive: uploading signed boot assets to Ubuntu
For each signed boot asset, we build one version in the latest stable release and the development release. We then binary copy the built binaries from the latest stable release to older stable releases. This process ensures two things: We know the next stable release is able to build the assets and we also minimize the number of signed assets.
OK, I lied. For shim, we actually do not build in the development release but copy the binaries upward from the latest stable, as each shim needs to go through external signing.
The entire workflow looks something like this:
Upload the unsigned package to one of the following "build" PPAs:
Copy the unsigned package back across all stable releases in the PPA
Upload the signed package for stable releases to the same PPA with ~<release>.1 appended to the version
Submit a request to canonical-signing-jobs to sign the uploads.
The signing job helper copies the binary -unsigned packages to the primary-2022v1 PPA where they are signed, creating a signing tarball. It then copies the source package for the -signed package to the same PPA, which downloads the signing tarball during build and places the signed assets into the -signed deb.
This step is not strictly necessary, but it enables tools like sru-review to work, as they cannot access the packages from the normal private "proposed" PPA.
Binary copy from proposed-public to the proposed queue(s) in the primary archive
Lots of steps!
WIP
As of writing, only the grub updates have been released, other updates are still being verified in proposed. An update for fwupd in bionic will be issued at a later point, removing the EFI bits from the fwupd 1.2 packaging and using the separate fwupd-efi project instead like later release series.
This is a non-comprehensive list of all of the major work I've done for KDE this month of January. I think I got a lot done this month! I also was accepted as a KDE Developer near the start of the month, so I'm pretty happy about that.
Sorry that it's pretty much only text; a lot of this stuff either isn't screenshottable or I'm too lazy to attach an image. Next month should be better!
Custom icon theme in Tokodon
I threw all of the custom icons we use in Tokodon into a proper custom icon theme, which should automatically match your theme and includes a dark theme variant. In the future, I'd like to recolor these better and eventually upstream them into Breeze.
As part of cleaning up some KDE games-related stuff, I also looked into the issue of duplicate "What's This?" tooltips. This also fixes that visual bug where you can close normal tooltips that don't have "What's This?" information to actually open.
This one isn't merged yet, but in the future, KBlocks theme authors will be able to specify where to pin the background instead of having it stretched by default.
I added something that's been wanted for a while, Kirigami's own "About KDE" dialog! It's currently sitting in Add-ons, but will most likely be moved in the future. If you would like to suggest what we do about the About pages/windows in KDE, please check out the proposal.
I did a lot of work improving media in Tokodon this month, including fixing the aspect ratios scaling correctly, video support (not merged yet) and other miscellaneous fixes. I also caught a bunch of blurhash bugs along with making the timeline fixed-width so images aren't absurdly sized on a typical desktop display.
Not merged yet (due to MPRIS bugginess in general?) but I cracked a shot at improving the MPRIS situation with Gwenview. Notably, slideshow controls no longer "hang around" until a slideshow is actually happening.
I worked a little on solving the kdesrc-build issue of manual package lists, and created cmake-package-installer. It parses your CMake log and installs the relevant packages for you. I want to start looking into hooking this into kdesrc-build!
I made some misc changes to the Community Wiki this month, mostly centered around fixing some long-standing formatting issues I've noticed. The homepage should be more descriptive, important pages no longer misformatted (or just missing?) and the Get Involved/Development page should be better organized.
Misc Qt patches
I cherry-picked a Qt6 commit fixing video playback in QML, which should appear in the next Qt KDE Patch collection update, mostly for use in Tokodon when video support lands. I also submitted an upstream Qt patch fixing WebP loading, meant for NeoChat where I see the most WebP images.
This isn't merged yet (but it's close!) so it barely misses the mark for January, but I'll include it anyway. I'm working on making the Window Decoration KCM frameless and give it a new look that matches the other KCMs.
I previously wrote a post talking about some optimization work that's been done with RADV to improve fast-link performance. As promised, that wasn't the end of the story. Today's post will be a bit different, however, as I'll be assuming all the graphics experts in the audience are already well-versed in all the topics I'm covering.
Also I'm assuming you're all driver developers interested in improving your GPL fast-link performance.
The one exception is that today I'll be using a specific definition for fast when it comes to fast-linking: to be fast, a driver should be able to fast-link in under 0.01ms. In an extremely CPU-intensive application, this should allow for even the explodiest of pipeline explosions (100+ fast-links in a single frame) to avoid any sort of hitching/stuttering.
Which drivers have what it takes to be fast?
Testing
To begin evaluating fast-link performance, it's important to have test cases. Benchmarks. The sort that can be easily run, easily profiled, easily understood.
vkoverhead is the premier tool for evaluating CPU overhead in Vulkan drivers, and thanks to Valve, it now has legal support for GPL fast-link using real pipelines from Dota2. That's right. Acing this synthetic benchmark will have real world implications.
For anyone interested in running these cases, it's as simple as building and then running:
./vkoverhead -start 135
These benchmark cases will call vkCreateGraphicsPipelines in a tight loop to perform a fast-link on GPL-created pipeline libraries, fast-linking thousands of times per second for easy profiling. The number of iterations per second, in thousands, is then printed.
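For readers who want to see what's actually being timed, here's a minimal sketch of a GPL fast-link at the API level. This is not vkoverhead's actual code; the function and its parameters are made up for illustration, and only the extension-defined structs and flags are real:

```c
#include <vulkan/vulkan.h>

/* Minimal sketch (not vkoverhead's code): combine four pre-built GPL
 * stage libraries into a usable pipeline. Leaving out
 * VK_PIPELINE_CREATE_LINK_TIME_OPTIMIZATION_BIT_EXT is what makes this
 * a fast link rather than a full optimized compile. */
static VkPipeline fast_link(VkDevice dev, const VkPipeline libs[4],
                            VkPipelineLayout layout)
{
    VkPipelineLibraryCreateInfoKHR link = {
        .sType = VK_STRUCTURE_TYPE_PIPELINE_LIBRARY_CREATE_INFO_KHR,
        .libraryCount = 4,
        .pLibraries = libs,
    };
    VkGraphicsPipelineCreateInfo info = {
        .sType = VK_STRUCTURE_TYPE_GRAPHICS_PIPELINE_CREATE_INFO,
        .pNext = &link,
        .layout = layout,
    };
    VkPipeline pipeline = VK_NULL_HANDLE;
    vkCreateGraphicsPipelines(dev, VK_NULL_HANDLE, 1, &info, NULL, &pipeline);
    return pipeline;
}
```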
vkoverhead works with any Vulkan driver on any platform (including Windows!), which means it's possible to use it to profile and optimize any driver.
Optimization
vkoverhead currently has two cases for GPL fast-link. As they are both extracted directly from Dota2, they have a number of properties in common:
similar descriptor layouts/requirements
same composition of libraries (all four GPL stages created separately)
Each case tests the following:
depthonly is a pipeline containing only a vertex shader, forcing the driver to use its own fragment shader
slow is a pipeline that happens to be slow to create on many drivers
Various tools are available on different platforms for profiling, and I'm not going to go into details here. What I'm going to do instead is look into strategies for optimizing drivers. Strategies that I (and others) have employed in real drivers. Strategies that you, if you aren't shipping a fast-linking implementation of GPL, might be interested in.
First Strategy: Move NO-OP Fragment Shader To Device
The depthonly case explicitly tests whether drivers are creating a new fragment shader for every pipeline that lacks one. Drivers should not do this.
Instead, create a single fragment shader on the device object and reuse it like these drivers do:
In addition to being significantly faster, this also saves some memory.
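As a purely illustrative sketch (made-up types and helpers, not code from any real driver), the no-op fragment shader can live on the device as a lazily created singleton:

```c
#include <threads.h>

/* Stand-ins for real driver objects. */
typedef struct shader { int dummy; } shader;

typedef struct device {
    shader *noop_fs;     /* created once, shared by every pipeline */
    mtx_t   noop_fs_lock;
} device;

/* Hypothetical helper; a real driver would build a trivial
 * "do nothing" fragment shader here. */
static shader *compile_noop_fs(device *dev)
{
    static shader noop;
    (void)dev;
    return &noop;
}

/* Pipelines that arrive without a fragment shader all get this one
 * instead of triggering a per-pipeline compile. */
static shader *get_noop_fs(device *dev)
{
    mtx_lock(&dev->noop_fs_lock);
    if (!dev->noop_fs)
        dev->noop_fs = compile_noop_fs(dev);
    mtx_unlock(&dev->noop_fs_lock);
    return dev->noop_fs;
}
```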
Second Strategy: Avoid Copying Shader IR
Regular, optimized pipeline creation typically involves running optimization passes across the shader binaries, possibly even the entire pipeline, to ensure that various speedups can be found. Many drivers copy the internal shader IR in the course of pipeline creation to handle shader variants.
Don't copy shader IR when trying to fast-link a pipeline.
Copying IR is very expensive, especially in larger shaders. Instead, either precompile unoptimized shader binaries in their corresponding GPL stage or refcount IR structures that must exist during execution. Examples:
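As a purely illustrative sketch of the refcounting alternative (hypothetical types and names, not taken from any real driver):

```c
#include <stdatomic.h>
#include <stdlib.h>

/* Stand-in for a driver's shader IR. */
typedef struct shader_ir {
    atomic_int refcount;
    /* ...instructions, metadata... */
} shader_ir;

static shader_ir *ir_ref(shader_ir *ir)
{
    atomic_fetch_add(&ir->refcount, 1);
    return ir;
}

static void ir_unref(shader_ir *ir)
{
    if (atomic_fetch_sub(&ir->refcount, 1) == 1)
        free(ir);
}

/* Fast-link path: take a reference on the library's IR instead of
 * deep-copying it the way an optimized compile might. */
static shader_ir *fast_link_take_ir(shader_ir *library_ir)
{
    return ir_ref(library_ir);
}
```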
Third Strategy: Skip Caching Fast-Linked Pipelines
There's no reason to cache a fast-linked pipeline. The amount of time saved by retrieving a cached pipeline should be outweighed by the amount of time required to:
compute a key/hash for a given pipeline
access the cache
I say should because ideally a driver should be so fast at combining a GPL pipeline that even a cache hit is only comparable performance, if not slower outright. Skip all aspects of caching for these pipelines.
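A hedged sketch of what skipping the cache can look like in a driver's create path; the entrypoint is hypothetical, but the flag and struct checks come straight from the extension:

```c
#include <stdbool.h>
#include <vulkan/vulkan.h>

/* A fast link is a link of pipeline libraries created without the
 * link-time-optimization flag. */
static bool is_fast_link(const VkGraphicsPipelineCreateInfo *info)
{
    if (info->flags & VK_PIPELINE_CREATE_LINK_TIME_OPTIMIZATION_BIT_EXT)
        return false;
    for (const VkBaseInStructure *s = info->pNext; s; s = s->pNext) {
        if (s->sType == VK_STRUCTURE_TYPE_PIPELINE_LIBRARY_CREATE_INFO_KHR)
            return true;
    }
    return false;
}

/* Hypothetical driver entrypoint: only full compiles touch the
 * pipeline cache; fast links skip hashing and lookup entirely. */
static void create_pipeline(const VkGraphicsPipelineCreateInfo *info)
{
    if (!is_fast_link(info)) {
        /* hash the state, then look up / insert into the cache */
    }
    /* link or compile the pipeline */
}
```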
If a driver is still slow after checking for the above items, it's time to try profiling. It's surprising what slowdowns drivers will hit. The classics I've seen are large memset calls and avoidable allocations.
Lavapipe deferred CSO creation*
Lavapipe is special: it's a CPU-based driver using LLVM, meaning the time spent outside LLVM will never exceed the time spent inside LLVM, and it executes its command buffers in a thread. Thus, it can "cheat" by deferring the final creation of its shader CSOs until pipeline bind time.
A Mystery Solved
In my previous post, I alluded to a driver that was shipping a GPL implementation that advertised fast-link but wasn't actually fast. I saw a lot of guesses. Nobody got it right.
It was Lavapipe (me) all along.
As hinted at above, however, this is no longer the case. In fact, after going through the listed strategies, Lavapipe now has the fastest GPL linking in the world.
Obviously it would have to if I'm writing a blog post about optimizing fast-linking, right?
Fast-linking: Initial Comparisons
How fast is Lavapipe's linking, you might ask?
To answer this, let's first apply a small patch to bump up Lavapipe's descriptor limits so it can handle the beefy Dota2 pipelines. With that done, here's a look at comparisons to other, more legitimate drivers, all running on the same system.
NVIDIA is the gold standard for GPL fast-linking considering how long they've been shipping it. They're pretty fast.
$ VK_ICD_FILENAMES=nvidia_icd.json ./vkoverhead -start 135 -duration 5
vkoverhead running on NVIDIA GeForce RTX 2070:
* misc numbers are reported as thousands of operations per second
* percentages for misc cases should be ignored
135, misc_compile_fastlink_depthonly, 444, 100.0%
136, misc_compile_fastlink_slow, 243, 100.0%
RADV (with pending MRs applied) has gotten incredibly fast over the past week-ish.
$ RADV_PERFTEST=gpl ./vkoverhead -start 135 -duration 5
vkoverhead running on AMD Radeon RX 5700 XT (RADV NAVI10):
* misc numbers are reported as thousands of operations per second
* percentages for misc cases should be ignored
135, misc_compile_fastlink_depthonly, 579, 100.0%
136, misc_compile_fastlink_slow, 537, 100.0%
Lavapipe (with pending MRs applied) blows them both out of the water.
$ VK_ICD_FILENAMES=lvp_icd.x86_64.json ./vkoverhead -start 135 -duration 5
vkoverhead running on llvmpipe (LLVM 15.0.6, 256 bits):
* misc numbers are reported as thousands of operations per second
* percentages for misc cases should be ignored
135, misc_compile_fastlink_depthonly, 1485, 100.0%
136, misc_compile_fastlink_slow, 1464, 100.0%
Even if the NVIDIA+RADV numbers are added together, it's still not close.
Fast-linking: More Comparisons
If I switch over to a different machine, Intel's ANV driver has a MR for GPL open, and it's seeing some movement. Here's a head-to-head with the champion.
$ ./vkoverhead -start 135 -duration 5
vkoverhead running on Intel(R) Iris(R) Plus Graphics (ICL GT2):
* misc numbers are reported as thousands of operations per second
* percentages for misc cases should be ignored
135, misc_compile_fastlink_depthonly, 384, 100.0%
136, misc_compile_fastlink_slow, 276, 100.0%
$ VK_ICD_FILENAMES=lvp_icd.x86_64.json ./vkoverhead -start 135 -duration 5
vkoverhead running on llvmpipe (LLVM 15.0.6, 256 bits):
* misc numbers are reported as thousands of operations per second
* percentages for misc cases should be ignored
135, misc_compile_fastlink_depthonly, 1785, 100.0%
136, misc_compile_fastlink_slow, 1779, 100.0%
On yet another machine, here's Turnip, which advertises the fast-link feature. This driver requires a small patch to bump MAX_SETS to 5 since it is hardcoded at 4. I've also pinned execution here to the big cores for consistency.
# turnip ooms itself with -duration
$ ./vkoverhead -start 135
vkoverhead running on Turnip Adreno (TM) 618:
* misc numbers are reported as thousands of operations per second
* percentages for misc cases should be ignored
135, misc_compile_fastlink_depthonly, 73, 100.0%
136, misc_compile_fastlink_slow, 23, 100.0%
$ VK_ICD_FILENAMES=lvp_icd.aarch64.json ./vkoverhead -start 135 -duration 5
vkoverhead running on llvmpipe (LLVM 14.0.6, 128 bits):
* misc numbers are reported as thousands of operations per second
* percentages for misc cases should be ignored
135, misc_compile_fastlink_depthonly, 690, 100.0%
136, misc_compile_fastlink_slow, 699, 100.0%
More Analysis
We've seen that Lavapipe is unequivocally the champion of fast-linking in every head-to-head, but what does this actually look like in timings?
Here's a chart that shows the breakdown in milliseconds.
| Driver | `misc_compile_fastlink_depthonly` | `misc_compile_fastlink_slow` |
| --- | --- | --- |
| NVIDIA | 0.002ms | 0.004ms |
| RADV | 0.0017ms | 0.0019ms |
| Lavapipe | 0.0007ms | 0.0007ms |
| ANV | 0.0026ms | 0.0036ms |
| Lavapipe | 0.00056ms | 0.00056ms |
| Turnip | 0.0137ms | 0.0435ms |
| Lavapipe | 0.001ms | 0.001ms |
As we can see, all of these drivers are "fast". A single fast-link pipeline isn't likely to cause any of them to drop a frame.
The driver I've got my eye on, however, is Turnip, which is the only one of the tested group that doesn't quite hit that 0.01ms target. A little bit of profiling might show some easy gains here.
Even More Analysis
For another view of these drivers, let's examine the relative performance. Since GPL fast-linking is inherently a CPU task that has no relation to the GPU, it stands to reason that a CPU-based driver should be able to optimize for it the best given that there's already all manner of hackery going on to defer and delay execution. Indeed, reality confirms this, and looking at any profile of Lavapipe for the benchmark cases reveals that the only remaining bottleneck is the speed of malloc, which is to say the speed with which the returned pipeline object can be allocated.
Thus, ignoring potential micro-optimizations of pipeline struct size, it can be said that Lavapipe has effectively reached the maximum speed of the system for fast-linking. From there, we can say that any other driver running on the same system is utilizing some fraction of this power.
Therefore, every other driver's fast-link performance can be visualized in units of Lavapipe (lvps) to determine how much gain is possible if things like refactoring time and feasibility are ignored.
| Driver | `misc_compile_fastlink_depthonly` | `misc_compile_fastlink_slow` |
| --- | --- | --- |
| NVIDIA | 0.299lvps | 0.166lvps |
| RADV | 0.390lvps | 0.367lvps |
| ANV | 0.215lvps | 0.155lvps |
| Turnip | 0.106lvps | 0.033lvps |
The great thing about lvps is that these are comparable units.
At last, we finally have a way to evaluate all these drivers in a head-to-head across different systems.
The results are a bit surprising to me:
RADV, with the insane heroics of Samuel "Gotta Go Fast" Pitoiset, has gone from roughly zero lvps last week to first place this week
NVIDIA's fast-linking, while quite fast, is closer in performance to an unoptimized, unlanded ANV MR than it is to RADV
Turnip is a mobile driver that both 1) has a GPL implementation and 2) is kinda fast at an objective level?
Key Takeaways
Aside from the strategies outlined above, the key takeaway for me is that there shouldn't be any hardware limitation to implementing fast-linking. It's a CPU-based architectural problem, and with enough elbow grease, any driver can aspire to reach nonzero lvps in vkoverhead's benchmark cases.
After several days of head scratching, debugging and despair I finally got font subsetting working in PDF. The text renders correctly in Okular, goes through Ghostscript without errors and even passes an online PDF validator I found. But not Acrobat Reader, which chokes on it completely and refuses to show anything. Sigh.
The most likely cause is that the subset font that gets generated during this operation is not 100% valid. The approach I use is almost identical to what LO does, but for some reason their PDFs work. Opening both files in FontForge seems to indicate that the .notdef glyph definition is somehow not correct, but offers no help as to why.
In any case it seems like there would be a need for a library for PDF generation. Existing libs either do not handle non-RGB color spaces or are implemented in Java, Ruby or other languages that are hard to use from languages other than themselves. Many programs, like LO and Scribus, have their own libraries for generating PDF files. It would be nice if there could be a single library for that.
Is this a reasonable idea that people would actually be interested in? I don't know, but let's try to find out. I'm going to spend next weekend at FOSDEM. So if you are going too and are interested in PDF generation, write a comment below or send me an email, I guess? Maybe we can have a shadow meeting in the cafeteria.
Flatpak applications are based on runtimes such as KDE or Gnome Runtimes. Both of these runtimes are actually based on Freedesktop SDK which contains essential libraries and services such as Wayland or D-Bus.
Recently there has been a lot of discussion about supply chain attacks, so it might be interesting to ask how the Freedesktop SDK itself is built. The answer can be found in the freedesktop-sdk repository:
So it is built using an older version of the Freedesktop SDK image. There is now an approved merge request that completely reworks the bootstrapping of the Freedesktop SDK. It uses another intermediate docker image, freedesktop-sdk-binary-seed, which bridges the gap between freedesktop-sdk and live-bootstrap.
So what is this live-bootstrap? If you look at parts.rst you'll see that it is a build chain that starts with a 256-byte hex assembler that can build itself from its own source, plus a 640-byte trivial shell that reads a list of commands from a file and executes them. It then proceeds to build 130 (at the time of writing) other components, and in the process builds GCC, Python, Guile, Perl and lots of other supporting packages. Furthermore, each component is built reproducibly (and this is checked using SHA256 hashes).
One caveat: at the moment freedesktop-sdk-binary-seed still uses an older rustc binary to build rustc, but in principle one could leverage mrustc to build it. Or possibly rust-gcc will become more capable in future versions and will be able to bootstrap rustc.
So unless your flatpak application uses Rust, it will soon be buildable from a sub-1-KiB binary seed.
This tutorial covers how to properly install the latest version of [Django (4.1)](https://www.djangoproject.com/) and [Python (3.11)](https://www.python.org). As the [official docs note](https://docs.djangoproject.com/en/dev/topics/install/), if you are already familiar with the command line, …
As we gear up for the release of GNOME 44, let's take a moment to reflect on the visual design updates.
We've made big strides in visual consistency with the growing number of apps that have been ported to gtk4 and libadwaita, embracing the modern look. Sam has also given the high-contrast style in GNOME Shell some love, keeping it in line with gtk's updates last cycle.
The default wallpaper stays true to our brand, but the supplementary set has undergone some bolder changes. From the popular simple shape blends to a nostalgic nod to the past with keypad and pixel art designs, there's something for everyone. The pixelized icons made their debut in the last release, but this time we focus on GNOME Circle apps, rather than the core apps.
Another exciting development is the continued use of geometry nodes in Blender. Although the tool has a steep learning curve, I'm starting to enjoy my time with it. I gave a talk on geometry nodes and their use for GNOME wallpaper design at the Fedora Creative Freedom Summit. You can watch the stream archive recording here (and part 2).
After my previous efforts, I wrote up a PKCS#11 module of my own that had no odd restrictions about using non-RSA keys and I tested it. And things looked much better - ssh successfully obtained the key, negotiated with the server to determine that it was present in authorized_keys, and then went to actually do the key verification step. At which point things went wrong - the Sign() method in my PKCS#11 module was never called, and a strange error appeared in the ssh output:
debug1: identity_sign: sshkey_sign: error in libcrypto
sign_and_send_pubkey: signing failed for ECDSA "testkey": error in libcrypto
Odd. libcrypto was originally part of OpenSSL, but Apple ship the LibreSSL fork. Apple don't include the LibreSSL source in their public source repo, but do include OpenSSH. I grabbed the OpenSSH source and jumped through a whole bunch of hoops to make it build (it uses the macosx.internal SDK, which isn't publicly available, so I had to cobble together a bunch of headers from various places), and also installed upstream LibreSSL with a version number matching what Apple shipped. And everything worked - I logged into the server using a hardware-backed key.
Was the difference in OpenSSH or in LibreSSL? Telling my OpenSSH to use the system libcrypto resulted in the same failure, so it seemed pretty clear this was an issue with the Apple version of the library. The way all this works is that when OpenSSH has a challenge to sign, it calls ECDSA_do_sign(). This then calls ECDSA_do_sign_ex(), which in turn follows a function pointer to the actual signature method. By default this is a software implementation that expects to have the private key available, but you can also register your own callback that will be used instead. The OpenSSH PKCS#11 code does this by calling EC_KEY_set_method(), and as a result calling ECDSA_do_sign() ends up calling back into the PKCS#11 code that then calls into the module that communicates with the hardware and everything works.
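To make that dispatch concrete, here's a rough sketch (not the actual OpenSSH code) of how a PKCS#11 backend can hook ECDSA signing via EC_KEY_set_method(); the pkcs11_sign_sig callback is hypothetical and its body is elided:

```c
#include <openssl/ec.h>
#include <openssl/ecdsa.h>

/* Hypothetical callback for the "sign_sig" slot: a real module would
 * hand the digest to the hardware via C_Sign() and wrap the result in
 * an ECDSA_SIG. */
static ECDSA_SIG *pkcs11_sign_sig(const unsigned char *dgst, int dgst_len,
                                  const BIGNUM *in_kinv, const BIGNUM *in_r,
                                  EC_KEY *eckey)
{
    (void)dgst; (void)dgst_len; (void)in_kinv; (void)in_r; (void)eckey;
    return NULL; /* placeholder */
}

/* After this, ECDSA_do_sign() on this key is supposed to dispatch to
 * pkcs11_sign_sig instead of the software implementation. */
static void hook_pkcs11_signing(EC_KEY *eckey)
{
    EC_KEY_METHOD *meth = EC_KEY_METHOD_new(EC_KEY_OpenSSL());
    EC_KEY_METHOD_set_sign(meth, NULL, NULL, pkcs11_sign_sig);
    EC_KEY_set_method(eckey, meth);
}
```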
Except it doesn't under macOS. Running under a debugger and setting a breakpoint on ECDSA_do_sign(), I saw that we went down a code path with a function called ECDSA_do_sign_new(). This doesn't appear in any of the public source code, so it seems to be an Apple-specific patch. I pushed Apple's libcrypto into Ghidra and looked at ECDSA_do_sign() and found something that approximates this:
nid = EC_GROUP_get_curve_name(curve);
if (nid == NID_X9_62_prime256v1) {
    return ECDSA_do_sign_new(dgst, dgst_len, eckey);
}
return ECDSA_do_sign_ex(dgst, dgst_len, NULL, NULL, eckey);
What this means is that if you ask ECDSA_do_sign() to sign something on a Mac, and if the key in question corresponds to the NIST P256 elliptic curve type, it goes down the ECDSA_do_sign_new() path and never calls the registered callback. This is the only key type supported by the Apple Secure Enclave, so I assume it's special-cased to do something with that. Unfortunately the consequence is that it's impossible to use a PKCS#11 module that uses Secure Enclave keys with the shipped version of OpenSSH under macOS. For now I'm working around this with an SSH agent built using Go's agent module, forwarding most requests through to the default session agent but appending hardware-backed keys and implementing signing with them, which is probably what I should have done in the first place.
Here's a bunch of handy commands that I've conceived to semi-automatically remove old versions of packages that do not have stable keywords (and therefore are not subject to post-stabilization cleanups that I do normally).
Requirements
The snippets below require the following packages:
app-portage/mgorny-dev-scripts
dev-util/pkgcheck
dev-util/pkgdev
They should be run in the top directory of a ::gentoo checkout, ideally with no other changes queued.
Remove redundant versions
First, a pipeline that finds all packages without stable amd64 keywords (note: this is making an assumption that there are no packages that are stable only on some other architecture), then scans these packages for redundant versions and removes them. The example command operates on dev-python/*:
Use git restore -WS ... to restore versions as necessary and repeat until it comes out clean.
Check for stale files
Finally, iterate over packages with files/ to check for stale patches:
(
for x in $(
git diff --name-only --cached | cut -d'/' -f1-2 | sort -u
); do
[[ -d ${x}/files ]] && ( cd ${x}; bash )
done
)
This will start bash inside every cleaned up package (note: I'm assuming that there are no other changes in the repo) that has a files/ directory. Use a quick grep followed by FILESDIR lookup:
grep FILES *.ebuild
ls files/
Remove the files that are no longer referenced.
Commit the removals
for x in $(
git diff --name-only --cached | cut -d'/' -f1-2 | sort -u
); do
(
cd ${x} && pkgdev manifest &&
pkgcommit . -sS -m 'Remove old'
)
done
pkgcheck scan --commits
I know everyone's been eagerly awaiting the return of the pasta maker.
The wait is over.
But today we're going to move away from those dangerous, addictive synthetic benchmarks to look at a different kind of speed. That's right. Today we're looking at pipeline compile speed. Some of you are scoffing, mouse pointer already inching towards the close button on the tab.
Pipeline compile speed in the current year? Why should anyone care when we have great tools like Fossilize that can precompile everything for a game over the course of several hours to ensure there's no stuttering?
It turns out there's at least one type of pipeline compile that still matters going forward. Specifically, I'm talking about fast-linked pipelines using VK_EXT_graphics_pipeline_library.
Let's get an appetizer going, some exposition under our belts before we get to the spaghetti we're all craving.
Pipelines: They're In Your Games
All my readers are graphics experts. It won't come as any surprise when I say that a pipeline is a program containing shaders which is used by the GPU. And you all know how VK_EXT_graphics_pipeline_library enables compiling partial pipelines into libraries that can then be combined into a full pipeline. None of you need a refresher on this, and we all acknowledge that I'm just padding out the word count of this post for posterity.
Some of you experts, however, have been so deep into getting those green triangles on the screen to pass various unit tests that you might not be fully aware of the fast-linking property of VK_EXT_graphics_pipeline_library.
In general, compiling shaders during gameplay is (usually) bad. This is (usually) what causes stuttering: the compilation of a pipeline takes longer than the available time to draw the frame, and rendering blocks until compilation completes. The fast-linking property of VK_EXT_graphics_pipeline_library changes this paradigm by enabling pipelines, e.g., for shader variants, to be created fast enough to avoid stuttering.
Typically, this is utilized in applications through the following process:
create pipeline libraries from shaders during game startup or load screen
wait until pipeline is needed at draw-time
fast-link final pipeline and use for draw
background compile an optimized version of the same pipeline for future use
In this way, no draw is blocked by a pipeline creation, and optimized pipelines are still used for the majority of GPU operations.
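As a hedged sketch of the load-screen step (assuming the shader stage and pipeline layout already exist, and omitting most state and error handling), creating one of the four stage libraries looks roughly like this. The draw-time step is then a vkCreateGraphicsPipelines call chaining VkPipelineLibraryCreateInfoKHR, without the link-time-optimization flag for the fast link and with it for the background-compiled optimized version:

```c
#include <vulkan/vulkan.h>

/* Sketch: build the fragment-shader stage as a GPL library. The other
 * three stages use the same pattern with different
 * VK_GRAPHICS_PIPELINE_LIBRARY_* flags. */
static VkPipeline build_fragment_library(VkDevice dev,
                                         VkPipelineShaderStageCreateInfo fs,
                                         VkPipelineLayout layout)
{
    VkGraphicsPipelineLibraryCreateInfoEXT lib = {
        .sType = VK_STRUCTURE_TYPE_GRAPHICS_PIPELINE_LIBRARY_CREATE_INFO_EXT,
        .flags = VK_GRAPHICS_PIPELINE_LIBRARY_FRAGMENT_SHADER_BIT_EXT,
    };
    VkGraphicsPipelineCreateInfo info = {
        .sType = VK_STRUCTURE_TYPE_GRAPHICS_PIPELINE_CREATE_INFO,
        .pNext = &lib,
        /* LIBRARY_BIT marks a partial pipeline; the RETAIN bit keeps
         * enough info around for the later optimized link. */
        .flags = VK_PIPELINE_CREATE_LIBRARY_BIT_KHR |
                 VK_PIPELINE_CREATE_RETAIN_LINK_TIME_OPTIMIZATION_INFO_BIT_EXT,
        .stageCount = 1,
        .pStages = &fs,
        .layout = layout,
    };
    VkPipeline library = VK_NULL_HANDLE;
    vkCreateGraphicsPipelines(dev, VK_NULL_HANDLE, 1, &info, NULL, &library);
    return library;
}
```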
But Why…
…would I care about this if I have Fossilize and a brand new gaming supercomputer with 256 cores all running at 12GHz?
I know you're wondering, and the answer is simple: not everyone has these things.
Some people don't have extremely modern computers, which means Fossilize pre-compile of shaders can take hours. Who wants to sit around waiting that long to play a game they just downloaded?
Some games don't use Fossilize, which means there's no pre-compile. In these situations, there are two options:
compile all pipelines during load screens
compile on-demand during rendering
The former option here gives us load times that remind us of the original Skyrim release. The latter probably yields stuttering.
Thus, VK_EXT_graphics_pipeline_library (henceforth GPL) with fast-linking.
The Obvious Problem
What does the "fast" in fast-linking really mean?
How fast is "fast"?
These are great questions that nobody knows the answer to. The only limitation here is that "fast" has to be "fast enough" to avoid stuttering.
Given that RADV is in the process of bringing up GPL for general use, and given that Zink is relying on fast-linking to eliminate compile stuttering, I thought I'd take out my perf magnifying glass and see what I found.
Uh-oh
Obviously we wouldn't be advertising fast-linking on RADV if it wasn't fast.
Obviously.
It goes without saying that we care about performance. No credible driver developer would advertise a performance-related feature if it wasn't performant.
RIGHT?
And it's not like I tried running Tomb Raider on zink and discovered that the so-called "fast"-link pipelines were being created at a non-fast speed. That would be insane to even consider. I mean, it's literally in the name of the feature, so if using it caused the game to stutter, or if, for example, I was seeing "fast"-link pipelines being created in 10ms+…
Surely I didn't see that though.
Surely I didn't see fast-link pipelines taking more than an entire frame's worth of time to create.
It's Fine
Long-time readers know that this is fine. I'm unperturbed by seeing numbers like this, and I can just file a ticket and move on with my life like a normal per-
OBVIOUSLY I CAN'T.
Obviously.
And just as obviously I had to get a second opinion on this, which is why I took my testing over to the only game I know which uses GPL with fast-link: 3D Pinball: Space Cadet DOTA 2!
Naturally it would be DOTA2, along with any other Source Engine 2 game, that uses this functionality.
Thus, I fired up my game, and faster than I could scream MID OR MEEPO into my mic, I saw the unthinkable spewing out in my console:
Yes, those are all "fast"-linked pipeline compile times in nanoseconds.
Yes, half of those are taking more than 10ms.
First Steps
The first step is always admitting that you have a problem, but I don't have a problem. I'm fine. Not upset at all. Don't read more into it.
As mentioned above, we have great tools in the Vulkan ecosystem like Fossilize to capture pipelines and replay them outside of applications. This was going to be a great help.
I thought.
I fired up a 32bit build of Fossilize, set it to run on Tomb Raider, and immediately it exploded.
Zink has, historically, been the final boss for everything Vulkan-related, so I was unsurprised by this turn of events. I filed an issue, finger-painted ineffectually, and then gave up because I had called in the expert.
That's right.
Friend of the blog, artisanal bit-wrangler, and a developer whose only speed is -O3 -ffast-math, Hans-Kristian Arntzen took my hand-waving, unintelligible gibbering, and pointing in the wrong direction and churned out a masterpiece in less time than it took RADV to "fast"-link some of those pipelines.
While I waited, I was working at the picosecond-level with perf to isolate the biggest bottleneck in fast-linking.
Fast-linking: Stop Compiling.
My caveman-like, tool-less hunt yielded immediate results: nir_shader_clone during fast-link was taking an absurd amount of time, and on top of that, shaders were being compiled at this point.
This was a complex problem to solve, and I had lots of other things to do (so many things), which meant I needed to call in another friend of the blog to take over while I did all the things I had to do.
Some of you know his name, and others just know him as "that RADV guy", but Samuel Pitoiset is the real deal when it comes to driver development. He can crank out an entire extension implementation in less time than it takes me to write one of these long-winded, benchmark-number-free introductions to a blog post, and when I told him we had a huge problem, he dropped* everything and jumped on board.
* and when I say "dropped" I mean he finished finding and fixing another Halo Infinite hang in the time it took me to explain the problem
With lightning speed, Samuel reworked pipeline creation to not do that thing I didn't want it to do. Because doing any kind of compiling when the driver is instead supposed to be "fast" is bad. Really bad.
How did that affect my numbers?
By now I was tired of dealing with the 32bit nonsense of Tomb Raider and had put all my eggs in the proverbial DOTA2 basket, so I again fired up a round, went to AFK in jungle, and checked my debug prints.
Do my eyes deceive me or is that a 20,000% speedup from a single patch?!
Problem Solved
And so the problem was solved. I went to Dan Ginsburg, who I'm sure everyone knows as the author of this incredible blog post about GPL, and I showed him the improvements and our new timings, and I asked what he thought about the performance now.
Dan looked at me. Looked at the numbers I showed him. Shook his head a single time.
It shook me.
I don't know what I was thinking.
In my defense, a 20,000% speedup is usually enough to call it quits on a given project. In this case, however, I had the shadow of a competitor looming overhead.
While RADV was now down to 0.05-0.11ms for a fast-link, NVIDIA can apparently do this consistently in 0.02ms.
That's pretty fast.
Even Faster
By now, the man, the myth, @themaister, Hans-Kristian Arntzen had finished fixing every Fossilize bug that had ever existed and would ever exist in the future, which meant I could now capture and replay GPL pipelines from DOTA2. Fossilize also has another cool feature: it allows for extraction of single pipelines from a larger .foz file, which is great for evaluating performance.
The catch? It doesn't have any way to print per-pipeline compile timings during a replay, nor does it have a way to sort pipeline hashes based on compile times.
Either I was going to have to write some C++ to add this functionality to Fossilize, or I was going to have to get creative. With my Chromium PTSD in mind, I found myself writing out this construct:
for x in $(fossilize-list --tag 6 dota2.foz); do
    echo "PIPELINE $x"
    RADV_PERFTEST=gpl fossilize-replay --pipeline-hash $x dota2.foz 2>&1 | grep COMPILE
done
I'd previously added some in-driver printfs to output compile times for the fast-link pipelines, so this gave me a file with the pipeline hash on one line and the compile timing on the next. I could then sort this and figure out some outliers to extract, yielding slow.foz, a fast-link that consistently took longer than 0.1ms.
I took this to Samuel, and we put our perfs together. Immediately, he spotted another bottleneck: SHA1Transform() was taking up a considerable amount of CPU time. This was occurring because the fast-linked pipelines were being added to the shader cache for reuse.
But what's the point of adding an unoptimized, fast-linked pipeline to a cache when it should take less time to just fast-link and return?
Blammo, another lightning-fast patch from Samuel, and fast-linked pipelines were no longer being considered for cache entries, cutting off even more compile time.
slow.foz was now consistently down to 0.07-0.08ms.
Are We There Yet?
No.
A post-Samuel flamegraph showed a few immediate issues:
First, and easiest, a huge memset. Get this thing out of here.
Now slow.foz was fast-linking in 0.06-0.07ms. Where was the flamegraph at on this?
Now the obvious question: What the farfalloni was going on with still creating a shader?!
It turns out this particular pipeline was being created without a fragment shader, and that shader was being generated during the fast-link process. Incredible coverage testing from an incredible game.
Fixing this proved trickier, and it still remains tricky. An unsolved problem.
However.
<zmike> can you get me a hack that I can use for that foz ?
* zmike just needs to get numbers for the blog
<hakzsam> hmm
<hakzsam> I'm trying
Like a true graphics hero, that hack was delivered just in time for me to run it through the blogginator. What kinds of gains would be had from this untested mystery patch?
slow.foz was now down to 0.023 ms (23566 ns).
We Did It
Thanks to Hans-Kristian enabling us and Samuel doing a lot of heavy and unsafe lifting while I sucked wind on the sidelines, we hit our target time of 0.02ms, which is a 50,000% improvement from where things started.
What does this mean?
If You're A User…
This means in the very near future, you can fire up RADV_PERFTEST=gpl and run DOTA2 (or zink) on RADV without any kind of shader pre-caching and still have zero stuttering.
If You're A Game Developer…
This means you can write apps relying on fast-linking and be assured that your users will not see stuttering on RADV.
If You're A Driver Developer…
So far, there aren't many drivers out there that implement GPL with true fast-linking. Aside from (a near-future version of) RADV, I'm reasonably certain the only driver that both advertises fast-linking and actually has fast linking is NVIDIA.
If you're from one of those companies that has yet to take the plunge and implement GPL, or if you've implemented it and decided to advertise the fast-linking feature without actually being fast, here's some key takeaways from a week in GPL optimization:
Ensure you aren't compiling any shaders at link-time
Ensure you aren't creating any shaders at link-time
Avoid adding fast-link pipelines to any sort of shader cache
Profile your fast-link pipeline creation
You might be thinking that profiling a single operation like this is tricky, and it's hard to get good results from a single fossilize-replay that also compiles multiple library pipelines.
You thought I wouldn't plug it again, but here we are. In the very near future (ideally later today), vkoverhead will have some cases that isolate GPL fast-linking. This should prove useful for anyone looking to go from "fast" to fast.
There's no big secret about being truly fast, and there's no architectural limitations on speed. It just takes a little bit of elbow grease and some profiling.
But Also
The goal is to move GPL out of RADV_PERFTEST with Mesa 23.1 to enable it by default. There's still some functional work to be done, but we're not done optimizing here either.
One day I'll be able to say with confidence that RADV has the fastest fast-link in the world, or my name isn't Spaghetti Good Code.
Like much of the world I've been working to migrate off of Twitter to Mastodon and the rest of the Fediverse. Along with a new network is the need for new automation tools, and I've taken this opportunity to scratch my own itch and finally build an auto-posting bot for my own needs. And it is, of course, available as Free Software.
Announcing Mastobot! Your PHP-based Mastodon auto-poster.
One of the persistently lesser-known symptoms of ADHD is hyperfocus. It is sometimes quasi-accurately described as a "superpower", which it can be. In the right conditions, hyperfocus is the ability to effortlessly maintain a singular locus of attention for far longer than a neurotypical person would be able to.
However, as a general rule, it would be more accurate to characterize hyperfocus not as an "ability to focus on X" but rather as "an inability to focus on anything other than X". Sometimes hyperfocus comes on and it just digs its claws into you and won't let go until you can achieve some kind of closure.
Recently, the X I could absolutely not stop focusing on - for days at a time - was this extremely annoying picture:
Which led to me writing the silliest computer program I have written in quite some time.
You see, for some reason, macOS seems to prefer YUV422 chroma subsampling on external displays, even when the bitrate of the connection and selected refresh rate support RGB. Lots of people have been trying to address this for a literal decade, and the problem has gotten worse with Apple Silicon, where the operating system no longer even supports the EDID-override functionality available on every other PC operating system that supports plugging in a monitor.
In brief, this means that every time I unplug my MacBook from its dock and plug it back in more than 5 minutes later, its color accuracy is destroyed and red or blue text on certain backgrounds looks like that mangled mess in the picture above. Worse, while the color distinction is definitely noticeable, it's so subtle that it's like my display is constantly gaslighting me. I can almost hear it taunting me:
Magenta? Yeah, magenta always looked like this. Maybe it's the ambient lighting in this room. You don't even have a monitor hood. Remember how you had to use one of those for print design validation? Why would you expect it to always look the same without one?
Still, I'm one of the luckier people with this problem, because I can seem to force RGB / 444 color format on my display just by leaving the display at 120Hz rather than 144, then toggling HDR on and then off again. At least I don't need to plug in the display via multiple HDMI and displayport cables and go into the OSD every time. However, there is no API to adjust, or even discover, the chroma format of your connected display's link, and even the accessibility features that supposedly let you drive GUIs are broken in the system settings "Displays" panel, so you have to do it by sending synthetic keystrokes and hoping you can tab-focus your way to the right place.
Anyway, this is a program which will be useless to anyone else as-is, but if someone else is struggling with the absolute inability to stop fiddling with the OS to try and get colors to look correct on a particular external display, by default, all the time, maybe you could do something to hack on this:
import os
from Quartz import CGDisplayRegisterReconfigurationCallback, kCGDisplaySetMainFlag, kCGDisplayBeginConfigurationFlag
from ColorSync import CGDisplayCreateUUIDFromDisplayID
from CoreFoundation import CFUUIDCreateString
from AppKit import NSApplicationMain, NSApplicationActivationPolicyAccessory, NSApplication

NSApplication.sharedApplication().setActivationPolicy_(NSApplicationActivationPolicyAccessory)

CGDirectDisplayID = int
CGDisplayChangeSummaryFlags = int

MY_EXTERNAL_ULTRAWIDE = '48CEABD9-3824-4674-9269-60D1696F0916'
MY_INTERNAL_DISPLAY = '37D8832A-2D66-02CA-B9F7-8F30A301B230'

def cb(display: CGDirectDisplayID, flags: CGDisplayChangeSummaryFlags, userInfo: object) -> None:
    if flags & kCGDisplayBeginConfigurationFlag:
        return
    if flags & kCGDisplaySetMainFlag:
        displayUuid = CGDisplayCreateUUIDFromDisplayID(display)
        uuidString = CFUUIDCreateString(None, displayUuid)
        print(uuidString, "became the main display")
        if uuidString == MY_EXTERNAL_ULTRAWIDE:
            print("toggling HDR to attempt to clean up subsampling")
            os.system("/Users/glyph/.local/bin/desubsample")
            print("HDR toggled.")

print("registered", CGDisplayRegisterReconfigurationCallback(cb, None))

NSApplicationMain([])
and the linked desubsample is this atrocity, which I substantially cribbed from this helpful example:
#!/usr/bin/osascript
use AppleScript version "2.4" -- Yosemite (10.10) or later
use framework "Foundation"
use framework "AppKit"
use scripting additions

tell application "System Settings"
    quit
    delay 1
    activate
    current application's NSWorkspace's sharedWorkspace()'s openURL:(current application's NSURL's URLWithString:"x-apple.systempreferences:com.apple.Displays-Settings.extension")
    delay 0.5
    tell application "System Events"
        tell process "System Settings"
            key code 48
            key code 48
            key code 48
            delay 0.5
            key code 49
            delay 0.5
            -- activate hdr on left monitor
            set hdr to checkbox 1 of group 3 of scroll area 2 of ¬
                group 1 of group 2 of splitter group 1 of group 1 of ¬
                window "Displays"
            tell hdr
                click it
                delay 1.0
                if value is 1 then
                    click it
                end if
            end tell
        end tell
    end tell
    quit
end tell
This ridiculous little pair of programs does it automatically, so whenever I reconnect my MacBook to my desktop dock at home, it faffs around with clicking the HDR button for me every time. I am leaving it running in a background tmux session so - hopefully - I can finally stop thinking about this.
One of the challenges of reviewing a lot of code is that many reviews require multiple iterations. I really don't want to do a full review from scratch on the second and subsequent rounds. I need to be able to see what has changed since last time.
I happen to work on projects that care about having a useful Git history. This means that authors of (without loss of generality) pull requests use amend and rebase to change commits and force-push the result. I would like to see only the changes they made since my last review pass. Especially when the author also rebased onto a new version of the main branch, existing code review tools tend to break down.
Git has a little-known built-in subcommand, git range-diff, which I had been using for a while. It's pretty cool, really: It takes two ranges of commits, old and new, matches old and new commits, and then shows how they changed. The rather huge problem is that its output is a diff of diffs. Trying to make sense of those quickly becomes headache-inducing.
I finally broke down at some point late last year and wrote my own tool, which I'm calling diff-modulo-base. It allows you to look at the difference of the repository contents between old and new in the history below, while ignoring all the changes that are due to differences in the respective base versions A and B.
As a bonus, it actually does explicitly show differences between A and B that would have caused merge conflicts during rebase. This allows a fairly comfortable view of how merge conflicts were resolved.
I've been using this tool for a while now. While there are certainly still some rough edges and to dos, I did put a bunch more effort into it over the winter holidays and am now quite happy with it. I'm making it available for all to try at https://git.sr.ht/~nhaehnle/diff-modulo-base. Let me know if you find it useful!
Better integration with the larger code review flow?
One of the rough edges is that it would be great to integrate tightly with the GitHub notifications workflow. That workflow is surprisingly usable in that you can essentially treat the notifications as an inbox in which you can mark notifications as unread or completed, and can "mute" issues and pull requests, all with keyboard shortcuts.
What's missing in my workflow is a reliable way to remember the most recent version of a pull request that I have reviewed. My somewhat passable workaround for now is to git fetch before I do a round of reviews, and rely on the local reflog of remote refs. A Git alias allows me to say
Ideally, I'd have a fully local way of interacting with GitHub notifications, which could then remember the reviewed version in a more reliable way. This ought to also fix the terrible lagginess of the web interface. But that's a rant for another time.
Rust
This is the first serious piece of code I've written in Rust. I have to say that experience has really been quite pleasant so far. Rust's tooling is pretty great, mostly thanks to the rust-analyzer LSP server.
The one thing I'd wish is that the borrow checker was able to better understand "partial" borrows. I find it occasionally convenient to tie a bunch of data structures together in a general context structure, and helper functions on such aggregates can't express that they only borrow part of the structure. This can usually be worked around by changing data types, but the fact that I have to do that is annoying. It feels like having to solve a puzzle that isn't part of the inherent complexity of the underlying problem that the code is trying to solve.
And unlike, say, circular references or graph structures in general, where it's clear that expressing and proving the sort of useful lifetime facts that developers might intuitively reason about quickly becomes intractable, improving the support for partial borrows feels like it should be a tractable problem.
Ever since I got involved with open-source Python projects, tox has been vital for testing packages across Python versions (and other factors). However, lately, I've been increasingly using Nox for my projects instead. Since I've been asked why repeatedly, I'll sum up my thoughts.
With FOSDEM just around the corner, it is time for us to enlist your help. Every year, an enthusiastic band of volunteers make FOSDEM happen and make it a fun and safe place for all our attendees. We could not do this without you. This year we again need as many hands as possible, with the buildup (starting Friday at noon), heralding during the conference and cleanup (on Sunday evening). No need to worry about missing lunch. Food will be provided. Would you like to be part of the team that makes FOSDEM tick? Sign up here! You could…
Finally, after a long break, it's FOSDEM time again! Join us at Université Libre de Bruxelles, Campus du Solbosch, in Brussels, Belgium. This year's FOSDEM 2023 will be held on February 4th and 5th.
Our developers will be happy to greet all open source enthusiasts at our Gentoo stand in building H, level 1! Visit this year's wiki page to see who's coming.
Knex released a new version (2.4.0) this week. Before this version, Knex had a pretty scary SQL injection. Knex currently has 1.3 million weekly downloads and is quite popular.
The security bug is probably one of the worst SQL injections I've seen in recent memory, especially considering the scope and popularity.
This query is not invalid. I don't fully understand MySQL's behavior here, but it causes the WHERE clause to be ignored and the result is equivalent to:
SELECT `id`
In this monthly update I explain what happened with Xdebug development in this past month. These are normally published on the first Tuesday on or after the 5th of each month.
Patreon and GitHub supporters will get it earlier, around the first of each month.
You can become a patron or support me through GitHub Sponsors. I am currently 45% towards my $2,500 per month goal. If you are leading a team or company, then it is also possible to support Xdebug through a subscription.
In the last month, I spent 25 hours on Xdebug, with 21 hours funded. Sponsorships are continuing to decline, which makes it harder for me to dedicate time to maintenance and development.
Xdebug 3.2
Xdebug 3.2.0 got released at the start of December, to coincide with the release of PHP 8.2 which it supports, after fixing a last crash with code coverage. Since then a few bugs were reported, which I have started to triage. A particularly complicated one seems to revolve around Windows with PHP loaded in Apache, where suddenly all modes are turned on without having been activated through the xdebug.mode setting. This is a complicated issue that I hope to figure out and fix during January, resulting in the first patch release later this month.
Plans for the Year
Beyond that, I have spent some time away from the computer in the Dutch countryside to recharge my battery. I hope to focus on redoing the profiler this year, as well as getting the "recorder" feature to a releasable state.
Smaller feature-wise, I hope to implement file/path mappings on the Xdebug side to aid the debugging of generated files containing PHP code.
Xdebug Cloud
Xdebug Cloud is the Proxy As A Service platform to allow for debugging in more scenarios, where it is hard, or impossible, to have Xdebug make a connection to the IDE. It is continuing to operate as Beta release.
Packages start at £49/month, and I have recently introduced a package for larger companies. This has a larger initial set of tokens, and discounted extra tokens.
If you want to be kept up to date with Xdebug Cloud, please sign up to the mailinglist, which I will use to send out an update not more than once a month.
I have continued writing scripts for videos about Xdebug 3.2's features, and am also intending to make a video about "Running Xdebug in Production", as well as one on using the updated "xdebug.client_discovery_header" feature (from Xdebug 3.1).
Recently while browsing the Alpine git repo I noticed they have a function called snapshot, see: https://git.alpinelinux.org/aports/tree/testing/dart/APKBUILD#n45 I am not 100% sure about how that works but a wild guess is that the developers can run that function to fetch the sources and maybe later upload them to the Alpine repo or some sort of (cloud?) storage.
In Portage there exists a pkg_config function used to run miscellaneous configuration for packages. The only major difference between src_snapshot and that would of course be that users would never run snapshot.
Sandbox
Probably only the network sandbox would have to be lifted out… to fetch the sources of course.
But also a few (at least one?) special directories and variables would be useful.
With great pleasure (and an apology for missing the deadline by seven days), we can announce the following projects will have a stand at FOSDEM 2023 (4 & 5th of February). This is the list of stands (in no particular order): Eclipse Foundation FOSSASIA Matrix.org Foundation Software Freedom Conservancy CentOS and RDO FreeBSD Project Free Software Foundation Europe Realtime Lounge Free Culture Podcasts Open Culture Foundation + COSCUP Open Toolchain Foundation Open UK and Book Signing Stand The Apache Software Foundation The Perl/Raku Foundation PostgreSQL GNOME KDE GitLab Homebrew Infobooth on amateur radio (hamradio) IsardVDI Jenkins Fluence La Contre-Voie…
Good news: the second and last release candidate of Plone 6 has arrived! The release manager for this version is Maurits van Rees (https://github.com/mauritsvanrees).
The Plone community is happy to announce that Volto 16 is ready and shipped! This is the final release for the upcoming Plone 6 and a major achievement from the community. Thank you everyone involved!
Volto is Plone's snappy, modern React front end powered by the RestAPI, and the default for Plone 6.
Volto 16
Volto 16 contains tons of new features, bugfixes and tweaks. Here is a sneak peek at some of them, and you can find the full release notes on GitHub: https://github.com/plone/volto/releases/tag/16.0.0
The new Slate editor - improved inline editing and more possibilities
Content rules - a whole engine for automating processes based on events on the site
Undo - site admins can see and undo transactions
Bugfixes and tweaks - it is shiny and polished
New technology -
More feature highlights
Added default placeholder for videos to embed them more lightly @giuliaghisini
Added new Block Style Wrapper. This implementation is marked as experimental during Volto 16 alpha period. The components, API and the styling are subject to change without issuing a breaking change. You can start using it in your projects and add-ons, but taking this into account. See documentation for more information. @sneridagh
Add default widget views for all type of fields and improve the DefaultView @ionlizarazu
added configurable identifier field for password reset in config.js. @giuliaghisini
added 'show total results' option in Search block configuration. @giuliaghisini
Added viewableInBrowserObjects setting to use as an alternative to downloadableObjects, if you want to view files in the browser instead of downloading them. @giuliaghisini
Disable already chosen criteria in querystring widget @kreafox
Added X-Forwarded-* headers to superagent requests. @mamico
Forward HTTP Range headers to the backend. @mamico
Add default value to color picker, if default is present in the widget schema. @sneridagh
Inject the classnames of the StyleWrapper into the main edit wrapper (it was wrapping directly the Edit component before). This way, the flexibility is bigger and you can act upon the whole edit container and artifacts (handlers, etc) @sneridagh
Deprecate NodeJS 12 since it's out of LTS since April 30, 2022 @sneridagh
Move all cypress actions to the main Makefile, providing better meaningful names. Remove them from package.json script section. @sneridagh
Remove div as the default as prop from RenderBlocks. Now the default is a React.Fragment instead. This could lead to CSS inconsistencies if that div was being relied upon, especially in custom add-ons that don't set it. In order to avoid them, always set the as property in your add-ons. @sneridagh
Removed date-fns from dependencies, this was in the build because Cypress depended on it. After the Cypress upgrade it no longer depends on it. If your project still depends on it, add it as a dependency of your project. @sneridagh
Removed all usage of date-fns from core. @sneridagh
Rename src/components/manage/Widgets/ColorPicker.jsx component to src/components/manage/Widgets/ColorPickerWidget.jsx @sneridagh
Remove the style wrapper around the <Block /> component in Edit mode, moved to the main edit wrapper @sneridagh
Staticize Poppins font to be compliant with EU privacy. Import from GoogleFont is disabled in site.variables. @giuliaghisini
Remove the callout button (the one with the megaphone icon) from the slate toolbar since it has the same styling as blockquote. If you need it anyway, you can bring it back in your addon. @sneridagh
Using volto-slate Headline / Subheadline buttons strips all elements in the selection @tiberiuichim
Use Cypress 10.3.0 (migrated from 9.x.x). Cypress 10 has some interesting goodies, the main one being native support for Apple Silicon computers. See https://docs.voltocms.com/upgrade-guide/ for more information. @sneridagh
The complete configuration registry is passed to the add-ons and the project configuration pipeline @sneridagh
Improve Cypress integration, using Cypress official Github Action. Improve some flaky tests that showed up, and were known as problematic. Refactor and rename all the Github actions giving them meaningful names, and group them by type. Enable Cypress Dashboard for Volto. @sneridagh
Stop using xmlrpc library for issuing the setup/teardown in core, use a cy.request instead. @sneridagh
Added Cypress environment variables for adjusting the backend URL of commands @JeffersonBledsoe #3271
Reintroduce Plone 6 acceptance tests using the latest plone.app.robotframework 2.0.0a6 specific Volto fixture. @datakurre @ericof @sneridagh
Upgrade all tests to use plone.app.robotframework 2.0.0a6 @sneridagh
Missing change from the last breaking change (Remove the style wrapper around the <Block /> component in Edit mode, moved to the main edit wrapper). Now, really move it to the main edit wrapper @sneridagh
Fix warning because missing key in VersionOverview component @sneridagh
We would like to thank all the people involved in creating Volto 16. Over 40 people have committed code, documentation and other effort for this. It is amazing how much we were able to accomplish as a community, during the last months.
Where do we go from here? Plone 6! Until now, the only major features missing were content rules and the new Slate editor, and both are included in Volto 16.
So the work is not over yet. We still need helping hands and contributors to continue the effort to make Plone 6 a reality. Everybody is welcome!
Good news: the first release candidate of Plone 6 has arrived! The release manager for this version is Maurits van Rees (https://github.com/mauritsvanrees).
Various packages: updates to support Python 3.11. See below.
Zope 5.7: This feature release adds full support for Python 3.11 and a ZPublisher encoder for inputting JSON data.
See the Zope changelog for details.
zc.buildout: After a long development period this has a final release. We use version 3.0.1, which now works nicely with the latest pip (22.3.1).
Note that it is usually fine if you use different versions of zc.buildout, pip, setuptools, and wheel. We just pin versions that we know work at the moment.
plone.restapi:
Added @upgrade endpoint to preview or run an upgrade of a Plone instance.
This release supports Python 3.8, 3.9, 3.10, and 3.11.
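As a rough illustration of the new @upgrade endpoint, here is a hedged TypeScript sketch (Node 18+ for the global fetch). Only the endpoint name comes from the release notes above; the split between GET for previewing and POST for running, the request shape and the credentials are assumptions, so check the plone.restapi documentation before relying on this:

```ts
// upgrade-check.ts -- sketch only; see the caveats above.
const site = 'http://localhost:8080/Plone'; // assumed local Plone site
const auth = 'Basic ' + Buffer.from('admin:admin').toString('base64');
const headers = { Accept: 'application/json', Authorization: auth };

// Assumed: GET previews the pending upgrade steps without running them.
async function previewUpgrade() {
  const res = await fetch(`${site}/@upgrade`, { headers });
  console.log('preview:', await res.json());
}

// Assumed: POST actually runs the upgrade.
async function runUpgrade() {
  const res = await fetch(`${site}/@upgrade`, { method: 'POST', headers });
  console.log('run status:', res.status);
}

previewUpgrade(); // call runUpgrade() once you are happy with the preview
```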
Python 3.11.0 was released in October and we are proud to already be able to say Plone supports it! All tests pass.
Python 3.11 is expected to be faster than earlier Python versions.
Note that not all add-ons may work yet on 3.11, but in most cases the needed changes should be small.
A big thank you for this goes to the Zope part of the Plone community, especially Jens Vagelpohl and Michael Howitz.
This is a rather long post, as it captures the 13 days of lessons learnt and tips I've been sharing on LinkedIn. I'm definitely interested to hear others' experiences and thoughts, so do drop me a comment below if you feel like it.
The content here is predominantly aimed at new managers and leaders as well as new managers of managers, but hopefully others can benefit too.
1. Feedback loops become slower and it takes longer to see the impact of your decisions and actions, which can create self-doubt and insecurity creep
Ensure you have a measurable definition of success in place for yourself, and refer to it.
Set clear goals and expectations for what you are focusing on, and make sure you have a reliable qualitative or quantitative way to measure progress and success.
Own your feedback loop: actively seek feedback and don't wait until performance review time. Be aware that, due to your position of authority (and depending on environmental/cultural factors), the quality and quantity of constructive feedback may decrease.
Grow your introspective and reflective skills to self-troubleshoot and expand your mindset and perspectives.
Ensure you have a support structure in place, such as a mentor and coach.
2. Context switching increases by a factor of N in terms of volume, frequency and types, which can be disorienting and overwhelming
Learn to identify what brain power and energy types you are using in different contexts and optimize your schedule around that. You may need to grow your self-awareness skills through mindfulness practice (e.g. meditation). Keep experimenting. You may find that meetings work best for you in the morning or maybe that focus time works best for you in the morning and so on.
Allocate regular focus time for yourself - your brain has limits. Block it off in your diary and guard it like it's a precious stone. Accept that you won't always be able to hold on to all of it as things may crop up, but do your best. You may find that having a bit of quiet time at the start and end of each day will help you prep and digest.
Do reset your calendar every now and then to defragment as things creep in.
Companies tend to have a heartbeat and a level of predictability associated with it - learn to identify it and use it to your advantage in how you plan your days and weeks. In a start-up that heartbeat is adrenaline-pumped with low predictability, but as the company gets larger there will be times when certain things happen and activity escalates (e.g. performance reviews, quarterly planning).
Look for delegation, and thus sponsorship, opportunities at all times.
Give your brain time to adapt - it's awesome like this.
3. You may feel less part of a team, which will affect your sense of belonging and can leave you feeling lonelier than before
Accept that the moment you step into a position of authority it changes the relationship dynamics. This is something a new manager will start facing, especially if they are doing the transition in the team where they have been an individual contributor before.
A bigger step-change for this is becoming manager of managers, so be mindful as you do make that transition.
Avoid isolating yourself - it will leave you feeling alone with your challenges and increase your stress levels. Instead look to connect with other peers in the company doing the same job and outside (e.g. join meetups and so on).
Make sure you have someone to talk to with whom you feel safe sharing (a mentor, a coach).
Deliberately invest in building better relationships at work
Take care of yourself outside of work socially and community-wise. It can be easy for work to become the sole source of sense of belonging, which is not a healthy dynamic.
4. The volume of information that you have exposure to may increase dramatically in both depth and breadth, which will feel overwhelming at times
Your brain will adjust over time and will learn how to filter it and what to capture. Perhaps you will experiment and pick up some tools/techniques in the process.
Make conscious deliberate decisions about what information you need vs want and why. The temptation to over-consume information to 'stay on top of things' (sense of control) is dangerous, so be strategic about it as a means to be tactical about your limited time, energy and brain capacity.
Every time you make a step change in leadership, reset your expectations and regularly re-review your information needs. Give yourself time and space to figure it out.
Avoid being overly reactive to new information and apply self-control over the urge to explore/act on it unless needed.
Learn to let go and live in an imperfect world where you can only focus on N things at a time.
Accept that you will start getting exposed to information that you may not understand fully, so sometimes focus less on the information itself and more on the thought processes and approaches behind it. Use coaching and curious questioning. Build trust.
Develop your people's communication skills, so that information is presented at the right level of detail for the context it's needed in. Give peers feedback on this.
Make sure to allocate time and space to give yourself a chance to process and reflect.
Build a support structure for yourself - a peer community, a coach, a mentor. Learn how others are doing things.
5. The general level of uncertainty and ambiguity will rapidly increase, which may leave you feeling anxious, scared and out of control
For new engineering managers it can be a big challenge emotionally, because there is no replacement for the predictability and early validation possible in the software development cycle (write code -> validate it works -> ship it to production). There are also no quick fixes after something goes "live".
Accept that with every leadership step-change it will feel like uncertainty will increase - in part because it's an entirely new role, but also because there will be more moving pieces and more people involved in the execution.
Avoid succumbing to the need for control, which is a natural response to the anxiety that comes with this, and accept that you will simply never be fully in control. Adding more process or diving deeper into details to reassure yourself is a common reaction, but it is a good recipe for burnout and dissatisfaction on the other side of it.
Avoid over-thinking and looking for perfection in decisions.
Accept that things will go wrong - it's simply a matter of fact. The best learnings come from failure (do we even learn anything when we succeed?)
Focus on iterating on your ability to set clear expectations and communicate constraints.
Learn to build trust, to let go and to build the right framework for you to have a "good enough" confidence and comfort with things, without over-indexing on reactivity due to the emotional impact of uncertainty.
When you feel out of control, take a deep breath and re-focus on what you are in control of and deliberately let go of what you can't. Don't bottle that tension and stress up.
Try some mindfulness practice to build emotional resiliency in what may seem like high pressure situations.
Get someone to talk to - a coach, a mentor, peers. The worst thing you can do for yourself is be overwhelmed alone.
6. The mistakes that you make will have more impact than before, which can increase the level of fear, anxiety and internal pressure to get things "right"
Watch out for the fear of failure causing anxiety, vulnerability, and behaviour such as perfectionism, overthinking, etc. Improve your mindfulness skills to pick it up.
Accept that you will make mistakes. It's not a question of if, but more of a question of when. At the end of the day, we are all human and as such we are flawed in many ways, so mistakes are part of our nature.
Accept that you will fail. It's a question of when, not if. Failure is hard emotionally, but is the best source of learning. Do we even learn from success?
Identify your tolerance for acceptable risk and play around with stretching it.
Do accept and acknowledge all the uncomfortable feelings that come with this - it will help build your tolerance, thus emotional resilience to them.
Sanity check your emotions with some introspective questions, such as "What is the worst that will happen?"
Switch from "What if" type of thinking to "How will I handle it", but try to avoid getting into overthinking spirals. As you will learn with time, there is no "right decision" - you can simply optimize for what you know at the time, hope for the best and trust that you will be able to deal with the consequences.
Avoid working in a toxic company culture, where failure and mistakes are dirty words.
Use data to evaluate options and decisions.
Don't let your ego (bruised or not) get in the way of recognising mistakes and failure. Have you seen mistakes being swept under the carpet by leaders before? It's important to be honest, transparent and to exercise humility and vulnerability when it happens, and to focus on the learnings and moving forward. Be real.
Build a support structure around you (manager, mentor, peers).
7. Your sense of achievement and impact may take a hit, which can create dissatisfaction creep and can affect your confidence.
This is very relevant for new engineering managers who are transitioning from doing to enabling/supporting/empowering/facilitating/etc. Up until this point their sense of achievement was likely based (and measured) on direct impact made through output (e.g. code), whereas now the impact will be more indirect and there will be a sense of having less output "to show for it".
This can be an even harder transition, when a manager becomes a manager of managers and thus steps out of the team that is delivering the impact.
It's normal - it's an adjustment of how you measure your worth and value-add as part of the role transition you are going through. It can take time, as it's a mindset shift and your brain will need time to rewire.
Re-think and redefine what you see as a "win" and "impact" for yourself. Train your brain to start noticing even the smallest of wins. Be your own cheerleader.
Avoid self-lowering thinking such as "I am not being useful or not adding value". There might be times where you are indeed not adding value to your teams because they are self-sufficient, so be mindful and aware of when to refocus your time.
Accept that feedback loops are longer and thus it may take time to see the impact of your actions, so aim to set yourself measurable goals and get regular feedback.
Watch out for your need for creativity and keeping your creative self fulfilled too. This affects different people with different intensity. Find something 'yours' and focus on it when you have time. Alternatively, look to rebalance outside of work.
8. Your sense of expertise will erode and morph over time, which can challenge your sense of competency and result in imposter syndrome
It's important to acknowledge that - the sooner the better. Every time you make a step change in leadership you are going into a new role. And surprise, surprise the goal is not to be an expert in your past role any more, so watch out not to act like one.
As you make step changes up the leadership ladder, some of your powers (e.g. 'knowledge') will diminish, but others (e.g. influencing) will grow. However, there may be a temporary period where you feel the imbalance, and it may be frustrating until you hit the rebalancing sweet spot.
Managing someone in a role that you were in very recently (e.g. managing a manager when you were recently a manager yourself) can trigger all sorts of insecurities if you don't focus on nurturing your growth mindset. For example, you may feel inadequate because you suddenly realize the way you did something before was not very good when you compare yourself/your approach to someone you manage. But actually that's awesome - you have just learnt something new and also identified someone very good at something that you can lean on. It's all about perspectives. In general, comparing yourself with others does you no good anyway, so avoid it.
Watch out for assuming that you should have the right answer, or even an answer at all, to questions coming from your direct reports. You can't possibly have all the answers, and assuming so will erode your perceived confidence and competency, which in turn will fill you with guilt and voila - imposter syndrome. Build some coaching skills and find mentors for your direct reports in areas where you can't support them directly.
Be humble and learn. Ego is the enemy here.
If you care about this, a wonderful way to maintain some level of expertise is to learn from others and the output of others in the company and outside (e.g. system designs/RFC that are being produced by teams). Asking lots of questions in a curious way and being upfront what you want to get out of it (e.g. not interrogating/challenging) can be helpful too. Going to conferences and meetups is another way.
9. Having an increased scope of accountability can put you under more internal and external pressure, increasing your stress levels
Your emotional resilience will build up over time, but it's absolutely critical to take good care of yourself at all times and find ways to cope with stress, anxiety, etc.
Focus on growing people internally and hiring so that you can delegate and grow other leaders, whilst sharing the load to create sustainability
Focus on improving your expectation management, communication, time management, prioritization, delegation skills.
There are various tools out there that can help you evaluate your accountabilities and responsibilities and work out where the priority of your energy spend should be. Ask your peers and your manager as well where your time and energy are best spent, as you may end up making the wrong assumptions.
Learn to say no constructively, when needed - it's a skill.
Find a mentor and a coach to support you
10. Supporting (and working with) an increased number of people (direct reports, skip-level reports, etc) will have a toll on you emotionally
Your emotional resilience will build up over time naturally as you get exposed to more scenarios and situations, but it's super duper mega important to take care of yourself meanwhile. Whilst it's easy to say "take care of yourself", it can be harder to achieve as leadership can put you in a new place in life and how to do that may be non-obvious.
Coaching and mentoring skills can help you turn situations that would otherwise end up as emotional baggage into situations that end with no baggage and positive forward momentum.
Build self-awareness skills (e.g. through mindfulness practice and skill building) to help you improve your emotional intelligence and regulate and manage your emotional state better.
Build self-troubleshooting skills, such as introspection and reflective writing, to help you train your brain and build a mindset of seeing the same experience from different perspectives and angles.
Be clear with yourself what your values are as this will help you identify and acknowledge internal (in yourself) and external (with others) emotional conflict and thus work with it.
Get a coach and a mentor. Don't shy away from working on a deeper level, if needed, with a psychologist/psychotherapy professional as well - working with other people can be a great way to retrigger any relational or other traumas, or simply old unresolved experiences from your past whether you are aware of it or not.
11. It can be hard to say no and/or disappoint people
A hard truth of leadership is that you will not be liked by everyone, and your job is not to please everyone, so the sooner you get comfortable with that, the better.
Avoid taking ownership or responsibility for the emotional responses of people (e.g. disappointment, frustration, dissatisfaction), as this will lead to unnecessary sticky guilt and feeling bad about stuff (and an associated fear of letting them down due to the ownership taken). Do maintain a curious mindset and perhaps help them reframe and evaluate things for themselves from different perspectives.
Try to reframe personalized thinking ("I am disappointing them") into a de-personalized view (the situation, circumstances, decisions and outcomes thereof). And no - I am not suggesting you do that when you need to own up to a mistake you've made, which is what people who don't own their mistakes often do, and which in turn leads to double the erosion of trust.
Never leave space for assumptions (for both yourself and them) - be honest and direct, even if very uncomfortable. Learn to be uncomfortable, accept it and cherish it - it means you are growing emotionally.
Don't internalize organizational limitations (aka circumstantial/situational limitations), which are negatively impacting people, as a personal failure (e.g. redundancies, teams becoming anaemic due to attrition, etc). Identify what's within your control and influence and what isn't as you don't want to get sticky feelings of powerlessness/disempowerment, which will unnecessarily affect your confidence.
Consider that what on the surface may seem negative can actually be very positive when looked at from different angles. E.g. delivering uncomfortable constructive feedback that may affect the other person emotionally may be tough, but it will be helpful for their growth. Letting someone go from a team may be helpful for the team, and may help them find a better-fitting place, etc. Not having someone promoted might save them from difficult times if they weren't ready yet.
Practice and even do some role play with someone if that helps. Saying no and delivering hard messages is a skill.
12. When people you manage leave, it can create self-doubt and insecurity
Don't overthink it or make assumptions; focus on the facts, what you know, and what your data points are telling you (e.g. feedback).
Remember that people are empowered to make a choice for what's right for them and that's a good thing.
Focus on the positives - often people leave because they have had a chance to learn and grow and have decided to move on to the next thing and use what they have learnt. Or it could be because they have found a place where they will earn more money, get more of a challenge, and so on - and that's good for them. Most of the time someone leaving can be quite positive, even if it creates practical pain.
Focus on having a solid succession plan in place to reduce the practical pain of people leaving.
13. You will be swimming in new and unfamiliar territory a lot of the time and this can make you feel vulnerable
Vulnerability, when acknowledged, can be very uncomfortable. Even acknowledging it can be hard, as it may go against the brain's self-preservation instincts. It may take building some mindfulness skills, reducing your emotional reactivity and increasing your tolerance to the feeling to get there.
Embrace the feelings that come with vulnerability and don't try to run away, as that will only result in compensatory/defensive behaviours, which are probably the wrong thing to do (e.g. micromanaging, too much process, etc.).
Don't jump to conclusions and actions too quickly - check in on your motivations.
Ego is the enemy as it aims to protect our vulnerable self and thus deny us the humble growth mindset.
Vulnerability is powerful and not a weakness, as when your emotional tolerance to it is built up it can empower you in many different ways.
We now invite proposals for presentations. FOSDEM offers open source and free software developers a place to meet, share ideas and collaborate. Renowned for being highly developer-oriented, the event brings together some 8000+ geeks from all over the world. The twenty-third edition will take place on Saturday 4th and Sunday 5th February 2023 at the usual location, ULB Campus Solbosch in Brussels. It will also be possible to participate online. For more details about the Developer Rooms, please refer to the Calls for Papers that are being added to https://fosdem.org/2023/news/2022-11-07-accepted-developer-rooms/ as they are issued.
Earlier in the year I helped Josh Sanburn and his team put together a podcast series for the Wall Street Journal on building Second Life, called "How To Build a Metaverse", which I'm now really enjoying. It's great to hear all of the amazing stories about the origins.
I've mentioned some of my various retronetworking projects in past blog posts. One of those projects is Osmocom Community TDM over IP (OCTOI). During the past five or so months, we have been using a number of GPS-synchronized open source icE1usb devices, interconnected by a new, efficient but still transparent TDMoIP protocol, in order to run a distributed TDM/PDH network. This network is currently only used to provide ISDN services to retronetworking enthusiasts, but other uses like frame relay have also been validated.
So far, the central hub of this OCTOI network has been operating in the basement of my home, behind a consumer-grade DOCSIS cable modem connection. Given that TDMoIP is relatively sensitive to packet loss, this has been sub-optimal.
Luckily some of my old friends at noris.net have agreed to host a new OCTOI hub free of charge in one of their ultra-reliable co-location data centres. I've already been hosting some other machines there for 20+ years, and noris.net is a good fit given that they were - in their early days as an ISP - the driving force in the early 90s behind one of the Linux kernel ISDN stacks, called u-isdn. So after many decades, ISDN returns to them in a very different way.
Side note: In case you're curious, a reconstructed partial release history of the u-isdn code can be found on gitea.osmocom.org
But I digress. Today was the installation of this new OCTOI hub setup. It has been prepared for several weeks in advance, and the hub contains two circuit boards designed specifically for this use case. The most difficult challenge was the fact that this data centre has no existing GPS RF distribution, and our equipment is ~100m of CAT5 cable (no fiber!) away from the roof. So we faced the challenge of passing the 1PPS (1 pulse per second) signal reliably through several steps of lightning/over-voltage protection into the icE1usb, whose internal GPS-DO serves as the grandmaster clock for the TDM network.
The equipment deployed in this installation currently contains:
a rather beefy Supermicro 2U server with an EPYC 7113P CPU and 4x PCIe slots, two of which are populated with Digium TE820 cards, resulting in a total of 16 E1 ports
an icE1usb with an RS422 interface board, connected via 100m of RS422 to an Ericsson GPS03 receiver. There are two layers of over-voltage protection on the RS422 (each with gas discharge tubes and TVS) and two stages of over-voltage protection in the coaxial cable between the antenna and the GPS receiver.
Now that the physical deployment has been made, the next steps will be to migrate all the TDMoIP links from the existing user base over to the new hub. We hope the reliability and performance will be much better than behind DOCSIS.
In any case, this new setup for sure has a lot of capacity to connect many more users to this network. At this point we can still only offer E1 PRI interfaces. I expect that at some point during the coming winter the project for remote TDMoIP BRI (S/T, S0-Bus) connectivity will become available.
Acknowledgements
I'd like to thank everyone helping this effort, specifically:
Sylvain "tnt" Munaut for his work on the RS422 interface board (+ gateware/firmware)
noris.net for sponsoring the co-location
sysmocom for sponsoring the EPYC server hardware
Almost one year after my post regarding first steps towards a V5 implementation, some friends and I were finally able to visit Wobcom, a small German city carrier, and pick up a lot of decommissioned POTS/ISDN/PDH/SDH equipment, primarily V5 access networks.
This means that a number of retronetworking enthusiasts now have a chance to play with Siemens Fastlink, Nokia EKSOS and DeTeWe ALIAN access networks/multiplexers.
My primary interest is in Nokia EKSOS, which looks like a rather easy, low-complexity target. As one of the first steps, I took PCB photographs of the various modules/cards in the shelf, took note of the main chip designations and started to search for the related data sheets.
In short: Unsurprisingly, a lot of Infineon analog and digital ICs for the POTS and ISDN ports, as well as a number of Motorola M68k based QUICC32 microprocessors and several unknown ASICs.
So with V5 hardware at my disposal, I've slowly re-started my efforts to implement the LE (local exchange) side of the V5 protocol stack, with the goal of eventually being able to interface those V5 AN with the Osmocom Community TDM over IP network. Once that is in place, we should also be able to offer real ISDN Uk0 (BRI) and POTS lines at retrocomputing events or hacker camps in the coming years.
If you have ever worked with Digium (now part of Sangoma) digital telephony interface cards such as the TE110/410/420/820 (single to octal E1/T1/J1 PRI cards), you will probably have seen that they always have a timing connector, where the timing information can be passed from one card to another.
In PDH/ISDN (or even SDH) networks, it is very important to have a synchronized clock across the network. If the clocks are drifting, there will be underruns or overruns, with associated phase jumps that are particularly dangerous when analog modem calls are transported.
In traditional ISDN use cases, the clock is always provided by the network operator, and any customer/user side equipment is expected to synchronize to that clock.
So this Digium timing cable is needed in applications where you have more PRI lines than fit on one card, but only a subset of your lines (spans) is connected to the public operator. The timing cable should make sure that the clock received on one port from the public operator is used as the transmit bit-clock on all of the other ports, no matter which card they are on.
Unfortunately this decades-old Digium timing cable approach seems to suffer from some problems.
bursty bit clock changes until link is up
The first problem is that the downstream port transmit bit clock was jumping around in bursts every two or so seconds. You can see an oscillogram of the E1 master signal (yellow) received by one TE820 card and the transmit signal of the slave ports on the other card at https://people.osmocom.org/laforge/photos/te820_timingcable_problem.mp4
As you can see, for some seconds the two clocks seem to be in perfect lock/sync, but in between there are periods of immense clock drift.
As I found out much later, this problem only occurs until any of the downstream/slave ports is fully OK/GREEN.
This is surprising, as any other E1 equipment I've seen always transmits at a constant bit clock, irrespective of whether there's any signal in the opposite direction, and irrespective of whether any other ports are up/aligned or not.
But ok, once you adjust your expectations to this Digium peculiarity, you can actually proceed.
clock drift between master and slave cards
Once any of the spans of a slave card on the timing bus are fully aligned, the transmit bit clocks of all of its ports appear to be in sync/lock - yay - but unfortunately only at the very first glance.
When looking at it for more than a few seconds, one can see a slow, continuous drift of the slave bit clocks compared to the master :(
Some initial measurements show that the clock of the slave card of the timing cable is drifting at about 12.5 ppb (parts per billion) when compared against the master clock reference.
This is rather disappointing, given that the whole point of a timing cable is to ensure you have one reference clock with all signals locked to it.
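To put that 12.5 ppb figure in perspective, here is a small back-of-the-envelope sketch; only the 12.5 ppb measurement comes from above, the helper names, the example measurement interval and the daily extrapolation are mine:

```ts
// ppb-drift.ts -- back-of-the-envelope arithmetic for clock drift, sketch only.

// Drift in parts per billion, from an accumulated time offset over a measurement interval.
function driftPpb(offsetSeconds: number, intervalSeconds: number): number {
  return (offsetSeconds / intervalSeconds) * 1e9;
}

// Accumulated offset (in seconds) after a given time at a known drift rate.
function offsetAfter(ppb: number, seconds: number): number {
  return (ppb / 1e9) * seconds;
}

// A hypothetical measurement: 45 microseconds of accumulated offset over one hour ≈ 12.5 ppb.
const measuredPpb = driftPpb(45e-6, 3_600);

// At 12.5 ppb the slave clock drifts by roughly 1.08 ms per day relative to the master...
const perDaySeconds = offsetAfter(12.5, 86_400); // ~0.00108 s
// ...which at the E1 bit rate of 2.048 Mbit/s corresponds to roughly 2200 bit periods per day,
// i.e. on the order of a few E1 frames (256 bits each) slipped every day.
const bitsPerDay = perDaySeconds * 2_048_000;    // ~2212 bits
console.log({ measuredPpb, perDaySeconds, bitsPerDay });
```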
The work-around
If you are willing to sacrifice one port (span) of each card, you can work around that slow-clock-drift issue by connecting an external loopback cable. So the master card is configured to use the clock provided by the upstream provider. Its other ports (spans) will transmit at the exact recovered clock rate with no drift. You can use any of those ports to provide the clock reference to a port on the slave card using an external loopback cable.
In this setup, your slave card[s] will have perfect bit clock sync/lock.
It's just rather sad that you need to sacrifice ports just to achieve proper clock sync - something that the timing connectors and cables claim to do, but in reality don't achieve, at least not in my setup with the most modern and high-end octal-port PCIe cards (TE820).