02 Feb 2023

Planet Grep

Xavier Mertens: This Blog Has 20 Years!

Twenty years ago… I decided to start a blog to share my thoughts! That's why I called it "/dev/random". How was the Internet twenty years ago? Well, there were good things and bad ones…

With the years, the blog content evolved, and I wrote a lot of technical stuff related to my job, experiences, tools, etc. Then I had the opportunity to attend a lot of security conferences and started to write wrap-ups. With COVID, there were fewer conferences and no more wrap-ups. For the last few months, I have mainly been writing diaries for the Internet Storm Center; therefore, I publish less personal stuff here and mostly relay the content published on the ISC website. If you have read my stuff for a long time (or even if you are a newcomer), thank you very much!

A few stats about the site:

I know that these numbers might seem low for many of you but I'm proud of them!

The post This Blog Has 20 Years! appeared first on /dev/random.

02 Feb 2023 8:12pm GMT

Koen Vervloesem: How to stop brltty from claiming your USB UART interface on Linux

Today I wanted to program an ESP32 development board, the ESP-Pico-Kit v4, but when I connected it to my computer's USB port, the serial connection didn't appear in Linux. Suspecting a hardware issue, I tried another ESP32 board, the ESP32-DevKitC v4, but this didn't appear either, so then I tried another one, a NodeMCU ESP8266 board, which had the same problem. Time to investigate...

The dmesg output looked suspicious:

[14965.786079] usb 1-1: new full-speed USB device number 5 using xhci_hcd
[14965.939902] usb 1-1: New USB device found, idVendor=10c4, idProduct=ea60, bcdDevice= 1.00
[14965.939915] usb 1-1: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[14965.939920] usb 1-1: Product: CP2102 USB to UART Bridge Controller
[14965.939925] usb 1-1: Manufacturer: Silicon Labs
[14965.939929] usb 1-1: SerialNumber: 0001
[14966.023629] usbcore: registered new interface driver usbserial_generic
[14966.023646] usbserial: USB Serial support registered for generic
[14966.026835] usbcore: registered new interface driver cp210x
[14966.026849] usbserial: USB Serial support registered for cp210x
[14966.026881] cp210x 1-1:1.0: cp210x converter detected
[14966.031460] usb 1-1: cp210x converter now attached to ttyUSB0
[14966.090714] input: PC Speaker as /devices/platform/pcspkr/input/input18
[14966.613388] input: BRLTTY 6.4 Linux Screen Driver Keyboard as /devices/virtual/input/input19
[14966.752131] usb 1-1: usbfs: interface 0 claimed by cp210x while 'brltty' sets config #1
[14966.753382] cp210x ttyUSB0: cp210x converter now disconnected from ttyUSB0
[14966.754671] cp210x 1-1:1.0: device disconnected

So the ESP32 board, with its Silicon Labs CP2102 USB-to-UART controller chip, was recognized and attached to the /dev/ttyUSB0 device, as it normally should be. But then suddenly the brltty command intervened and disconnected the serial device.

I looked up what brltty does, and apparently it is a system daemon that provides access to the console for a blind person using a braille display. When looking into the contents of the package on my Ubuntu 22.04 system (with dpkg -L brltty), I saw a udev rules file, so I grepped for the product ID of my USB device in the file:

$ grep ea60 /lib/udev/rules.d/85-brltty.rules
ENV{PRODUCT}=="10c4/ea60/*", ATTRS{manufacturer}=="Silicon Labs", ENV{BRLTTY_BRAILLE_DRIVER}="sk", GOTO="brltty_usb_run"

Looking at the surrounding context, the file shows:

# Device: 10C4:EA60
# Generic Identifier
# Vendor: Cygnal Integrated Products, Inc.
# Product: CP210x UART Bridge / myAVR mySmartUSB light
# BrailleMemo [Pocket]
# Seika [Braille Display]
ENV{PRODUCT}=="10c4/ea60/*", ATTRS{manufacturer}=="Silicon Labs", ENV{BRLTTY_BRAILLE_DRIVER}="sk", GOTO="brltty_usb_run"

So apparently there's a braille display with the same CP210x USB-to-UART controller that a lot of microcontroller development boards have. And because this udev rule claims the interface for the brltty daemon, UART communication with all these development boards isn't possible anymore.

As I'm not using these braille displays, the fix for me was easy: just find the systemd unit that loads these rules, then mask and stop it.

$ systemctl list-units | grep brltty
brltty-udev.service loaded active running Braille Device Support
$ sudo systemctl mask brltty-udev.service
Created symlink /etc/systemd/system/brltty-udev.service → /dev/null.
$ sudo systemctl stop brltty-udev.service

After this, I was able to use the serial interface again on all my development boards.
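
If you do need brltty for an actual braille display, a narrower workaround (my own suggestion, untested, based on the rules file shown above) would be to override just the offending udev rule instead of masking the whole service:

# Files in /etc/udev/rules.d override same-named files in /lib/udev/rules.d
sudo cp /lib/udev/rules.d/85-brltty.rules /etc/udev/rules.d/85-brltty.rules
# Comment out the rule that claims the CP210x (10c4:ea60) for the Seika driver
sudo sed -i '/10c4\/ea60/s/^/#/' /etc/udev/rules.d/85-brltty.rules
sudo udevadm control --reload-rules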

02 Feb 2023 8:12pm GMT

Frederic Descamps: MySQL 8.0.32: thank you for the contributions

The latest MySQL release was published on January 17th, 2023. MySQL 8.0.32 contains some new features and bug fixes. As usual, it also contains contributions from our great MySQL Community.

I would like to thank all contributors on behalf of the entire Oracle MySQL team!

MySQL 8.0.32 contains patches from Facebook/Meta, Alexander Reinert, Luke Weber, Vilnis Termanis, Naoki Someya, Maxim Masiutin, Casa Zhang from Tencent, Jared Lundell, Zhe Huang, Rahul Malik from Percona, Andrey Turbanov, Dimitry Kudryavtsev, Marcelo Altmann from Percona, Sander van de Graaf, Kamil Holubicki from Percona, Laurynas Biveinis, Seongman Yang, Yamasaki Tadashi, Octavio Valle, Zhao Rong, Henning Pöttker, Gabrielle Gervasi and Nico Pay.

Here is the list of the above contributions and related bugs. We can see that for this release our connectors received several contributions, which is always a good sign of their increasing popularity.

We can also notice the return of a major contributor: Laurynas Biveinis!

Connectors

Connector / NET

Connector / Python

Connector / J

Connector / C++

Clients & API

Replication

InnoDB and Clone

Optimizer

If you have patches and you also want to be part of the MySQL Contributors, it's easy: you can send pull requests from MySQL's GitHub repositories or submit your patches on Bugs MySQL (signing the Oracle Contributor Agreement is required).

Thank you again to all our contributors!

02 Feb 2023 8:12pm GMT

Fedora People

Richard W.M. Jones: Fedora now has frame pointers

Fedora now has frame pointers. I don't want to dwell on the how of this, it was a somewhat controversial decision and you can read all about it here. But I do want to say a bit about the why, and how it makes performance analysis so much easier.

Recently we've been looking at a performance problem in qemu. To try to understand this I've been looking at FlameGraphs all day, like this one:

[FlameGraph of the qemu workload]

FlameGraphs rely on the Linux tool perf being able to collect stack traces. The stack traces start in the kernel and go up through userspace often for dozens or even hundreds of frames. They must be collected quickly (my 1 minute long trace has nearly half a million samples) and accurately.
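
If you want to try this yourself, the usual workflow (a sketch using Brendan Gregg's FlameGraph scripts; the sampling frequency and duration here are arbitrary) looks like this:

# sample all CPUs at 99 Hz for 60 seconds, collecting call graphs
perf record -F 99 -a -g -- sleep 60
# fold the stacks and render an interactive SVG flame graph
perf script | ./stackcollapse-perf.pl | ./flamegraph.pl > flamegraph.svg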

Perf (or actually I think it's some component of the kernel) has various methods to unwind the stack. It can use frame pointers, kernel ORC information or DWARF debug information. The thing is that DWARF unwinding (the only userspace option that doesn't use frame pointers) is really unreliable. In fact it has such serious problems that it's not that usable at all.

For example, here is a broken stack trace from Fedora 37 (with full debuginfo installed):

[FlameGraph showing a broken stack trace on Fedora 37]

Notice that we go from the qemu-img program, through an "[unknown]" frame, into zlib's inflate.

In the same trace we get completely detached frames too (which are wrong):

[FlameGraph showing completely detached frames]

Upgrading zlib to F38 (with frame pointers) shows what this should look like:

[FlameGraph with zlib from Fedora 38, frame pointers enabled]

Another common problem with lack of frame pointers can be seen in this trace from Fedora 37:

[FlameGraph of the workload on Fedora 37, without frame pointers]

It looks like it might be OK, until you compare it to the same workload using Fedora 38 libraries:

[FlameGraph of the same workload with Fedora 38 libraries]

Look at those beautiful towering peaks! What seems to be happening (I don't know why) is that stack traces start in the wrong place when you don't have frame pointers (note that FlameGraphs show stack traces upside down, with the starting point in the kernel shown at the top). Also if you look closely you'll notice missed frames in the first one, like the "direct" call to __libc_action which actually goes through an intermediate frame.

Before Fedora 38 the only way to get good stack traces was to recompile your software and all of its dependencies with frame pointers, a massive pain in the neck and a major barrier to entry when investigating performance problems.

With Fedora 38, it's simply a matter of using the regular libraries, installing debuginfo if you want (it does still add detail), and you can start using perf straight away by following Brendan Gregg's tutorials.

02 Feb 2023 7:33pm GMT

Linux Today

8 Best Window Managers for Linux

Want to organize your windows and use all the screen space you have? These window managers for Linux should come in handy!

The post 8 Best Window Managers for Linux appeared first on Linux Today.

02 Feb 2023 7:00pm GMT

Planet Ubuntu

Ubuntu Blog: From model-centric to data-centric MLOps

MLOps (short for machine learning operations) is slowly evolving into an independent approach to the machine learning lifecycle that includes all steps, from data gathering to governance and monitoring. It will become a standard as artificial intelligence moves towards becoming part of everyday business rather than a purely innovative activity.

Get an intro to MLOps on the 15th of February with Canonical's experts.

Register now

Over time, there have been different approaches used in MLOps. The most popular ones are the model-driven and data-driven approaches. The split between them is defined by the main focus of the AI system: data or code. Which one should you choose? The decision challenges data scientists to decide which component will play the more important role in the development of a robust model. In this blog, we will evaluate both.


Model-centric development

Model-driven development focuses, as the name suggests, on machine learning model performance. It uses different methods of experimentation in order to improve the performance of the model, without altering the data. The main goal of this approach is to work on the code and optimise it as much as possible. It includes code, model architecture and training processes as well.


If you look deeper into this development method, the model-driven approach is all about high-quality ML models. What it means in reality is that developers focus on using the best set of ML algorithms and AI platforms. The approach is also the basis for great advancements in the AI space, such as the development of specialised frameworks like TensorFlow or PyTorch.

Model-centric development has been around since the early days of the discipline, so it benefits from widespread adoption across a variety of AI applications. The reason for this can be traced back to the fact that AI was initially a research-focused area. Historically, this approach was designed for challenging problems and huge datasets, which ML specialists were meant to solve by optimising AI models. It has also been driven by the wide adoption of open source, which allows free access to various GitHub repositories. Model-driven development encourages developers to experiment with the latest bits of technology and try to get the best results by fine-tuning the model. From an organisational perspective, it is suited for enterprises which have enough data to train machine-learning models.

When it comes to pitfalls, the model-centric approach requires a lot of manual work at the various stages of the ML lifecycle. For example, data scientists have to spend a lot of time on data labelling, data validation or training the model. The approach may result in slower project delivery, higher costs and little return on investment. This is the main reason why practitioners considered trying to tackle this problem from a different perspective with data-centric development.

Data-centric development

As it is often mentioned, data is the heart of any AI initiative. The data-centric approach takes this statement seriously, by systematically interacting with the datasets in order to obtain better results and increase the accuracy of machine learning applications.


Compared to the model-centric approach, the ML model here is fixed, and all improvements are related to the data. These enhancements range from better data labelling to using different data samples for training or increasing the size of the dataset. This approach improves data handling as well, by creating a common understanding of the datasets.

The data-centric approach is built around a few essential practices, described in the sections below:

Data labelling for data-centric development

Data labelling assigns labels to data. The process provides information about the datasets that are then used by algorithms to learn. It emphasises both content and structure information, so it often includes various data types, measurement units, or time periods represented in the dataset. Having correct and consistent labels can define the success of an AI project.

Data-centric development often highlights the importance of correct labelling. There are various examples of how to approach it; the key goal is avoiding inconsistencies and ambiguities. Below is an image that Andrew Ng offers as an example of data labels in practice. In this case, it illustrates two labelling problems: inconsistency and ambiguity.

[Image: Andrew Ng's example of inconsistent and ambiguous data labels]

Data augmentation for data-centric development

Data augmentation is a process that consists of generating new data by various means, such as interpolation or exploration. It is not always needed, but in some instances models require a larger amount of data at various stages of the ML lifecycle: training, validation, and data synthesis.

Whenever you perform this activity, checking data quality and ensuring the elimination of noise is also part of the guidelines.

Error analysis for data-centric development

Error analysis is a process performed once a model is trained. Its main goal is to identify a subset that can be used for improving the dataset. It is a task that requires diligence, as it needs to be performed repeatedly, in order to get gradual improvements in both data quality and model performance.

Data versioning for data-centric development

Data versioning tracks the changes that happen within datasets, in order to identify performance changes within the model. It enables collaboration, eases the data management process and speeds up the delivery of machine learning pipelines from experimentation to production.
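
As a concrete illustration (DVC is my example here; the post itself doesn't name a tool), dataset versions can be tracked alongside the code in Git:

$ dvc init                               # set up data versioning in an existing Git repo
$ dvc add data/train.csv                 # track the dataset; Git stores a small pointer file, not the data
$ git add data/train.csv.dvc .gitignore
$ git commit -m "Track training data v1"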

When it comes to pitfalls, the data-centric method struggles mostly with data. On one hand, it can be hard to manage and control. On the other hand, it can be biased if it does not represent the actual population, leading to models that underperform in real life. Lastly, because of the data requirements, it can easily be expensive or suitable only for projects which have collected data for a longer period of time.

Model-centric and data-centric development with MLOps

In reality, both of these approaches are tightly linked to MLOps. Regardless of the option that data scientists choose, they need to follow MLOps guidelines and integrate their method within the tooling that they choose. Developers can use the same tool but have different approaches across different projects. The main difference could occur at the level of the ML lifecycle where changes are happening. It's important to note that the approach will affect how the model is optimised for the specific initiative, so choosing it with care is important to position your project for success.

Get an intro to MLOps on the 15th of February with Canonical's experts.

Register now

Charmed Kubeflow is an end-to-end MLOps tool designed for scaling machine learning models to production. Because of its features and integrations, it can support both model-centric and data-centric development. It is an open-source platform which encourages contributions, and it forms the foundation of the growing MLOps ecosystem that Canonical is building, with integrations at various levels: hardware, tooling and AI frameworks.

Learn more about MLOps

02 Feb 2023 6:26pm GMT

Linux Today

How to Delete Files With Specific Extensions From the Command Line

Here's how you can delete a large number of files with the same extension or a similar pattern of files you need to remove from your system.

The post How to Delete Files With Specific Extensions From the Command Line appeared first on Linux Today.

02 Feb 2023 6:00pm GMT

John the Ripper: Password Cracking Tutorial and Review

John the Ripper is a popular open-source password cracking tool that can be used to perform brute-force attacks. Learn more here.

The post John the Ripper: Password Cracking Tutorial and Review appeared first on Linux Today.

02 Feb 2023 5:00pm GMT

Fedora People

Fedora Community Blog: Outreachy Summer’23: Call for Projects and Mentors!

The Fedora Project is participating in the upcoming round of Outreachy. We need more project ideas and mentors! The last day to propose a project or to apply as a general mentor is February 24, 2023, at 4pm UTC.

Outreachy provides a unique opportunity for underrepresented groups to gain valuable experience in open-source and gain access to a supportive community of mentors and peers. By participating in this program, the Fedora community can help create a more diverse and inclusive tech community.

If you have a project idea for the upcoming round of Outreachy, please open a ticket in the mentored projects repository. You can also volunteer to be a mentor for a project that's not yours. As a supporting mentor, you will guide interns through the completion of the project.

A good project proposal makes all the difference. It saves time for both the mentors and the applicants.

What makes a good project proposal

The Mentored Projects Coordinators will review your ideas and help you prep your project proposal to be submitted to Outreachy.

How to participate

Project Mentor

Signing up as a mentor is a commitment. Before signing up, please consider the following:

Please read through the mentor-faq page from Outreachy.

General Mentor

We are also looking for general mentors to facilitate communication, feedback, and evaluation with the interns working on the selected projects.

Submit your proposals

Please submit your project ideas and mentorship availability as soon as possible. The last date for project idea submission is February 24, 2023.

Mentoring can be a fulfilling pursuit. It is a great opportunity to contribute to the community and shape the future of Fedora by mentoring a talented intern who will work on your project. Don't miss out on this exciting opportunity to make a difference in the Fedora community and the tech industry as a whole. Together, we can make the open-source community even more diverse and inclusive.

The post Outreachy Summer'23: Call for Projects and Mentors! appeared first on Fedora Community Blog.

02 Feb 2023 4:51pm GMT

Kernel Planet

Linux Plumbers Conference: Preliminary Dates and Location for LPC2023

The 2023 LPC PC is pleased to announce that we've begun exclusive negotiations with the Omni Hotel in Richmond, VA to host Plumbers 2023 from 13-15 November. Note: these dates are not yet final (nor is the location; of all the Plumbers venues we've chosen, one has fallen through at this stage of negotiations). We will let you know when this preliminary location is finalized (please don't book irrevocable travel until then).

The November dates were the only ones that currently work for the venue, but Richmond is on the same latitude as Seville in Spain, so it should still be nice and warm.

02 Feb 2023 4:18pm GMT

LXer Linux News

Open source Ray 2.2 boosts machine learning observability to help scale services like OpenAI's ChatGPT

Ray, the popular open-source machine learning (ML) framework, has released its 2.2 version with improved performance and observability capabilities, as well as features that can help to enable reproducibility.

02 Feb 2023 3:03pm GMT

Red Hat gives an ARM up to OpenShift Kubernetes operations

With the new release, Red Hat is integrating new capabilities to help improve security and compliance for OpenShift, as well as new deployment options on ARM-based architectures. The OpenShift 4.12 release comes as Red Hat continues to expand its footprint, announcing partnerships with Oracle and SAP this week.

02 Feb 2023 12:40pm GMT

Kernel Planet

Matthew Garrett: Blocking free API access to Twitter doesn't stop abuse

In one week from now, Twitter will block free API access. This prevents anyone who has written interesting bot accounts, integrations, or tooling from accessing Twitter without paying for it. A whole number of fascinating accounts will cease functioning, people will no longer be able to use tools that interact with Twitter, and anyone using a free service to do things like find Twitter mutuals who have moved to Mastodon or to cross-post between Twitter and other services will be blocked.

There's a cynical interpretation to this, which is that despite firing 75% of the workforce Twitter is still not profitable and Elon is desperate to not have Twitter go bust and also not to have to tank even more of his Tesla stock to achieve that. But let's go with the less cynical interpretation, which is that API access to Twitter is something that enables bot accounts that make things worse for everyone. Except, well, why would a hostile bot account do that?

To interact with an API you generally need to present some sort of authentication token to the API to prove that you're allowed to access it. It's easy enough to restrict issuance of those tokens to people who pay for the service. But, uh, how do the apps work? They need to be able to communicate with the service to tell it to post tweets, retrieve them, and so on. And the simple answer to that is that they use some hardcoded authentication tokens. And while registering for an API token yourself identifies that you're not using an official client, using the tokens embedded in the clients makes it look like you are. If you want to make it look like you're a human, you're already using tokens ripped out of the official clients.

The Twitter client API keys are widely known. Anyone who's pretending to be a human is using those already and will be unaffected by the shutdown of the free API tier. Services like movetodon.org do get blocked. This isn't an anti-abuse choice. It's one that makes it harder to move to other services. It's one that blocks a bunch of the integrations and accounts that bring value to the platform. It's one that hurts people who follow the rules, without hurting the ones who don't. This isn't an anti-abuse choice, it's about trying to consolidate control of the platform.


02 Feb 2023 10:36am GMT

LXer Linux News

Revisited: termusic – terminal-based music player

When we reviewed termusic back in April 2022, we lamented that, with one exception, this music player was a strong candidate for anyone looking for a terminal-based music player: the software lacked gapless playback.

02 Feb 2023 10:17am GMT

Planet Maemo

Recommended Easy-to-Win Online Casino Gambling Games

Every player aiming for wins in online casino games needs to know which games are available to play. There are many games on offer these days, but you are advised to focus on finding the ones that are easy to win. Such casino games usually have many fans and players, so you can be confident that playing them will make it easier for you to win.

Easy wins and big profits in a gambling game depend heavily on the techniques and strategies you use. It is therefore important to research in advance the tricks and strategies you are going to apply. Beyond that, it is worth first finding out which games are the most popular, and then choosing and playing one of the games with the largest player base.

A List of Recommended Easy-to-Win Online Casino Games

There are many recommended casino games available today, and you are advised to pick the ones that are easy to win. Easy-to-win slot games are a good choice because they help you recover your stake quickly. So take some time to learn about the online casino games that have always attracted the most players. You can choose and play one of the games below, which have proven to be good options:

  1. Sicbo - This dice game is one of the popular choices in Indonesia thanks to the excitement of playing with three dice; you then have to guess which numbers will come up after they are rolled.
  2. Dragon Tiger - Another popular option is Dragon Tiger, a fairly unique and interesting game with two sides, Dragon and Tiger. Its appeal and distinctiveness draw many people to play it.
  3. Casino Holdem Poker - This card game is also quite popular in Indonesia and is available in online casinos, so it is worth trying. It lets you sharpen your skill and instincts while offering plenty of excitement and challenge.
  4. Blackjack - Online blackjack is another recommended choice with many attractive offers. It is a very engaging game with a classic feel; it has been around for a long time and remains a favourite with many players.
  5. Bacarat - Finally, you can also try online baccarat. The game is known as a profitable choice that can offer big opportunities and earnings, and a chance to play in a system that is safe, comfortable and well integrated.

So make sure you choose and play one of the best gambling games with the biggest rewards. Learn how to play gambling games safely, comfortably and profitably.

The post Rekomendasi Game Judi Online Casino Mudah Menang appeared first on VALERIOVALERIO.


02 Feb 2023 10:10am GMT

Planet Python

Codementor: Roadmap to Functional Programming

roadmap to learn functional programming

02 Feb 2023 8:58am GMT

Justin Mayer: Python Development Environment on MacOS Ventura and Monterey

While installing Python and Virtualenv on MacOS Ventura and Monterey can be done in several ways, this tutorial will guide you through the process of configuring a stock Mac system into a solid Python development environment.

First steps

This guide assumes that you have already installed Homebrew. For details, please follow the steps in the MacOS Configuration Guide.

Python

We are going to install the latest version of Python via asdf and its Python plugin. Why bother, you ask, when Apple includes Python along with MacOS? Here are some reasons:

Use the following command to install asdf and Python build dependencies via Homebrew:

brew install asdf openssl readline sqlite3 xz zlib

Next we ensure asdf is loaded for both current and future shell sessions. If you are using Fish shell:

# Load asdf for this session
source (brew --prefix)/opt/asdf/asdf.fish

# Ensure asdf loads for all subsequent sessions
echo source (brew --prefix)/opt/asdf/asdf.fish >> ~/.config/fish/config.fish

# Ensure asdf doesn't disrupt activated virtual environments
echo 'if set -q VIRTUAL_ENV; source "$VIRTUAL_ENV/bin/activate.fish"; end' >> ~/.config/fish/config.fish

For Zsh (the default shell on MacOS):

. $(brew --prefix asdf)/asdf.sh
echo -e "\n. $(brew --prefix asdf)/asdf.sh" >> ~/.zshrc

Install the asdf Python plugin and the latest version of Python:

asdf plugin add python
asdf install python latest

Note the Python version number that was just installed. For the purpose of this guide, we will assume version 3.11.1, so replace that number below with the version number you actually just installed.

Set the default global Python version:

asdf global python 3.11.1

Confirm the Python version matches the latest version we just installed:

python --version



Pip

Let's say you want to install a Python package, such as the Virtualenv environment isolation tool. While many Python-related articles for MacOS tell the reader to install Virtualenv via sudo pip install virtualenv, the downsides of this method include:

  1. installs with root permissions
  2. installs into the system /Library
  3. yields a less reliable environment when using Python built with asdf

As you might have guessed by now, we are going to use the asdf Python plugin to install the Python packages that we want to be globally available. When installing via python -m pip […], packages will be installed to: ~/.asdf/installs/python/{version}/lib/python{version}/site-packages/

First, let's ensure we are using the latest version of Pip and Setuptools:

python -m pip install --upgrade pip setuptools

In the next section, we'll use Pip to install our first globally-available Python package.

Virtualenv

Python packages installed via Pip are global in the sense that they are available across all of your projects. That can be convenient at times, but it can also create problems. For example, sometimes one project needs the latest version of Django, while another project needs an older Django version to retain compatibility with a critical third-party extension. This is one of many use cases that Virtualenv was designed to solve. On my systems, only a handful of general-purpose Python packages (including Virtualenv) are globally available - every other package is confined to virtual environments.

With that explanation behind us, let's install Virtualenv:

python -m pip install virtualenv
asdf reshim python

Create some directories to store our projects, virtual environments, and Pip configuration file, respectively:

mkdir -p ~/Projects ~/Virtualenvs ~/.config/pip

We'll then open Pip's configuration file (creating it if it doesn't exist yet)…

vim ~/.config/pip/pip.conf

… and add some lines to it:

[install]
require-virtualenv = true

[uninstall]
require-virtualenv = true

Now we have Virtualenv installed and ready to create new virtual environments, which we will store in ~/Virtualenvs. New virtual environments can be created via:

cd ~/Virtualenvs
virtualenv project-a

If you have both Python 3.10.x and 3.11.x installed and want to create a Python 3.10.9 virtual environment:

virtualenv -p ~/.asdf/installs/python/3.10.9/bin/python project-b



Restricting Pip to virtual environments

What happens if we think we are working in an active virtual environment, but there actually is no virtual environment active, and we install something via python -m pip install foobar? Well, in that case the foobar package gets installed into our global site-packages, defeating the purpose of our virtual environment isolation.

Thankfully, Pip has an undocumented setting (source) that tells it to bail out if there is no active virtual environment, which is exactly what we want. In fact, we've already set that above, via the require-virtualenv = true directive in Pip's configuration file. For example, let's see what happens when we try to install a package in the absence of an activated virtual environment:

python -m pip install markdown
Could not find an activated virtualenv (required).

Perfect! But once that option is set, how do we install or upgrade a global package? We can temporarily turn off this restriction by defining a new function in ~/.zshrc:

gpip(){
   PIP_REQUIRE_VIRTUALENV="0" python -m pip "$@"
}

(As usual, after adding the above you must run source ~/.zshrc for the change to take effect.)

If in the future we want to upgrade our global packages, the above function enables us to do so via:

gpip install --upgrade pip setuptools virtualenv

You could achieve the same effect via PIP_REQUIRE_VIRTUALENV="0" python -m pip install --upgrade […], but that's much more cumbersome to type every time.
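
If you are using Fish instead of Zsh (as in the earlier setup steps), a rough equivalent, which I'm assuming rather than quoting from the original guide, would be a function in ~/.config/fish/config.fish:

function gpip
    # temporarily lift the require-virtualenv restriction for global installs
    env PIP_REQUIRE_VIRTUALENV=0 python -m pip $argv
end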

Creating virtual environments

Let's create a virtual environment for Pelican, a Python-based static site generator:

cd ~/Virtualenvs
virtualenv pelican

Change to the new environment and activate it via:

cd pelican
source bin/activate

To install Pelican into the virtual environment, we'll use Pip:

python -m pip install pelican markdown

For more information about virtual environments, read the Virtualenv docs.

Dotfiles

These are obviously just the basic steps to getting a Python development environment configured. Feel free to also check out my dotfiles.



If you found this article to be useful, feel free to find me on Twitter.

02 Feb 2023 7:00am GMT

Codementor: Hello World program in C [In series]

read previous post before jumping to this program

02 Feb 2023 5:32am GMT

Django community aggregator: Community blog posts

MongoDB - Mark Smith

Support the Show

This podcast does not have any ads or sponsors. To support the show, please consider purchasing a book, signing up for Button, or reading the Django News newsletter.

02 Feb 2023 12:00am GMT

Planet KDE | English

KDE Gear 22.12.2

Over 120 individual programs plus dozens of programmer libraries and feature plugins are released simultaneously as part of KDE Gear.

Today they all get new bugfix source releases with updated translations, including:

Distro and app store packagers should update their application packages.

02 Feb 2023 12:00am GMT

01 Feb 2023

Planet Ubuntu

Ubuntu Blog: Multipass 1.11 brings enhanced performance for Linux on Mac and Windows

Multipass 1.11 is here!

This release has some particularly interesting features that we've been wanting to ship for a while now. We're excited to share them with you!

For those who aren't familiar with Multipass, it's software that streamlines every aspect of managing and working with virtual machines. We've found that development, particularly for cloud applications, can often involve a huge amount of tedious work setting up development and testing environments. Multipass aims to solve that by making the process of creating and destroying VMs as simple as a single command, and by integrating the VM into your host machine and your development flow as much as possible.

That principle of integration is one of the main focuses we had for the 1.11 release. There are two major features out today that make Multipass much more integrated with your host machine - native mounts and directory mapping.


Performance boost

Performance has always been in Multipass' DNA - we try to keep it as lightweight as we can so that nothing gets between developers and their work. With the 1.11 release, we've taken another big step forward.

With the new native mounts feature, Multipass is getting a major performance boost. This feature uses platform-optimized software to make filesystems shared between the host computer and the virtual machine much faster than before. In benchmarking, we've seen speed gains of around 10x! For people sharing data with Multipass from their host machine, this is a huge time saver.

Multipass is one of the few VM management tools available to developers on Apple silicon. Performance mounts make the M1 and M2 even faster platforms for Ubuntu. For those who don't remember, Multipass can launch VMs on the Apple M1 and M2 in less than 20 seconds.

User experience

Multipass' performance leveled up with this release, and the user experience did as well! Directory mapping is a new way to be more efficient than ever with Multipass. Multipass has supported command aliasing for some time now, but one drawback of aliasing alone is that it loses the context of where the command is executed in the filesystem. Commands like docker-compose, for example, are context sensitive. They may rely on certain files being present in the working directory, or give different results depending on where they are run.

Directory mapping maintains the context of an aliased command, meaning that an aliased command sent from the host will be executed in the same context on the VM. This feature has the potential to make it feel like you are running linux programs natively on your Mac or Windows terminal.
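
As a rough illustration of what directory mapping enables (the instance name, mount, and alias below are my own assumptions, not taken from the release notes, and the alias script directory must be on your PATH):

$ multipass launch --name dev 22.04
$ multipass mount ~/Projects dev:/home/ubuntu/Projects
$ multipass alias dev:python3 py3       # aliased command, context-sensitive since 1.11
$ cd ~/Projects/my-app
$ py3 main.py                           # runs inside "dev", in the mapped Projects directory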


Other upgrades

In addition to directory mapping, Blueprints now allow for alias and workspace definitions, meaning you can now spin up a new VM and start using aliased (and context-sensitive) commands in a shared filespace with no additional configuration required. Look for some examples in the near future!

Some other notable upgrades include the `transfer` command and UEFI booting. The `transfer` command now allows for recursive file transfers. This should make it much easier to transfer entire directory trees as opposed to individual files. Multipass now boots its instances via UEFI, which means we are able to support Ubuntu Core 20 and 22 for our IoT developers.

To get started with Multipass, head to our install page or check out our tutorials. We always love to hear feedback from our community, so please let us know what you're up to by posting in discourse, or dropping in for our office hours.

01 Feb 2023 3:35pm GMT

Julian Andres Klode: Ubuntu 2022v1 secure boot key rotation and friends

This is the story of the currently progressing changes to secure boot on Ubuntu and the history of how we got to where we are.

taking a step back: how does secure boot on Ubuntu work?

Booting on Ubuntu involves three components after the firmware:

  1. shim
  2. grub
  3. linux

Each of these is a PE binary signed with a key. The shim is signed by Microsoft's 3rd party key and embeds a self-signed Canonical CA certificate, and optionally a vendor dbx (a list of revoked certificates or binaries). grub and linux (and fwupd) are then signed by a certificate issued by that CA.

In Ubuntu's case, the CA certificate is sharded: Multiple people each have a part of the key and they need to meet to be able to combine it and sign things, such as new code signing certificates.

BootHole

When BootHole happened in 2020, travel was suspended and we hence could not rotate to a new signing certificate. So when it came to updating our shim for the CVEs, we had to revoke all previously signed kernels, grubs, shims, fwupds by their hashes.

This generated a very large vendor dbx, which caused lots of issues, as shim exported it to a UEFI variable and not everyone had enough space for such large variables. Sigh.

We decided we want to rotate our signing key next time.

This was also when upstream added SBAT metadata to shim and grub. This gives a simple versioning scheme for security updates and easy revocation using a simple EFI variable that shim writes to and reads from.

Spring 2022 CVEs

We still were not ready for travel in 2021, but during BootHole we developed the SBAT mechanism, so one could revoke a grub or shim by setting a single EFI variable.

We actually missed rotating the shim this cycle as a new vulnerability was reported immediately after it, and we decided to hold on to it.

2022 key rotation and the fall CVEs

This caused some problems when the 2nd CVE round came, as we did not have a shim with the latest SBAT level, and neither did a lot of others, so we ended up deciding upstream to not bump the shim SBAT requirements just yet. Sigh.

Anyway, in October we were meeting again for the first time at a Canonical sprint, and the shardholders got together and created three new signing keys: 2022v1, 2022v2, and 2022v3. It took us until January before they were installed into the signing service and PPAs setup to sign with them.

We also submitted a shim 15.7 with the old keys revoked which came back at around the same time.

Now we were in a hurry. The 22.04.2 point release was scheduled for around middle of February, and we had nothing signed with the new keys yet, but our new shim which we need for the point release (so the point release media remains bootable after the next round of CVEs), required new keys.

So how do we ensure that users have kernels, grubs, and fwupd signed with the new key before we install the new shim?

upgrade ordering

grub and fwupd are simple cases: For grub, we depend on the new version. We decided to backport grub 2.06 to all releases (which moved focal and bionic up from 2.04), and kept the versioning of the -signed packages the same across all releases, so we were able to simply bump the Depends for grub to specify the new minimum version. For fwupd-efi, we added Breaks.

(Actually, we also had a backport of the CVEs for 2.04 based grub, and we did publish that for 20.04 signed with the old keys before backporting 2.06 to it.)

Kernels are a different story: There are about 60 kernels out there. My initial idea was that we could just add Breaks for all of them. So for our meta package linux-image-generic, which depends on linux-image-$(uname -r)-generic, we'd simply add Breaks: linux-image-generic (<< 5.19.0-31) and then adjust those Breaks for each series. This would have been super annoying, but ultimately I figured this would be the safest option. This however caused concern, because it could be that apt decides to remove the kernel metapackage.

I explored checking the kernels at runtime and aborting if we don't have a trusted kernel in preinst. This ensures that if you try to upgrade shim without having a kernel, it would fail to install. But this ultimately has a couple of issues:

  1. It aborts the entire transaction at that point, so users will be unable to run apt upgrade until they have a recent kernel.
  2. We cannot even guarantee that a kernel would be unpacked first. So even if you got a new kernel, apt/dpkg might attempt to unpack it first and then the preinst would fail because no kernel is present yet.

Ultimately we believed the danger to be too large given that no kernels had yet been released to users. If we had kernels pushed out for 1-2 months already, this would have been a viable choice.

So in the end, I ended up modifying the shim packaging to install both the latest shim and the previous one, and an update-alternatives alternative to select between the two:

In its post-installation maintainer script, shim-signed checks whether all kernels with a version greater than or equal to the running one are not revoked, and if so, it will set up the latest alternative with priority 100 and the previous one with a priority of 50. If one or more of those kernels was signed with a revoked key, it will swap the priorities around, so that the previous version is preferred.
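
In shell terms, the result is roughly the following (a simplified sketch of the alternatives setup, not the actual maintainer script; the paths match the update-alternatives output shown further below):

# no installed kernel >= the running one is signed with a revoked key:
update-alternatives --install /usr/lib/shim/shimx64.efi.signed shimx64.efi.signed \
    /usr/lib/shim/shimx64.efi.signed.latest 100
update-alternatives --install /usr/lib/shim/shimx64.efi.signed shimx64.efi.signed \
    /usr/lib/shim/shimx64.efi.signed.previous 50
# otherwise the priorities are swapped, so the .previous shim wins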

Now this is fairly static, and we do want you to switch to the latest shim eventually, so I also added hooks to the kernel install to trigger the shim-signed postinst script when a new kernel is being installed. It will then update the alternatives based on the current set of kernels, and if it now points to the latest shim, reinstall shim and grub to the ESP.

Ultimately this means that once you install your 2nd non-revoked kernel, or you install a non-revoked kernel and then reconfigure shim or the kernel, you will get the latest shim. When you install your first non-revoked kernel, your currently booted kernel is still revoked, so it's not upgraded immediately. This has a benefit in that you will most likely have two kernels you can boot without disabling secure boot.

regressions

Of course, the first version I uploaded still had some remaining hardcoded "shimx64" in the scripts and so failed to install on arm64, where "shimaa64" is used. And if that were not enough, I also forgot to include support for gzip compressed kernels there. Sigh, I need better testing infrastructure to be able to easily run arm64 tests as well (I only tested the actual booting there, not the scripts).

shim-signed migrated to the release pocket in lunar fairly quickly, but this caused images to stop working, because the new shim was installed into images, but no kernel was available yet, so we had to demote it to proposed and block migration. Despite all the work done for end users, we need to be careful to roll this out for image building.

another grub update for OOM issues.

We had two grubs to release: First there was the security update for the recent set of CVEs, then there also was an OOM issue for large initrds which was blocking critical OEM work.

We fixed the OOM issue by cherry-picking all 2.12 memory management patches, as well as the red hat patches to the loader we take from there. This ended up a fairly large patch set and I was hesitant to tie the security update to that, so I ended up pushing the security update everywhere first, and then pushed the OOM fixes this week.

With the OOM patches, you should be able to boot initrds of between 400MB and 1GB; it also depends on the memory layout of your machine and your screen resolution and background images. So the OEM team had success testing 400MB irl, and I tested up to, I think, 1.2GB in qemu; I ran out of FAT space then and stopped going higher :D

other features in this round

am I using this yet?

The new signing keys are used in:

If you were able to install shim-signed, your grub and fwupd-efi will have the correct version as that is ensured by packaging. However your shim may still point to the old one. To check which shim will be used by grub-install, you can check the status of the shimx64.efi.signed or (on arm64) shimaa64.efi.signed alternative. The best link needs to point to the file ending in latest:

$ update-alternatives --display shimx64.efi.signed
shimx64.efi.signed - auto mode
  link best version is /usr/lib/shim/shimx64.efi.signed.latest
  link currently points to /usr/lib/shim/shimx64.efi.signed.latest
  link shimx64.efi.signed is /usr/lib/shim/shimx64.efi.signed
/usr/lib/shim/shimx64.efi.signed.latest - priority 100
/usr/lib/shim/shimx64.efi.signed.previous - priority 50

If it does not, but you have installed a new kernel compatible with the new shim, you can switch to the new shim immediately after rebooting into that kernel by running dpkg-reconfigure shim-signed. You'll see in the output whether the shim was updated, or you can check the output of update-alternatives as you did above after the reconfiguration has finished.

For the out of memory issues in grub, you need grub2-signed 1.187.3~ (same binaries as above).

how do I test this (while it's in proposed)?

  1. upgrade your kernel to proposed and reboot into that
  2. upgrade your grub-efi-amd64-signed, shim-signed, fwupd-signed to proposed.

If you already upgraded your shim before your kernel, don't worry:

  1. upgrade your kernel and reboot
  2. run dpkg-reconfigure shim-signed

And you'll be all good to go.

deep dive: uploading signed boot assets to Ubuntu

For each signed boot asset, we build one version in the latest stable release and the development release. We then binary copy the built binaries from the latest stable release to older stable releases. This process ensures two things: We know the next stable release is able to build the assets and we also minimize the number of signed assets.

OK, I lied. For shim, we actually do not build in the development release but copy the binaries upward from the latest stable, as each shim needs to go through external signing.

The entire workflow looks something like this:

  1. Upload the unsigned package to one of the following "build" PPAs:

  2. Upload the signed package to the same PPA

  3. For stable release uploads:

    • Copy the unsigned package back across all stable releases in the PPA
    • Upload the signed package for stable releases to the same PPA with ~<release>.1 appended to the version
  4. Submit a request to canonical-signing-jobs to sign the uploads.

The signing job helper copies the binary -unsigned packages to the primary-2022v1 PPA where they are signed, creating a signing tarball; it then copies the source package for the -signed package to the same PPA, which downloads the signing tarball during the build and places the signed assets into the -signed deb.

    Resulting binaries will be placed into the proposed PPA: https://launchpad.net/~ubuntu-uefi-team/+archive/ubuntu/proposed

  5. Review the binaries themselves

  6. Unembargo and binary copy the binaries from the proposed PPA to the proposed-public PPA: https://launchpad.net/~ubuntu-uefi-team/+archive/ubuntu/proposed-public.

    This step is not strictly necessary, but it enables tools like sru-review to work, as they cannot access the packages from the normal private "proposed" PPA.

  7. Binary copy from proposed-public to the proposed queue(s) in the primary archive

Lots of steps!

WIP

As of writing, only the grub updates have been released; other updates are still being verified in proposed. An update for fwupd in bionic will be issued at a later point, removing the EFI bits from the fwupd 1.2 packaging and using the separate fwupd-efi project instead, as later release series do.

01 Feb 2023 1:40pm GMT

Planet Maemo

Which Online Gambling Games Are Popular in Indonesia?

Did you know that there are many online gambling games available to play in Indonesia? You are free to choose any game you like, but you should try to find out which ones are the most popular and profitable. That way you can find a game that can bring you luck. To find such a game, you of course have to keep searching until you come across one.

Memilih permainan casino yang populer tentu bisa menjadi salah satu cara dan langkah terbaik yang bisa kita lakukan. Kenapa demikian? Ya ketika memang permainan itu populer dan banyak dimainkan oleh banyak orang itu artinya memang ada banyak keuntungan dan kelebihan yang ditawarkan oleh permainan tersebut. Tidak akan menyesal jika bisa bergabung dan bermain di salah satu pilihan permainan yang populer. Justru akan ada banyak keuntungan dan kelebihan besar yang bisa kita peroleh dari keputusan kita bergabung di salah satu pilihan permainan yang populer.

Apa Saja Permainan Judi Online yang Populer di Indonesia?

Indonesia ada banyak sekali pilihan permainan casino yang tersedia dan bisa kita mainkan dan tentunya kita bisa memilih jenis permainan yang paling bagus dan paling populer. Dari banyak pilihan permainan yang ada, memang ada beberapa diantaranya yang sangat populer dan banyak diminati oleh banyak kalangan. Ketika memang ada banyak orang yang meminati untuk bermain itu juga bisa menjadi salah satu pilihan yang tepat. Apa saja Pilihan permainan yang banyak dimainkan itu? Beberapa diantaranya dan penjelasannya di bawah ini:

Beberapa permainan di atas juga di Indonesia masih banyak permainan judi Casino versi online lainnya yang tersedia dan bisa anda pilih dan mainkan. Segitunya untuk Anda bisa memilih dan memainkan permainan mana saja namun cobalah untuk Anda cek Apakah terbuat dari permainan itu dan bagaimana caranya agar Anda bisa mendapatkan keuntungan finansial dari permainan tersebut. Kamu itu harus dipikirkan dengan baik sebelum kita terjun ke dalam permainan itu agar kita berhasil memperoleh banyak penghasilan dan juga keseluruhan dan kesenangan dari Casino Online itu. Selamat mencoba dan selamat memainkan salah satu atau beberapa permainan yang tersedia.

The post Apa Permainan Judi Online yang Populer di Indonesia? appeared first on VALERIOVALERIO.


01 Feb 2023 10:08am GMT

feedDjango community aggregator: Community blog posts

Securing FastAPI with JWT Token-based Authentication

This tutorial shows how to secure a FastAPI application with JWT Token-based Authentication.

01 Feb 2023 4:28am GMT

feedPlanet KDE | English

My work in KDE for January 2023

This is a non-comprehensive list of all of the major work I've done for KDE this month of January. I think I got a lot done this month! I also was accepted as a KDE Developer near the start of the month, so I'm pretty happy about that.

Sorry that it's pretty much only text; a lot of this stuff either isn't screenshottable or I'm too lazy to attach an image. Next month should be better!

Custom icon theme in Tokodon

I threw all of the custom icons we use in Tokodon into a proper custom icon theme, which should automatically match your theme and includes a dark theme variant. In the future, I'd like to recolor these better and eventually upstream them into Breeze.

See the merge request.

KXMLGUI tooltips

As part of cleaning up some KDE games-related stuff, I also looked into the issue of duplicate "What's This?" tooltips. This also fixes the visual bug where normal tooltips that don't have any "What's This?" information still offered to open it.

See the merge request.

KBlocks background changes

This one isn't merged yet, but in the future KBlocks theme authors will be able to specify where to pin the background instead of having it stretched by default.

See the merge request.

Kirigami "About KDE" dialog

I added something that's been wanted for a while, Kirigami's own "About KDE" dialog! It's currently sitting in Add-ons, but will most likely be moved in the future. If you would like to suggest what we do about the About pages/windows in KDE, please check out the proposal.

Kirigami Add-on's About KDE dialog

See the merge request.

Media improvements in Tokodon

I did a lot of work improving media in Tokodon this month, including fixing aspect ratios so they scale correctly, video support (not merged yet), and other miscellaneous fixes. I also caught a bunch of blurhash bugs, and made the timeline fixed-width so images aren't absurdly sized on a typical desktop display.

Tokodon on a large display

See the media layout fixes, three attachment fix, and the video support merge requests.

Krita.org dark theme

I'm starting to get involved in improving the KDE websites, and currently working on the new Krita.org website and adding a proper dark theme to it.

Krita.org in the dark

See the work-in-progress merge request.

Gwenview MPRIS fixes

Not merged yet (due to MPRIS bugginess in general?), but I took a crack at improving the MPRIS situation with Gwenview. Notably, slideshow controls no longer "hang around" when a slideshow isn't actually happening.

See the open merge request.

CMake Package Installer

I worked a little on solving the kdesrc-build issue of manual package lists, and created cmake-package-installer. It parses your CMake log and installs the relevant packages for you. I want to start looking into hooking this into kdesrc-build!

See the repository.

KDE Wiki improvements

I made some misc changes to the Community Wiki this month, mostly centered around fixing some long-standing formatting issues I've noticed. The homepage should be more descriptive, important pages no longer misformatted (or just missing?) and the Get Involved/Development page should be better organized.

Misc Qt patches

I cherry-picked a Qt6 commit fixing video playback in QML, which should appear in the next Qt KDE Patch collection update, mostly for use in Tokodon when video support lands. I also submitted an upstream Qt patch fixing WebP loading, meant for NeoChat where I see the most WebP images.

See the GStreamer cherry-pick and the WebP patch.

Window Decoration KCM overhaul

This isn't merged yet (but it's close!) so it barely misses the mark for January, but I'll include it anyway. I'm working on making the Window Decoration KCM frameless and giving it a new look that matches the other KCMs.

New Window Decoration KCM

See the merge request.

01 Feb 2023 12:00am GMT

feedplanet.freedesktop.org

Mike Blumenkrantz: Fastlink

Fast-linking: This Is Your Howto

I previously wrote a post talking about some optimization work that's been done with RADV to improve fast-link performance. As promised, that wasn't the end of the story. Today's post will be a bit different, however, as I'll be assuming all the graphics experts in the audience are already well-versed in all the topics I'm covering.

Also I'm assuming you're all driver developers interested in improving your GPL fast-link performance.

The one exception is that today I'll be using a specific definition for fast when it comes to fast-linking: to be fast, a driver should be able to fast-link in under 0.01ms. In an extremely CPU-intensive application, this should allow for even the explodiest of pipeline explosions (100+ fast-links in a single frame) to avoid any sort of hitching/stuttering.

Which drivers have what it takes to be fast?

Testing

To begin evaluating fast-link performance, it's important to have test cases. Benchmarks. The sort that can be easily run, easily profiled, easily understood.

vkoverhead is the premier tool for evaluating CPU overhead in Vulkan drivers, and thanks to Valve, it now has legal support for GPL fast-link using real pipelines from Dota2. That's right. Acing this synthetic benchmark will have real world implications.

For anyone interested in running these cases, it's as simple as building and then running:

./vkoverhead -start 135

These benchmark cases will call vkCreateGraphicsPipelines in a tight loop to perform a fast-link on GPL-created pipeline libraries, fast-linking thousands of times per second for easy profiling. The number of iterations per second, in thousands, is then printed.

vkoverhead works with any Vulkan driver on any platform (including Windows!), which means it's possible to use it to profile and optimize any driver.
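
For anyone curious what one of these cases boils down to, here is a minimal sketch of a fast-link timing loop (my own illustration, not vkoverhead's actual code). It assumes the device, a compatible pipeline layout, and the four GPL libraries (vertex input, pre-rasterization, fragment shader, fragment output) were created up front:

#include <time.h>
#include <vulkan/vulkan.h>

/* Returns fast-links per second. Omitting
 * VK_PIPELINE_CREATE_LINK_TIME_OPTIMIZATION_BIT_EXT from the flags is what
 * requests a fast link instead of a full optimized compile. The
 * vkDestroyPipeline in the loop adds a little overhead that a dedicated
 * benchmark would avoid. */
double fastlinks_per_second(VkDevice device, VkPipelineLayout layout,
                            const VkPipeline libs[4], unsigned iterations)
{
    VkPipelineLibraryCreateInfoKHR link = {
        .sType = VK_STRUCTURE_TYPE_PIPELINE_LIBRARY_CREATE_INFO_KHR,
        .libraryCount = 4,
        .pLibraries = libs,
    };
    VkGraphicsPipelineCreateInfo info = {
        .sType = VK_STRUCTURE_TYPE_GRAPHICS_PIPELINE_CREATE_INFO,
        .pNext = &link,
        .layout = layout,
    };
    struct timespec start, end;
    clock_gettime(CLOCK_MONOTONIC, &start);
    for (unsigned i = 0; i < iterations; i++) {
        VkPipeline pipeline;
        vkCreateGraphicsPipelines(device, VK_NULL_HANDLE, 1, &info, NULL, &pipeline);
        vkDestroyPipeline(device, pipeline, NULL);
    }
    clock_gettime(CLOCK_MONOTONIC, &end);
    double secs = (end.tv_sec - start.tv_sec) + (end.tv_nsec - start.tv_nsec) * 1e-9;
    return iterations / secs;
}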

Optimization

vkoverhead currently has two cases for GPL fast-link. As they are both extracted directly from Dota2, they have a number of properties in common:

Each case tests the following:

Various tools are available on different platforms for profiling, and I'm not going to go into details here. What I'm going to do instead is look into strategies for optimizing drivers. Strategies that I (and others) have employed in real drivers. Strategies that you, if you aren't shipping a fast-linking implementation of GPL, might be interested in.

First Strategy: Move NO-OP Fragment Shader To Device

The depthonly case explicitly tests whether drivers are creating a new fragment shader for every pipeline that lacks one. Drivers should not do this.

Instead, create a single fragment shader on the device object and reuse it like these drivers do:

In addition to being significantly faster, this also saves some memory.

Second Strategy: Avoid Copying Shader IR

Regular, optimized pipeline creation typically involves running optimization passes across the shader binaries, possibly even the entire pipeline, to ensure that various speedups can be found. Many drivers copy the internal shader IR in the course of pipeline creation to handle shader variants.

Don't copy shader IR when trying to fast-link a pipeline.

Copying IR is very expensive, especially in larger shaders. Instead, either precompile unoptimized shader binaries in their corresponding GPL stage or refcount IR structures that must exist during execution. Examples:

Third Strategy: Avoid Compiling Shaders

This one seems obvious, but it has to be stated.

Do not compile shaders when attempting to achieve fast-link speed.

If you are compiling shaders, this is a very easy place to start optimizing.

Fourth Strategy: Avoid Caching Fast-link Pipelines

There's no reason to cache a fast-linked pipeline. The amount of time saved by retrieving a cached pipeline should be outweighed by the amount of time required to:

I say should because ideally a driver should be so fast at combining a GPL pipeline that even a cache hit is only comparable performance, if not slower outright. Skip all aspects of caching for these pipelines.

Fifth Strategy: Misc Profiling

If a driver is still slow after checking for the above items, it's time to try profiling. It's surprising what slowdowns drivers will hit. The classics I've seen are large memset calls and avoidable allocations.

Some examples:

A Mystery Solved

In my previous post, I alluded to a driver that was shipping a GPL implementation that advertised fast-link but wasn't actually fast. I saw a lot of guesses. Nobody got it right.

scooby.jpg

It was Lavapipe (me) all along.

As hinted at above, however, this is no longer the case. In fact, after going through the listed strategies, Lavapipe now has the fastest GPL linking in the world.

Obviously it would have to if I'm writing a blog post about optimizing fast-linking, right?

Fast-linking: Initial Comparisons

How fast is Lavapipe's linking, you might ask?

To answer this, let's first apply a small patch to bump up Lavapipe's descriptor limits so it can handle the beefy Dota2 pipelines. With that done, here's a look at comparisons to other, more legitimate drivers, all running on the same system.

NVIDIA is the gold standard for GPL fast-linking considering how long they've been shipping it. They're pretty fast.

$ VK_ICD_FILENAMES=nvidia_icd.json ./vkoverhead -start 135 -duration 5
vkoverhead running on NVIDIA GeForce RTX 2070:
        * misc numbers are reported as thousands of operations per second
        * percentages for misc cases should be ignored
 135, misc_compile_fastlink_depthonly,                      444,          100.0%
 136, misc_compile_fastlink_slow,                           243,          100.0%

RADV (with pending MRs applied) has gotten incredibly fast over the past week-ish.

$ RADV_PERFTEST=gpl ./vkoverhead -start 135 -duration 5
vkoverhead running on AMD Radeon RX 5700 XT (RADV NAVI10):
        * misc numbers are reported as thousands of operations per second
        * percentages for misc cases should be ignored
 135, misc_compile_fastlink_depthonly,                      579,          100.0%
 136, misc_compile_fastlink_slow,                           537,          100.0%

Lavapipe (with pending MRs applied) blows them both out of the water.

$ VK_ICD_FILENAMES=lvp_icd.x86_64.json ./vkoverhead -start 135 -duration 5
vkoverhead running on llvmpipe (LLVM 15.0.6, 256 bits):
        * misc numbers are reported as thousands of operations per second
        * percentages for misc cases should be ignored
 135, misc_compile_fastlink_depthonly,                     1485,         100.0%
 136, misc_compile_fastlink_slow,                          1464,         100.0%

Even if the NVIDIA+RADV numbers are added together, it's still not close.

Fast-linking: More Comparisons

If I switch over to a different machine, Intel's ANV driver has a MR for GPL open, and it's seeing some movement. Here's a head-to-head with the champion.

$ ./vkoverhead -start 135 -duration 5
vkoverhead running on Intel(R) Iris(R) Plus Graphics (ICL GT2):
        * misc numbers are reported as thousands of operations per second
        * percentages for misc cases should be ignored
 135, misc_compile_fastlink_depthonly,                      384,          100.0%
 136, misc_compile_fastlink_slow,                           276,          100.0%

$ VK_ICD_FILENAMES=lvp_icd.x86_64.json ./vkoverhead -start 135 -duration 5
vkoverhead running on llvmpipe (LLVM 15.0.6, 256 bits):
        * misc numbers are reported as thousands of operations per second
        * percentages for misc cases should be ignored
 135, misc_compile_fastlink_depthonly,                     1785,         100.0%
 136, misc_compile_fastlink_slow,                          1779,         100.0%

On yet another machine, here's Turnip, which advertises the fast-link feature. This driver requires a small patch to modify MAX_SETS=5 since this is hardcoded at 4. I've also pinned execution here to the big cores for consistency.

# turnip ooms itself with -duration
$ ./vkoverhead -start 135
vkoverhead running on Turnip Adreno (TM) 618:
        * misc numbers are reported as thousands of operations per second
        * percentages for misc cases should be ignored
 135, misc_compile_fastlink_depthonly,                       73,           100.0%
 136, misc_compile_fastlink_slow,                            23,           100.0%

$ VK_ICD_FILENAMES=lvp_icd.aarch64.json ./vkoverhead -start 135 -duration 5
vkoverhead running on llvmpipe (LLVM 14.0.6, 128 bits):
        * misc numbers are reported as thousands of operations per second
        * percentages for misc cases should be ignored
 135, misc_compile_fastlink_depthonly,                      690,          100.0%
 136, misc_compile_fastlink_slow,                           699,          100.0%

More Analysis

We've seen that Lavapipe is unequivocally the champion of fast-linking in every head-to-head, but what does this actually look like in timings?

Here's a chart that shows the breakdown in milliseconds.

Driver     misc_compile_fastlink_depthonly    misc_compile_fastlink_slow
NVIDIA     0.002ms                            0.004ms
RADV       0.0017ms                           0.0019ms
Lavapipe   0.0007ms                           0.0007ms
ANV        0.0026ms                           0.0036ms
Lavapipe   0.00056ms                          0.00056ms
Turnip     0.0137ms                           0.0435ms
Lavapipe   0.001ms                            0.001ms

As we can see, all of these drivers are "fast". A single fast-link pipeline isn't likely to cause any of them to drop a frame.

The driver I've got my eye on, however, is Turnip, which is the only one of the tested group that doesn't quite hit that 0.01ms target. A little bit of profiling might show some easy gains here.

Even More Analysis

For another view of these drivers, let's examine the relative performance. Since GPL fast-linking is inherently a CPU task that has no relation to the GPU, it stands to reason that a CPU-based driver should be able to optimize for it the best given that there's already all manner of hackery going on to defer and delay execution. Indeed, reality confirms this, and looking at any profile of Lavapipe for the benchmark cases reveals that the only remaining bottleneck is the speed of malloc, which is to say the speed with which the returned pipeline object can be allocated.

Thus, ignoring potential micro-optimizations of pipeline struct size, it can be said that Lavapipe has effectively reached the maximum speed of the system for fast-linking. From there, we can say that any other driver running on the same system is utilizing some fraction of this power.

Therefore, every other driver's fast-link performance can be visualized in units of Lavapipe (lvps) to determine how much gain is possible if things like refactoring time and feasibility are ignored.

Driver     misc_compile_fastlink_depthonly    misc_compile_fastlink_slow
NVIDIA     0.299lvps                          0.166lvps
RADV       0.390lvps                          0.367lvps
ANV        0.215lvps                          0.155lvps
Turnip     0.106lvps                          0.033lvps

The great thing about lvps is that these are comparable units.

At last, we finally have a way to evaluate all these drivers in a head-to-head across different systems.

The results are a bit surprising to me:

Key Takeaways

Aside from the strategies outlined above, the key takeaway for me is that there shouldn't be any hardware limitation to implementing fast-linking. It's a CPU-based architectural problem, and with enough elbow grease, any driver can aspire to reach nonzero lvps in vkoverhead's benchmark cases.

01 Feb 2023 12:00am GMT

31 Jan 2023

feedPlanet GNOME

Jussi Pakkanen: PDF with font subsetting and a look in the future

After several days of head scratching, debugging and despair I finally got font subsetting working in PDF. The text renders correctly in Okular, goes through Ghostscript without errors and even passes an online PDF validator I found. But not Acrobat Reader, which chokes on it completely and refuses to show anything. Sigh.

The most likely cause is that the subset font that gets generated during this operation is not 100% valid. The approach I use is almost identical to what LO does, but for some reason their PDFs work. Opening both files in FontForge seems to indicate that the .notdef glyph definition is somehow not correct, but offers no help as to why.

In any case it seems like there would be a need for a library for PDF generation. Existing libs either do not handle non-RGB color spaces or are implemented in Java, Ruby or other languages that are hard to use from languages other than themselves. Many programs, like LO and Scribus, have their own libraries for generating PDF files. It would be nice if there could be a single library for that.

Is this a reasonable idea that people would actually be interested in? I don't know, but let's try to find out. I'm going to spend the next weekend in FOSDEM. So if you are going too and are interested in PDF generation, write a comment below or send me an email, I guess? Maybe we can have a shadow meeting in the cafeteria.

31 Jan 2023 11:59pm GMT

feedPlanet KDE | English

Building flatpaks and Freedesktop SDK from scratch

Flatpak applications are based on runtimes such as KDE or Gnome Runtimes. Both of these runtimes are actually based on Freedesktop SDK which contains essential libraries and services such as Wayland or D-Bus.

Recently there was a lot of discussion about supply chain attacks, so it might be interesting to ask how the Freedesktop SDK itself was built. The answer can be found in the freedesktop-sdk repository:

sources:
- kind: ostree
url: freedesktop-sdk:releases/
gpg-key: keys/freedesktop-sdk.gpg
track: runtime/org.freedesktop.Sdk.PreBootstrap/x86_64/21.08
ref: 0ecba7699760ffc05c8920849856a20ebb3305da9f1f0377ddb9ca5600be710b

So it is built using an older version of the Freedesktop SDK image. There is now an approved merge request that completely reworks bootstrapping of the Freedesktop SDK. It uses another intermediate Docker image, freedesktop-sdk-binary-seed, that bridges the gap between freedesktop-sdk and live-bootstrap.

So what is this live-bootstrap? If you look at parts.rst you'll see that it is a build chain that starts with a 256-byte hex assembler that can build itself from its own source, plus a 640-byte trivial shell that can read a list of commands from a file and execute them. It then proceeds to build 130 (as of the moment of writing) other components, and in the process builds GCC, Python, Guile, Perl and lots of other supporting packages. Furthermore, each component is built reproducibly (and this is checked using a SHA256 hash).

One caveat: at the moment freedesktop-sdk-binary-seed still uses an older binary of rustc to build rustc, but in principle one could leverage mrustc to build it. Or possibly rust-gcc will become more capable in future versions and will be able to bootstrap rustc.

So unless your flatpak application uses Rust, it will soon be buildable from a sub-1-KiB binary seed.

31 Jan 2023 11:48pm GMT

feedDjango community aggregator: Community blog posts

How to Install Django

This tutorial covers how to properly install the latest version of [Django (4.1)](https://www.djangoproject.com/) and [Python (3.11)](https://www.python.org). As the [official docs note](https://docs.djangoproject.com/en/dev/topics/install/), if you are already familiar with the command line, …

31 Jan 2023 8:38pm GMT

feedPlanet Maemo

Online Gambling Strategies to Win Easily

Choosing a betting game should be thought through properly and never done carelessly. Even with a large collection of games available, you still have to be selective and pick the most suitable one. Doing so is what makes it easier to come away with a win: you can win more easily in the games you play if you play them the right way.

Why do you need a strategy?

Casino games are not purely a matter of luck; you also need to master a range of playing strategies to win easily. The more strategies you master, the bigger the potential profit you can take home. So learn, as far as possible, which strategies you should be using. That makes it easier to win and more fun to play, and playing with a strategy also makes it easier to reach the targets you set for yourself.

Online Gambling Strategies for Winning Consistently

Winning consistently at online gambling sometimes takes a few tricks and well-aimed strategies. Try to learn which tricks can be used, depending on the game you choose. For example, if you want to play games such as Pragmatic slots, Joker 123 and the like, use the right techniques and strategies.

Once you have picked the right game, just play it and enjoy the excitement it offers.

Pick games with a high chance of winning

The easiest first step is to pick a game with a high winning potential, which takes a bit of searching. There are plenty of options available today, but be aware that not every game has a high RTP.

Prepare a large enough bankroll

Any player who wants to win big at online gambling is advised to prepare sufficient capital. Having enough capital affects many things, including your win rate and the size of the profit you can take home. So, as far as possible, prepare enough of a bankroll to give yourself the chance to play for larger gains.

Study the openings for a win

Every game has openings for a win that can be studied. Make sure you learn what they are and how to use them to multiply your profit. You can even study several sources across different media to gather that information.

If you are still a beginner, however, you are strongly advised to be aware that there are specific methods and strategies you can use. There are basic and advanced guides that need to be understood well, so that you can then take large profits from the game. This also helps things run smoothly and gives you an even bigger chance of winning.

The post Online Gambling Strategies to Win Easily appeared first on VALERIOVALERIO.


31 Jan 2023 10:08am GMT

29 Jan 2023

feedPlanet GNOME

Jakub Steiner: GNOME 44 Wallpapers

As we gear up for the release of GNOME 44, let's take a moment to reflect on the visual design updates.

GNOME 44 Wallpapers

We've made big strides in visual consistency with the growing number of apps that have been ported to gtk4 and libadwaita, embracing the modern look. Sam has also given the high-contrast style in GNOME Shell some love, keeping it in line with gtk's updates last cycle.

The default wallpaper stays true to our brand, but the supplementary set has undergone some bolder changes. From the popular simple shape blends to a nostalgic nod to the past with keypad and pixel art designs, there's something for everyone. The pixelized icons made their debut in the last release, but this time we focus on GNOME Circle apps, rather than the core apps.

Another exciting development is the continued use of geometry nodes in Blender. Although the tool has a steep learning curve, I'm starting to enjoy my time with it. I gave a talk on geometry nodes and its use for GNOME wallpaper design at the Fedora Creative Freedom Summit. You can watch the stream archive recording here (and part2).

Previously, Previously, Previously

29 Jan 2023 11:00pm GMT

27 Jan 2023

feedKernel Planet

Matthew Garrett: Further adventures in Apple PKCS#11 land

After my previous efforts, I wrote up a PKCS#11 module of my own that had no odd restrictions about using non-RSA keys and I tested it. And things looked much better - ssh successfully obtained the key, negotiated with the server to determine that it was present in authorized_keys, and then went to actually do the key verification step. At which point things went wrong - the Sign() method in my PKCS#11 module was never called, and a strange
debug1: identity_sign: sshkey_sign: error in libcrypto
sign_and_send_pubkey: signing failed for ECDSA "testkey": error in libcrypto"

error appeared in the ssh output. Odd. libcrypto was originally part of OpenSSL, but Apple ship the LibreSSL fork. Apple don't include the LibreSSL source in their public source repo, but do include OpenSSH. I grabbed the OpenSSH source and jumped through a whole bunch of hoops to make it build (it uses the macosx.internal SDK, which isn't publicly available, so I had to cobble together a bunch of headers from various places), and also installed upstream LibreSSL with a version number matching what Apple shipped. And everything worked - I logged into the server using a hardware-backed key.

Was the difference in OpenSSH or in LibreSSL? Telling my OpenSSH to use the system libcrypto resulted in the same failure, so it seemed pretty clear this was an issue with the Apple version of the library. The way all this works is that when OpenSSH has a challenge to sign, it calls ECDSA_do_sign(). This then calls ECDSA_do_sign_ex(), which in turn follows a function pointer to the actual signature method. By default this is a software implementation that expects to have the private key available, but you can also register your own callback that will be used instead. The OpenSSH PKCS#11 code does this by calling EC_KEY_set_method(), and as a result calling ECDSA_do_sign() ends up calling back into the PKCS#11 code that then calls into the module that communicates with the hardware and everything works.

Except it doesn't under macOS. Running under a debugger and setting a breakpoint on EC_do_sign(), I saw that we went down a code path with a function called ECDSA_do_sign_new(). This doesn't appear in any of the public source code, so seems to be an Apple-specific patch. I pushed Apple's libcrypto into Ghidra and looked at ECDSA_do_sign() and found something that approximates this:

nid = EC_GROUP_get_curve_name(curve);
if (nid == NID_X9_62_prime256v1) {
  return ECDSA_do_sign_new(dgst,dgst_len,eckey);
}
return ECDSA_do_sign_ex(dgst,dgst_len,NULL,NULL,eckey);

What this means is that if you ask ECDSA_do_sign() to sign something on a Mac, and if the key in question corresponds to the NIST P256 elliptic curve type, it goes down the ECDSA_do_sign_new() path and never calls the registered callback. This is the only key type supported by the Apple Secure Enclave, so I assume it's special-cased to do something with that. Unfortunately the consequence is that it's impossible to use a PKCS#11 module that uses Secure Enclave keys with the shipped version of OpenSSH under macOS. For now I'm working around this with an SSH agent built using Go's agent module, forwarding most requests through to the default session agent but appending hardware-backed keys and implementing signing with them, which is probably what I should have done in the first place.


27 Jan 2023 11:39pm GMT

feedPlanet Gentoo

Handy commands to clean up old ~arch-only packages

Here's a bunch of handy commands that I've conceived to semi-automatically remove old versions of packages that do not have stable keywords (and therefore are not subject to post-stabilization cleanups that I do normally).

Requirements

The snippets below require the following packages:

They should be run in the top directory of a ::gentoo checkout, ideally with no other changes queued.

Remove redundant versions

First, a pipeline that finds all packages without stable amd64 keywords (note: this is making an assumption that there are no packages that are stable only on some other architecture), then scans these packages for redundant versions and removes them. The example command operates on dev-python/*:

pkgcheck scan 'dev-python/*' -c UnstableOnlyCheck -a amd64 \
    -R FormatReporter --format '{category}/{package}' |
  sort -u |
  xargs pkgcheck scan -c RedundantVersionCheck \
    -R FormatReporter --format \
    '{category}/{package}/{package}-{version}.ebuild' |
  xargs git rm

Check for broken revdeps

The next step is to check for broken revdeps. Start with:

rdep-fetch-cache
check-revdep $(
  git diff --name-only --cached | cut -d'/' -f1-2 | sort -u
)

Use git restore -WS ... to restore versions as necessary and repeat until it comes out clean.

Check for stale files

Finally, iterate over packages with files/ to check for stale patches:

(
  for x in $(
    git diff --name-only --cached | cut -d'/' -f1-2 | sort -u
  ); do
    [[ -d ${x}/files ]] && ( cd ${x}; bash )
  done
)

This will start bash inside every cleaned up package (note: I'm assuming that there are no other changes in the repo) that has a files/ directory. Use a quick grep followed by FILESDIR lookup:

grep FILES *.ebuild
ls files/

Remove the files that are no longer referenced.

Commit the removals

for x in $(
  git diff --name-only --cached | cut -d'/' -f1-2 | sort -u
); do
  (
    cd ${x} && pkgdev manifest &&
    pkgcommit . -sS -m 'Remove old'
  )
done
pkgcheck scan --commits

27 Jan 2023 8:07pm GMT

feedplanet.freedesktop.org

Mike Blumenkrantz: Through The Loop

A New Level Of Speed

I know everyone's been eagerly awaiting the return of the pasta maker.

The wait is over.

But today we're going to move away from those dangerous, addictive synthetic benchmarks to look at a different kind of speed. That's right. Today we're looking at pipeline compile speed. Some of you are scoffing, mouse pointer already inching towards the close button on the tab.

Pipeline compile speed in the current year? Why should anyone care when we have great tools like Fossilize that can precompile everything for a game over the course of several hours to ensure there's no stuttering?

It turns out there's at least one type of pipeline compile that still matters going forward. Specifically, I'm talking about fast-linked pipelines using VK_EXT_graphics_pipeline_library.

Let's get an appetizer going, some exposition under our belts before we get to the spaghetti we're all craving.

Pipelines: They're In Your Games

All my readers are graphics experts. It won't come as any surprise when I say that a pipeline is a program containing shaders which is used by the GPU. And you all know how VK_EXT_graphics_pipeline_library enables compiling partial pipelines into libraries that can then be combined into a full pipeline. None of you need a refresher on this, and we all acknowledge that I'm just padding out the word count of this post for posterity.

Some of you experts, however, have been so deep into getting those green triangles on the screen to pass various unit tests that you might not be fully aware of the fast-linking property of VK_EXT_graphics_pipeline_library.

In general, compiling shaders during gameplay is (usually) bad. This is (usually) what causes stuttering: the compilation of a pipeline takes longer than the available time to draw the frame, and rendering blocks until compilation completes. The fast-linking property of VK_EXT_graphics_pipeline_library changes this paradigm by enabling pipelines, e.g., for shader variants, to be created fast enough to avoid stuttering.

Typically, this is utilized in applications through the following process:

In this way, no draw is blocked by a pipeline creation, and optimized pipelines are still used for the majority of GPU operations.
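
To make that flow concrete, here is a rough sketch (my own illustration under the usual assumptions, not code from any particular engine): one GPL library is created up front with the library and retain-link-time-optimization-info bits, and the same linking call is used both for the draw-time fast link and for the later optimized link, with only the link-time-optimization flag differing.

#include <stdbool.h>
#include <vulkan/vulkan.h>

/* Build the fragment-shader library once, ahead of time. The other three
 * library types (vertex input, pre-rasterization, fragment output) follow
 * the same pattern with different VK_GRAPHICS_PIPELINE_LIBRARY_* flags. */
VkPipeline create_fragment_library(VkDevice device, VkPipelineLayout layout,
                                   const VkPipelineShaderStageCreateInfo *frag_stage)
{
    VkGraphicsPipelineLibraryCreateInfoEXT lib_info = {
        .sType = VK_STRUCTURE_TYPE_GRAPHICS_PIPELINE_LIBRARY_CREATE_INFO_EXT,
        .flags = VK_GRAPHICS_PIPELINE_LIBRARY_FRAGMENT_SHADER_BIT_EXT,
    };
    VkGraphicsPipelineCreateInfo info = {
        .sType = VK_STRUCTURE_TYPE_GRAPHICS_PIPELINE_CREATE_INFO,
        .pNext = &lib_info,
        .flags = VK_PIPELINE_CREATE_LIBRARY_BIT_KHR |
                 VK_PIPELINE_CREATE_RETAIN_LINK_TIME_OPTIMIZATION_INFO_BIT_EXT,
        .stageCount = 1,
        .pStages = frag_stage,
        .layout = layout,
    };
    VkPipeline lib;
    vkCreateGraphicsPipelines(device, VK_NULL_HANDLE, 1, &info, NULL, &lib);
    return lib;
}

/* Combine the four libraries: optimized=false at draw time (fast link),
 * optimized=true later on a background thread. */
VkPipeline link_pipeline(VkDevice device, VkPipelineLayout layout,
                         const VkPipeline libs[4], bool optimized)
{
    VkPipelineLibraryCreateInfoKHR link = {
        .sType = VK_STRUCTURE_TYPE_PIPELINE_LIBRARY_CREATE_INFO_KHR,
        .libraryCount = 4,
        .pLibraries = libs,
    };
    VkGraphicsPipelineCreateInfo info = {
        .sType = VK_STRUCTURE_TYPE_GRAPHICS_PIPELINE_CREATE_INFO,
        .pNext = &link,
        .flags = optimized ? VK_PIPELINE_CREATE_LINK_TIME_OPTIMIZATION_BIT_EXT : 0,
        .layout = layout,
    };
    VkPipeline pipeline;
    vkCreateGraphicsPipelines(device, VK_NULL_HANDLE, 1, &info, NULL, &pipeline);
    return pipeline;
}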

But Why…

…would I care about this if I have Fossilize and a brand new gaming supercomputer with 256 cores all running at 12GHz?

I know you're wondering, and the answer is simple: not everyone has these things.

Some people don't have extremely modern computers, which means Fossilize pre-compile of shaders can take hours. Who wants to sit around waiting that long to play a game they just downloaded?

Some games don't use Fossilize, which means there's no pre-compile. In these situations, there are two options:

The former option here gives us load times that remind us of the original Skyrim release. The latter probably yields stuttering.

Thus, VK_EXT_graphics_pipeline_library (henceforth GPL) with fast-linking.

The Obvious Problem

What does the "fast" in fast-linking really mean?

How fast is "fast"?

These are great questions that nobody knows the answer to. The only limitation here is that "fast" has to be "fast enough" to avoid stuttering.

Given that RADV is in the process of bringing up GPL for general use, and given that Zink is relying on fast-linking to eliminate compile stuttering, I thought I'd take out my perf magnifying glass and see what I found.

eyeofsauron.jpg

Uh-oh

Obviously we wouldn't be advertising fast-linking on RADV if it wasn't fast.

Obviously.

It goes without saying that we care about performance. No credible driver developer would advertise a performance-related feature if it wasn't performant.

RIGHT?

And it's not like I tried running Tomb Raider on zink and discovered that the so-called "fast"-link pipelines were being created at a non-fast speed. That would be insane to even consider. I mean, it's literally in the name of the feature, so if using it caused the game to stutter, or if, for example, I was seeing "fast"-link pipelines being created in 10ms+

mesa.png

Surely I didn't see that though.

rotated_mesa.png

Surely I didn't see fast-link pipelines taking more than an entire frame's worth of time to create.

It's Fine

Long-time readers know that this is fine. I'm unperturbed by seeing numbers like this, and I can just file a ticket and move on with my life like a normal per-

angry_mesa.png

OBVIOUSLY I CAN'T.

Obviously.

And just as obviously I had to get a second opinion on this, which is why I took my testing over to the only game I know which uses GPL with fast-link: 3D Pinball: Space Cadet DOTA 2!

Naturally it would be DOTA2, along with any other Source Engine 2 game, that uses this functionality.

Thus, I fired up my game, and faster than I could scream MID OR MEEPO into my mic, I saw the unthinkable spewing out in my console:

COMPILE 11425115
COMPILE 39491
COMPILE 11716326
COMPILE 35963
COMPILE 11057200
COMPILE 37115
COMPILE 10738436

Yes, those are all "fast"-linked pipeline compile times in nanoseconds.

Yes, half of those are taking more than 10ms.

pastamaker.jpg

First Steps

The first step is always admitting that you have a problem, but I don't have a problem. I'm fine. Not upset at all. Don't read more into it.

As mentioned above, we have great tools in the Vulkan ecosystem like Fossilize to capture pipelines and replay them outside of applications. This was going to be a great help.

I thought.

I fired up a 32bit build of Fossilize, set it to run on Tomb Raider, and immediately it exploded.

fossilize-works.png

Zink has, historically, been the final boss for everything Vulkan-related, so I was unsurprised by this turn of events. I filed an issue, finger-painted ineffectually, and then gave up because I had called in the expert.

That's right.

Friend of the blog, artisanal bit-wrangler, and a developer whose only speed is -O3 -ffast-math, Hans-Kristian Arntzen took my hand-waving, unintelligible gibbering, and pointing in the wrong direction and churned out a masterpiece in less time than it took RADV to "fast"-link some of those pipelines.

While I waited, I was working at the picosecond-level with perf to isolate the biggest bottleneck in fast-linking.

Fast-linking: Stop Compiling.

My caveman-like, tool-less hunt yielded immediate results: nir_shader_clone during fast-link was taking an absurd amount of time, and then also the shaders were being compiled at this point.

This was a complex problem to solve, and I had lots of other things to do (so many things), which meant I needed to call in another friend of the blog to take over while I did all the things I had to do.

Some of you know his name, and others just know him as "that RADV guy", but Samuel Pitoiset is the real deal when it comes to driver development. He can crank out an entire extension implementation in less time than it takes me to write one of these long-winded, benchmark-number-free introductions to a blog post, and when I told him we had a huge problem, he dropped* everything and jumped on board.
* and when I say "dropped" I mean he finished finding and fixing another Halo Infinite hang in the time it took me to explain the problem

With lightning speed, Samuel reworked pipeline creation to not do that thing I didn't want it to do. Because doing any kind of compiling when the driver is instead supposed to be "fast" is bad. Really bad.

How did that affect my numbers?

By now I was tired of dealing with the 32bit nonsense of Tomb Raider and had put all my eggs in the proverbial DOTA2 basket, so I again fired up a round, went to AFK in jungle, and checked my debug prints.

COMPILE 55699
COMPILE 55998
COMPILE 58016
COMPILE 56825
COMPILE 60288
COMPILE 110663
COMPILE 59679
COMPILE 50614
COMPILE 54316

Do my eyes deceive me or is that a 20,000% speedup from a single patch?!

Problem Solved

And so the problem was solved. I went to Dan Ginsburg, who I'm sure everyone knows as the author of this incredible blog post about GPL, and I showed him the improvements and our new timings, and I asked what he thought about the performance now.

Dan looked at me. Looked at the numbers I showed him. Shook his head a single time.

It shook me.

I don't know what I was thinking.

In my defense, a 20,000% speedup is usually enough to call it quits on a given project. In this case, however, I had the shadow of a competitor looming overhead.

While RADV was now down to 0.05-0.11ms for a fast-link, NVIDIA can apparently do this consistently in 0.02ms.

That's pretty fast.

Even Faster

By now, the man, the myth, @themaister, Hans-Kristian Arntzen had finished fixing every Fossilize bug that had ever existed and would ever exist in the future, which meant I could now capture and replay GPL pipelines from DOTA2. Fossilize also has another cool feature: it allows for extraction of single pipelines from a larger .foz file, which is great for evaluating performance.

The catch? It doesn't have any way to print per-pipeline compile timings during a replay, nor does it have a way to sort pipeline hashes based on compile times.

Either I was going to have to write some C++ to add this functionality to Fossilize, or I was going to have to get creative. With my Chromium PTSD in mind, I found myself writing out this construct:

for x in $(fossilize-list --tag 6 dota2.foz); do
    echo "PIPELINE $x"
    RADV_PERFTEST=gpl fossilize-replay --pipeline-hash $x dota2.foz 2>&1 | grep COMPILE
done

I'd previously added some in-driver printfs to output compile times for the fast-link pipelines, so this gave me a file with the pipeline hash on one line and the compile timing on the next. I could then sort this and figure out some outliers to extract, yielding slow.foz, a fast-link that consistently took longer than 0.1ms.

I took this to Samuel, and we put our perfs together. Immediately, he spotted another bottleneck: SHA1Transform() was taking up a considerable amount of CPU time. This was occurring because the fast-linked pipelines were being added to the shader cache for reuse.

But what's the point of adding an unoptimized, fast-linked pipeline to a cache when it should take less time to just fast-link and return?

Blammo, another lightning-fast patch from Samuel, and fast-linked pipelines were no longer being considered for cache entries, cutting off even more compile time.

slow.foz was now consistently down to 0.07-0.08ms.

Are We There Yet?

No.

A post-Samuel flamegraph showed a few immediate issues:

memset.png

First, and easiest, a huge memset. Get this thing out of here.

Now slow.foz was fast-linking in 0.06-0.07ms. Where was the flamegraph at on this?

post-memset.png

Now the obvious question: What the farfalloni was going on with still creating a shader?!

It turns out this particular pipeline was being created without a fragment shader, and that shader was being generated during the fast-link process. Incredible coverage testing from an incredible game.

Fixing this proved trickier, and it still remains tricky. An unsolved problem.

However.

<zmike> can you get me a hack that I can use for that foz ?
* zmike just needs to get numbers for the blog
<hakzsam> hmm
<hakzsam> I'm trying

Like a true graphics hero, that hack was delivered just in time for me to run it through the blogginator. What kinds of gains would be had from this untested mystery patch?

slow.foz was now down to 0.023 ms (23566 ns).

final.png

We Did It

Thanks to Hans-Kristian enabling us and Samuel doing a lot of heavy and unsafe lifting while I sucked wind on the sidelines, we hit our target time of 0.02ms, which is a 50,000% improvement from where things started.

What does this mean?

If You're A User…

This means in the very near future, you can fire up RADV_PERFTEST=gpl and run DOTA2 (or zink) on RADV without any kind of shader pre-caching and still have zero stuttering.

If You're A Game Developer…

This means you can write apps relying on fast-linking and be assured that your users will not see stuttering on RADV.

If You're A Driver Developer…

So far, there aren't many drivers out there that implement GPL with true fast-linking. Aside from (a near-future version of) RADV, I'm reasonably certain the only driver that both advertises fast-linking and actually has fast linking is NVIDIA.

If you're from one of those companies that has yet to take the plunge and implement GPL, or if you've implemented it and decided to advertise the fast-linking feature without actually being fast, here's some key takeaways from a week in GPL optimization:

You might be thinking that profiling a single operation like this is tricky, and it's hard to get good results from a single fossilize-replay that also compiles multiple library pipelines.

Never fear, vkoverhead is here to save the day.

You thought I wouldn't plug it again, but here we are. In the very near future (ideally later today), vkoverhead will have some cases that isolate GPL fast-linking. This should prove useful for anyone looking to go from "fast" to fast.

There's no big secret about being truly fast, and there's no architectural limitations on speed. It just takes a little bit of elbow grease and some profiling.

But Also

The goal is to move GPL out of RADV_PERFTEST with Mesa 23.1 to enable it by default. There's still some functional work to be done, but we're not done optimizing here either.

One day I'll be able to say with confidence that RADV has the fastest fast-link in the world, or my name isn't Spaghetti Good Code.

27 Jan 2023 12:00am GMT

24 Jan 2023

feedPlanet PHP

Mastobot: For your Fediverse PHP posting needs

Like much of the world I've been working to migrate off of Twitter to Mastodon and the rest of the Fediverse. Along with a new network comes the need for new automation tools, and I've taken this opportunity to scratch my own itch and finally build an auto-posting bot for my own needs. And it is, of course, available as Free Software.

Announcing Mastobot! Your PHP-based Mastodon auto-poster.

Continue reading this post on PeakD.

Larry 23 January 2023 - 10:13pm

24 Jan 2023 4:13am GMT

23 Jan 2023

feedPlanet Twisted

Glyph Lefkowitz: A Very Silly Program

One of the persistently lesser-known symptoms of ADHD is hyperfocus. It is sometimes quasi-accurately described as a "superpower"1 2, which it can be. In the right conditions, hyperfocus is the ability to effortlessly maintain a singular locus of attention for far longer than a neurotypical person would be able to.

However, as a general rule, it would be more accurate to characterize hyperfocus not as an "ability to focus on X" but rather as "an inability to focus on anything other than X". Sometimes hyperfocus comes on and it just digs its claws into you and won't let go until you can achieve some kind of closure.

Recently, the X I could absolutely not stop focusing on - for days at a time - was this extremely annoying picture:

chroma subsampling carnage

Which led to me writing the silliest computer program I have written in quite some time.


You see, for some reason, macOS seems to prefer YUV422 chroma subsampling3 on external displays, even when the bitrate of the connection and selected refresh rate support RGB.4 Lots of people have been trying to address this for a literal decade5 6 7 8 9 10 11, and the problem has gotten worse with Apple Silicon, where the operating system no longer even supports the EDID-override functionality available on every other PC operating system that supports plugging in a monitor.

In brief, this means that every time I unplug my MacBook from its dock and plug it back in more than 5 minutes later, its color accuracy is destroyed and red or blue text on certain backgrounds looks like that mangled mess in the picture above. Worse, while the color distinction is definitely noticeable, it's so subtle that it's like my display is constantly gaslighting me. I can almost hear it taunting me:

Magenta? Yeah, magenta always looked like this. Maybe it's the ambient lighting in this room. You don't even have a monitor hood. Remember how you had to use one of those for print design validation? Why would you expect it to always look the same without one?

Still, I'm one of the luckier people with this problem, because I can seem to force RGB / 444 color format on my display just by leaving the display at 120Hz rather than 144, then toggling HDR on and then off again. At least I don't need to plug in the display via multiple HDMI and displayport cables and go into the OSD every time. However, there is no API to adjust, or even discover the chroma format of your connected display's link, and even the accessibility features that supposedly let you drive GUIs are broken in the system settings "Displays" panel12, so you have to do it by sending synthetic keystrokes and hoping you can tab-focus your way to the right place.

Anyway, this is a program which will be useless to anyone else as-is, but if someone else is struggling with the absolute inability to stop fiddling with the OS to try and get colors to look correct on a particular external display, by default, all the time, maybe you could do something to hack on this:

import os
from Quartz import CGDisplayRegisterReconfigurationCallback, kCGDisplaySetMainFlag, kCGDisplayBeginConfigurationFlag
from ColorSync import CGDisplayCreateUUIDFromDisplayID
from CoreFoundation import CFUUIDCreateString
from AppKit import NSApplicationMain, NSApplicationActivationPolicyAccessory, NSApplication

NSApplication.sharedApplication().setActivationPolicy_(NSApplicationActivationPolicyAccessory)

CGDirectDisplayID = int
CGDisplayChangeSummaryFlags = int

MY_EXTERNAL_ULTRAWIDE = '48CEABD9-3824-4674-9269-60D1696F0916'
MY_INTERNAL_DISPLAY = '37D8832A-2D66-02CA-B9F7-8F30A301B230'

def cb(display: CGDirectDisplayID, flags: CGDisplayChangeSummaryFlags, userInfo: object) -> None:
    if flags & kCGDisplayBeginConfigurationFlag:
        return
    if flags & kCGDisplaySetMainFlag:
        displayUuid = CGDisplayCreateUUIDFromDisplayID(display)
        uuidString = CFUUIDCreateString(None, displayUuid)
        print(uuidString, "became the main display")
        if uuidString == MY_EXTERNAL_ULTRAWIDE:
            print("toggling HDR to attempt to clean up subsampling")
            os.system("/Users/glyph/.local/bin/desubsample")
            print("HDR toggled.")

print("registered", CGDisplayRegisterReconfigurationCallback(cb, None))

NSApplicationMain([])

and the linked desubsample is this atrocity, which I substantially cribbed from this helpful example:

#!/usr/bin/osascript

use AppleScript version "2.4" -- Yosemite (10.10) or later
use framework "Foundation"
use framework "AppKit"
use scripting additions

tell application "System Settings"
    quit
    delay 1
    activate
    current application's NSWorkspace's sharedWorkspace()'s openURL:(current application's NSURL's URLWithString:"x-apple.systempreferences:com.apple.Displays-Settings.extension")
    delay 0.5

    tell application "System Events"
    tell process "System Settings"
        key code 48
        key code 48
        key code 48
            delay 0.5
        key code 49
        delay 0.5
        -- activate hdr on left monitor

        set hdr to checkbox 1 of group 3 of scroll area 2 of ¬
                group 1 of group 2 of splitter group 1 of group 1 of ¬
                window "Displays"
        tell hdr
                click it
                delay 1.0
                if value is 1
                    click it
                end if
        end tell

    end tell
    end tell
    quit
end tell

This ridiculous little pair of programs does it automatically, so whenever I reconnect my MacBook to my desktop dock at home, it faffs around with clicking the HDR button for me every time. I am leaving it running in a background tmux session so - hopefully - I can finally stop thinking about this.

23 Jan 2023 3:06am GMT

21 Jan 2023

feedplanet.freedesktop.org

Nicolai Hähnle: Diff modulo base, a CLI tool to assist with incremental code reviews

One of the challenges of reviewing a lot of code is that many reviews require multiple iterations. I really don't want to do a full review from scratch on the second and subsequent rounds. I need to be able to see what has changed since last time.

I happen to work on projects that care about having a useful Git history. This means that authors of (without loss of generality) pull requests use amend and rebase to change commits and force-push the result. I would like to see only the changes they made since my last review pass. Especially when the author also rebased onto a new version of the main branch, existing code review tools tend to break down.

Git has a little-known built-in subcommand, git range-diff, which I had been using for a while. It's pretty cool, really: It takes two ranges of commits, old and new, matches old and new commits, and then shows how they changed. The rather huge problem is that its output is a diff of diffs. Trying to make sense of those quickly becomes headache-inducing.

I finally broke down at some point late last year and wrote my own tool, which I'm calling diff-modulo-base. It allows you to look at the difference of the repository contents between old and new in the history below, while ignoring all the changes that are due to differences in the respective base versions A and B.

As a bonus, it actually does explicitly show differences between A and B that would have caused merge conflicts during rebase. This allows a fairly comfortable view of how merge conflicts were resolved.

I've been using this tool for a while now. While there are certainly still some rough edges and to dos, I did put a bunch more effort into it over the winter holidays and am now quite happy with it. I'm making it available for all to try at https://git.sr.ht/~nhaehnle/diff-modulo-base. Let me know if you find it useful!

Better integration with the larger code review flow?

One of the rough edges is that it would be great to integrate tightly with the GitHub notifications workflow. That workflow is surprisingly usable in that you can essentially treat the notifications as an inbox in which you can mark notifications as unread or completed, and can "mute" issues and pull requests, all with keyboard shortcuts.

What's missing in my workflow is a reliable way to remember the most recent version of a pull request that I have reviewed. My somewhat passable workaround for now is to git fetch before I do a round of reviews, and rely on the local reflog of remote refs. A Git alias allows me to say

git dmb-origin $pull_request_id

and have that become

git diff-modulo-base origin/main origin/pull/$pull_request_id/head@{1} origin/pull/$pull_request_id/head

which is usually what I want.
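
For reference, here is one way such an alias could be defined in .gitconfig (my guess, not necessarily the author's exact configuration):

[alias]
    dmb-origin = "!f() { git diff-modulo-base origin/main \"origin/pull/$1/head@{1}\" \"origin/pull/$1/head\"; }; f"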

Ideally, I'd have a fully local way of interacting with GitHub notifications, which could then remember the reviewed version in a more reliable way. This ought to also fix the terrible lagginess of the web interface. But that's a rant for another time.

Rust

This is the first serious piece of code I've written in Rust. I have to say that experience has really been quite pleasant so far. Rust's tooling is pretty great, mostly thanks to the rust-analyzer LSP server.

The one thing I'd wish is that the borrow checker was able to better understand "partial" borrows. I find it occasionally convenient to tie a bunch of data structures together in a general context structure, and helper functions on such aggregates can't express that they only borrow part of the structure. This can usually be worked around by changing data types, but the fact that I have to do that is annoying. It feels like having to solve a puzzle that isn't part of the inherent complexity of the underlying problem that the code is trying to solve.

And unlike, say, circular references or graph structures in general, where it's clear that expressing and proving the sort of useful lifetime facts that developers might intuitively reason about quickly becomes intractable, improving the support for partial borrows feels like it should be a tractable problem.

21 Jan 2023 4:19pm GMT

18 Jan 2023

feedPlanet Twisted

Hynek Schlawack: Why I Like Nox

Ever since I got involved with open-source Python projects, tox has been vital for testing packages across Python versions (and other factors). However, lately, I've been increasingly using Nox for my projects instead. Since I've been asked why repeatedly, I'll sum up my thoughts.

18 Jan 2023 12:00pm GMT

15 Jan 2023

feedFOSDEM 2023

Call for volunteers

With FOSDEM just around the corner, it is time for us to enlist your help. Every year, an enthusiastic band of volunteers make FOSDEM happen and make it a fun and safe place for all our attendees. We could not do this without you. This year we again need as many hands as possible, with the buildup (starting Friday at noon), heralding during the conference, and cleanup (on Sunday evening). No need to worry about missing lunch. Food will be provided. Would you like to be part of the team that makes FOSDEM tick? Sign up here! You could…

15 Jan 2023 11:00pm GMT

13 Jan 2023

feedPlanet Gentoo

FOSDEM 2023

FOSDEM logo

Finally, after a long break, it's FOSDEM time again! Join us at Université Libre de Bruxelles, Campus du Solbosch, in Brussels, Belgium. This year's FOSDEM 2023 will be held on February 4th and 5th.

Our developers will be happy to greet all open source enthusiasts at our Gentoo stand in building H, level 1! Visit this year's wiki page to see who's coming.

13 Jan 2023 6:00am GMT

12 Jan 2023

feedPlanet PHP

Knex (with MySQL) had a very scary SQL injection

Knex released a new version this week (2.4.0). Before this version, Knex had a pretty scary SQL injection. Knex currently has 1.3 million weekly downloads and is quite popular.

The security bug is probably one of the worst SQL injections I've seen in recent memory, especially considering the scope and popularity.

If you want to get straight to the details:

My understanding of this bug

If I understand the vulnerability correctly, I feel this can impact a very large number of sites using Knex. Even more so if you use Express.

I'll try to explain through a simple example. Say you have a MySQL table structured like this:

CREATE TABLE `users` (
  `id` int NOT NULL AUTO_INCREMENT,
  `name` varchar(100) DEFAULT NULL,
  PRIMARY KEY (`id`)
)

And you have a query that does a SELECT using Knex:

const lookupId = 2;

const result = await knex('users')
  .select(['id', 'name'])
  .where({
    id: lookupId
  });

You'd expect the query to end up roughly like this:

SELECT `id`, `name` FROM `users` WHERE `id` = 2

The issue arises when the user controls the value of lookupId. If they can somehow turn it into an object like this:

const lookupId = {
  name: 'foo'
}

You might expect an error from Knex, but instead it generates the following query:

SELECT `id`, `name` FROM `users` WHERE `id` = `name` = 'foo'

This query is not invalid. I don't fully understand MySQL's behavior here, but it causes the WHERE clause to be effectively ignored, and the result is equivalent to:

SELECT `id`

Truncated by Planet PHP, read more at the original (another 8765 bytes)

12 Jan 2023 9:31pm GMT

10 Jan 2023

feedPlanet PHP

Xdebug Update: December 2022

Xdebug Update: December 2022

In this monthly update I explain what happened with Xdebug development in this past month. These are normally published on the first Tuesday on or after the 5th of each month.

Patreon and GitHub supporters will get it earlier, around the first of each month.

You can become a patron or support me through GitHub Sponsors. I am currently 45% towards my $2,500 per month goal. If you are leading a team or company, then it is also possible to support Xdebug through a subscription.

In the last month, I spent 25 hours on Xdebug, with 21 hours funded. Sponsorships are continuing to decline, which makes it harder for me to dedicate time for maintenance and development.

Xdebug 3.2

Xdebug 3.2.0 was released at the start of December, after fixing a last crash with code coverage, to coincide with the release of PHP 8.2, which it supports. Since then a few bugs have been reported, which I have started to triage. A particularly complicated one seems to occur on Windows with PHP loaded in Apache, where suddenly all modes are turned on without having been activated through the xdebug.mode setting. This is a complicated issue that I hope to figure out and fix during January, resulting in the first patch release later this month.

Plans for the Year

Beyond that, I have spent some time away from the computer in the Dutch countryside to recharge my batteries. I hope to focus on redoing the profiler this year, as well as getting the "recorder" feature to a releasable state.

On the smaller feature side, I hope to implement file/path mappings on the Xdebug side to aid the debugging of generated files containing PHP code.

Xdebug Cloud

Xdebug Cloud is the Proxy As A Service platform that allows debugging in more scenarios where it is hard, or impossible, to have Xdebug make a connection to the IDE. It continues to operate as a Beta release.

Packages start at £49/month, and I have recently introduced a package for larger companies. This has a larger initial set of tokens, and discounted extra tokens.

If you want to be kept up to date with Xdebug Cloud, please sign up to the mailing list, which I will use to send out an update no more than once a month.

Xdebug Videos

I have published two new videos:

I have continued writing scripts for videos about Xdebug 3.2's features, and am also intending to make a video about "Running Xdebug in Production", as well as one on using the updated "xdebug.client_discovery_header" feature (from Xdebug 3.1).

You can find all previous videos on my YouTube channel.

Business Supporter Scheme and Funding

In December, no new business supporters signed up.

If you, or your company, would also like to support Xdebug, head over to the support page!

Besides business support, I also maintain a Patreon page, a profile on GitHub sponsors, as well as an OpenCollective organisation.

Become a Patron!

10 Jan 2023 9:06am GMT

09 Jan 2023

feedPlanet Twisted

Hynek Schlawack: Surprising Consequences of macOS’s Environment Variable Sanitization

Or: Why does DYLD_LIBRARY_PATH keep disappearing!?

09 Jan 2023 8:00am GMT

30 Dec 2022

feedPlanet Gentoo

Src_Snapshot

Prototype

Recently, while browsing the Alpine git repo, I noticed they have a function called snapshot; see https://git.alpinelinux.org/aports/tree/testing/dart/APKBUILD#n45. I am not 100% sure how it works, but a wild guess is that the developers can run that function to fetch the sources and maybe later upload them to the Alpine repo or some sort of (cloud?) storage.

In Portage there exists a pkg_config function used to run miscellaneous configuration for packages. The only major difference between src_snapshot and that would of course be that users would never run the snapshot phase.

Sandbox

Probably only the network sandbox would have to be lifted… to fetch the sources, of course.

But a few (at least one?) special directories and variables would also be useful.

30 Dec 2022 2:03am GMT

07 Dec 2022

feedFOSDEM 2023

Accepted stands for FOSDEM 2023

With great pleasure (and an apology for missing the deadline by seven days), we can announce the following projects will have a stand at FOSDEM 2023 (4th & 5th of February). This is the list of stands (in no particular order): Eclipse Foundation, FOSSASIA, Matrix.org Foundation, Software Freedom Conservancy, CentOS and RDO, FreeBSD Project, Free Software Foundation Europe, Realtime Lounge, Free Culture Podcasts, Open Culture Foundation + COSCUP, Open Toolchain Foundation, Open UK and Book Signing Stand, The Apache Software Foundation, The Perl/Raku Foundation, PostgreSQL, GNOME, KDE, GitLab, Homebrew, Infobooth on amateur radio (hamradio), IsardVDI, Jenkins, Fluence, La Contre-Voie…

07 Dec 2022 11:00pm GMT

06 Dec 2022

feedPlanet Plone - Where Developers And Integrators Write

PLONE.ORG: Plone 6 RC 2 Released

Good news: the second and last release candidate of Plone 6 has arrived! The release manager for this version is Maurits van Rees (https://github.com/mauritsvanrees).

Thank you to everyone involved!

Read more about the upcoming Plone 6 and Plone 6 FAQ.

Highlights

Major changes since 6.0.0rc1:

  • None really. Lots of packages have gotten a final release, losing their alpha, beta or release candidate markers.

We are in a bugfix-only mode. An upgrade from rc1 to rc2 should be painless and is recommended for everyone.

Volto frontend

The default frontend for Plone 6 is Volto. The latest release is 16.3.0. See the changelog.
Note that this is a JavaScript frontend that you need to run in a separate process with NodeJS.
The ClassicUI is still available when you only run the Python process.

Python compatibility

This release supports Python 3.8, 3.9, 3.10, and 3.11.

Installation

For installation instructions, see the documentation.
This documentation is under development, but this should get you up and running. No worries.
We expect to switch https://docs.plone.org to show the Plone 6 documentation sometime this week.

Final release date: December 12, 2022

Unless blocking issues are found that require more work, we expect to release Plone 6.0.0 final on December 12, 2022.

If you find any issues, blocking or not, please report them in the main issue tracker.

Try Plone 6!

For installation instructions, see the documentation.

See Plone 6 in action at https://6.demo.plone.org/

Read more at the community forum:
https://community.plone.org/t/plone-6-0-0rc2-released/15945

06 Dec 2022 3:45pm GMT

27 Nov 2022

feedPlanet Plone - Where Developers And Integrators Write

PLONE.ORG: Volto 16 Released - Ready for Plone 6

The Plone community is happy to announce that Volto 16 is ready and shipped! This is the final release for the upcoming Plone 6 and a major achievement from the community. Thank you everyone involved!

Volto is Plone's snappy, modern React front end powered by the RestAPI, and the default for Plone 6.

Volto 16

Volto 16 contains tons of new features, bugfixes and tweaks. Here is a sneak peek at some of them, and you can find the full release notes on GitHub: https://github.com/plone/volto/releases/tag/16.0.0

And to get the latest version go to https://github.com/plone/volto/releases/tag/16.2.0

Top features in Volto 16

  • The new Slate editor - improved inline editing and more possibilities
  • Content rules - a whole engine for automating processes based on events on the site
  • Undo - site admins can see and undo transactions
  • Bugfixes and tweaks - it is shiny and polished
  • New technology -

More feature highlights

Volto grid block

  • Added default placeholder for videos to embed them more lightly @giuliaghisini
  • Added new Block Style Wrapper. This implementation is marked as experimental during Volto 16 alpha period. The components, API and the styling are subject to change without issuing a breaking change. You can start using it in your projects and add-ons, but taking this into account. See documentation for more information. @sneridagh
  • Add default widget views for all types of fields and improve the DefaultView @ionlizarazu
  • added configurable identifier field for password reset in config.js. @giuliaghisini
  • Add expandToBackendURL helper @sneridagh
  • added 'show total results' option in Search block configuration. @giuliaghisini
  • Added viewableInBrowserObjects setting to use as an alternative to downloadableObjects, if you want to view files in the browser instead of downloading them. @giuliaghisini
  • Disable already chosen criteria in querystring widget @kreafox
  • Added X-Forwarded-* headers to superagent requests. @mamico
  • Updated Brazilian Portuguese translation @ericof
  • Forward HTTP Range headers to the backend. @mamico
  • Add default value to color picker, if default is present in the widget schema. @sneridagh
  • Inject the classnames of the StyleWrapper into the main edit wrapper (it was wrapping directly the Edit component before). This way, the flexibility is bigger and you can act upon the whole edit container and artifacts (handlers, etc) @sneridagh
  • Refactor image block: make it schema extensible @nileshgulia1 @sneridagh
  • Add control panel via config.settings @ksuess #3426
  • Add noindex metadata tag @steffenri
  • Adding Schema for Maps Block in Sidebar @iRohitSingh
  • Add a Pluggable to the sharing page @JeffersonBledsoe #3372
  • Add listing variation schemaEnhancer to the search block schema @ionlizarazu
  • And much much more at https://github.com/plone/volto/releases/tag/16.0.0

Breaking changes

Content rules demo

  • Deprecate NodeJS 12 since it's out of LTS since April 30, 2022 @sneridagh
  • Move all cypress actions to the main Makefile, providing better meaningful names. Remove them from package.json script section. @sneridagh
  • Remove div as the default as prop from RenderBlocks. Now the default is a React.Fragment instead. This could lead to CSS inconsistencies if your styles relied on that div, especially in custom add-ons. In order to avoid them, always set the as property in your add-ons. @sneridagh
  • Removed date-fns from dependencies, this was in the build because Cypress depended on it. After the Cypress upgrade it no longer depends on it. If your project still depends on it, add it as a dependency of your project. @sneridagh
  • Removed all usage of date-fns from core. @sneridagh
  • Rename src/components/manage/Widgets/ColorPicker.jsx component to src/components/manage/Widgets/ColorPickerWidget.jsx @sneridagh
  • Remove the style wrapper around the <Block /> component in Edit mode, moved to the main edit wrapper @sneridagh
  • New cloneDeepSchema helper @sneridagh
  • Action listUsers to be called with an Object. Distinguish between search for id or search for fullname, email, username @ksuess
  • Integrate volto-state add-on. @tiberiuichim @razvanMiu @eea
  • Staticize Poppins font to be compliant with EU privacy. Import from GoogleFont is disabled in site.variables. @giuliaghisini
  • Remove the callout button (the one with the megaphone icon) from the slate toolbar since it has the same styling as blockquote. If you need it anyway, you can bring it back in your addon. @sneridagh
  • Using volto-slate Headline / Subheadline buttons strips all elements in the selection @tiberiuichim
  • Use Cypress 10.3.0 (migrate from 9.x.x). Cypress 10 has some interesting goodies, the main one being native support for Apple Silicon computers. See https://docs.voltocms.com/upgrade-guide/ for more information. @sneridagh
  • The complete configuration registry is passed to the add-ons and the project configuration pipeline @sneridagh
  • And much much more at https://github.com/plone/volto/releases/tag/16.0.0
See https://6.dev-docs.plone.org/volto/upgrade-guide/index.html for more information about all the breaking changes.


Bugfix

  • Fix Search page visit crashes /contents view @dobri1408
  • Fix sidebar full size bottom opacity on edit page when sidebar is collapsed @ichim-david
  • Fix toolbar bottom opacity on edit page when toolbar is collapsed @ichim-david
  • Fix content view regression, height issue @danielamormocea
  • Fixed secure cookie option. @giuliaghisini
  • Changed addon order in addon controlpanel to mimic Classic UI @erral
  • Fixed error when loading content in a language for which a Volto translation is not available. @davisagli
  • Fix for clipped dropdown menus when the table has few or no records in Contents view @mihaislobozeanu
  • Fixed viewing video lists from YouTube in the Video block. @giuliaghisini
  • Fixed ICS URL in event view in seamless mode @sneridagh
  • Fix withStylingSchemaEnhancer enhancer mechanism @sneridagh
  • Add correct query parameters to the redirect @robgietema
  • Fix RenderBlocks: path @ksuess
  • Fix field id creation in dexterity control panel to have slugified id @erral
  • Changed to get intl.locale always from state @ionlizarazu
  • Fix regression, compound lang names (eg. pt-BR) no longer working @sneridagh
  • fix TokenWidget choices when editing a recently created content. @giuliaghisini
  • Fix color picker defaults implementation #2 @sneridagh
  • Enable default color in the backgroundColor default StyleWrapper field, which wasn't in sync with the default value setting @sneridagh
  • Fix Block style wrapper: Cannot read properties of undefined (reading 'toString') @avoinea #3410
  • Fix schema when content contains lock information. @giuliaghisini
  • Don't render junk when no facets are added to the search block @tiberiuichim
  • Fix visibility of toolbar workflow dropdown for more states than fit in .toolbar-content. @ksuess
  • Fix the video block for anonymous user @iFlameing
  • And much much more at https://github.com/plone/volto/releases/tag/16.0.0


Internal

  • Improve Cypress integration, using Cypress official Github Action. Improve some flaky tests that showed up, and were known as problematic. Refactor and rename all the Github actions giving them meaningful names, and group them by type. Enable Cypress Dashboard for Volto. @sneridagh
  • Stop using xmlrpc library for issuing the setup/teardown in core, use a cy.request instead. @sneridagh
  • Added Cypress environment variables for adjusting the backend URL of commands @JeffersonBledsoe #3271
  • Reintroduce Plone 6 acceptance tests using the latest plone.app.robotframework 2.0.0a6 specific Volto fixture. @datakurre @ericof @sneridagh
  • Upgrade all tests to use plone.app.robotframework 2.0.0a6 @sneridagh
  • Upgrade Sentry to latest version because of #3346 @sneridagh
  • Update Cypress to version 9.6.1 @sneridagh
  • Missing change from the last breaking change (Remove the style wrapper around the <Block /> component in Edit mode, moved to the main edit wrapper). Now, really move it to the main edit wrapper @sneridagh
  • Fix warning because missing key in VersionOverview component @sneridagh
  • Mock all loadable libraries. @mamico
  • And much much more at https://github.com/plone/volto/releases/tag/16.0.0

Documentation

  • Move Cypress documentation from README.md to the docs. Improve the docs with the new Makefile commands.
  • Improve English grammar and syntax in backend docs. @stevepiercy
  • Fix JSX syntax highlighting. Remove duplicate heading. @stevepiercy
  • fix make task docs-linkcheckbroken if grep has exit code 1 (no lines found)
  • Updated simple.md @MdSahil-oss
  • Fix indentation in nginx configuration in simple.md @stevepiercy
  • Remove sphinx_sitemap configuration because Volto's docs are now imported into the main docs, making this setting unnecessary. @stevepiercy
  • Set the ogp_site_url to main docs, instead of training. @stevepiercy
  • aria-* attributes are now parsed correctly by jsx-lexer 2.0. @stevepiercy
  • volto-slate documentation @nileshgulia1
  • And much much more at https://github.com/plone/volto/releases/tag/16.0.0

Thank you!

We would like to thank all the people involved in creating Volto 16. Over 40 people have contributed code, documentation and other effort for this. It is amazing how much we were able to accomplish as a community during the last few months.

See https://6.dev-docs.plone.org/volto/upgrade-guide/index.html for more information.

What's Next - Plone 6 final release

Where do we go from here? Plone 6! The only major features that were still missing were content rules and the new Slate editor, and both are now included in Volto 16.

So the work is not over yet. We still need helping hands and contributors to continue the effort to make Plone 6 a reality. Everybody is welcome!


Try Plone 6 today

Feel free to try out Plone 6 with Volto 16:

27 Nov 2022 5:40pm GMT

PLONE.ORG: Plone 6 RC 1 Released

Good news: the first release candidate of Plone 6 has arrived! The release manager for this version is Maurits van Rees (https://github.com/mauritsvanrees).

Thank you to everyone involved!

Read more about the upcoming Plone 6 and Plone 6 FAQ.

Installation

Highlights

Major changes since 6.0.0b3:

  • Various packages: updates to support Python 3.11. See below.

  • Zope 5.7: This feature release adds full support for Python 3.11 and a ZPublisher encoder for inputting JSON data.
    See the Zope changelog for details.

  • zc.buildout: After long development this has a final release. We use version 3.0.1, which now works nicely with the latest pip (22.3.1).
    Note that it is usually fine if you use different versions of zc.buildout, pip, setuptools, and wheel. We just pin versions that we know work at the moment.

  • plone.restapi:

    • Added @upgrade endpoint to preview or run an upgrade of a Plone instance.

    • Added @rules endpoint with GET/POST/DELETE/PATCH.

    • Added link integrity support for slate blocks.

  • plone.scale: Add support for animated GIFs.

Volto 16 released

The default frontend for Plone 6 is Volto. Latest release is 16.2 and Volto 16 is the final release needed for Plone 6.
https://plone.org/news/2022/volto-16-released

Python compatibility

This release supports Python 3.8, 3.9, 3.10, and 3.11.

Python 3.11.0 was released in October and we are proud to already be able to say Plone supports it! All tests pass.
Python 3.11 is expected to be faster than earlier Python versions.
Note that not all add-ons may work yet on 3.11, but in most cases the needed changes should be small.

A big thank you for this goes to the Zope part of the Plone community, especially Jens Vagelpohl and Michael Howitz.

Read more on https://plone.org/download/releases/6.0.0rc1

Installation

For installation instructions, see the documentation.
This documentation is under development, but this should get you up and running. No worries.

Help wanted

Plone 6 final needs just the final push! Wondering how you can help?

Plone 6

The Plone 6 editing experience combines the robust usability of Plone with a blazingly fast JavaScript frontend.

Plone 6 editor

Try Plone 6 Beta!

For installation instructions, see the documentation.

See Plone 6 in action at https://6.demo.plone.org/

Read more at the community forum:
https://community.plone.org/t/plone-6-0-0rc1-released/15885

27 Nov 2022 2:55pm GMT

24 Nov 2022

feedMonologue

Ivan Zlatev: 13 Emotional Resilience Challenges in Engineering Leadership and Management + Tips

This is a rather long post, as it captures the 13 days of lessons learnt and tips I've been sharing on LinkedIn. I'm definitely interested to hear others' experiences and thoughts, so do drop me a comment below if you feel like it.

The content here is predominantly aimed at new managers and leaders as well as new managers of managers, but hopefully others can benefit too.

1. Feedback loops become slower and it takes longer to see the impact of your decisions and actions, which can create self-doubt and insecurity creep

2. Context switching increases by factor of N in terms of volume, frequency and types, which can be disorienting and overwhelming

3. You may feel less part of a team, which will affect your sense of belonging and can leave you feeling lonelier than before

4. The volume of information that you have exposure to may increase dramatically in both depth and breadth, which will feel overwhelming at times

5. The general level of uncertainty and ambiguity will rapidly increase, which may leave you feeling anxious, scared and out of control

6. The mistakes that you make will have more impact than before, which can increase the level of fear, anxiety and internal pressure to get things "right"

7. Your sense of achievement and impact may take a hit, which can create dissatisfaction creep and can affect your confidence.

8. Your sense of expertise will erode and morph over time, which can challenge your sense of competency and result in imposter syndrome

9. Having an increased scope of accountability can add more internal and external pressure, increasing your stress levels

10. Supporting (and working with) an increased number of people (direct reports, skip-level reports, etc) will have a toll on you emotionally

11. It can be hard to say no and/or disappoint people

12. When people you manage leave, it can create self-doubt and insecurity

13. You will be swimming in new and unfamiliar territory a lot of the time and this can make you feel vulnerable

24 Nov 2022 12:00am GMT

12 Nov 2022

feedFOSDEM 2023

Presentations - Call for Participation

We now invite proposals for presentations. FOSDEM offers open source and free software developers a place to meet, share ideas and collaborate. Renowned for being highly developer-oriented, the event brings together some 8000+ geeks from all over the world. The twenty-third edition will take place on Saturday 4th and Sunday 5th February 2023 at the usual location, ULB Campus Solbosch in Brussels. It will also be possible to participate online. Developer Rooms: For more details about the Developer Rooms, please refer to the Calls for Papers that are being added to https://fosdem.org/2023/news/2022-11-07-accepted-developer-rooms/ when they are issued. Main Tracks: Main…

12 Nov 2022 11:00pm GMT

05 Oct 2022

feedMonologue

Jim Purbrick: How (Not) To Build a Metaverse

Earlier in the year I helped Josh Sanburn and his team put together a podcast series for the Wall Street Journal on building Second Life, called "How To Build a Metaverse", which I'm now really enjoying. It's great to hear all of the amazing stories about the origin.

05 Oct 2022 9:27pm GMT

18 Sep 2022

feedPlanet Openmoko

Harald "LaF0rge" Welte: Deployment of future community TDMoIP hub

I've mentioned some of my various retronetworking projects in some past blog posts. One of those projects is Osmocom Community TDM over IP (OCTOI). During the past 5 or so months, we have been using a number of GPS-synchronized open source icE1usb devices interconnected by a new, efficient but still transparent TDMoIP protocol in order to run a distributed TDM/PDH network. This network is currently only used to provide ISDN services to retronetworking enthusiasts, but other uses like frame relay have also been validated.

So far, the central hub of this OCTOI network has been operating in the basement of my home, behind a consumer-grade DOCSIS cable modem connection. Given that TDMoIP is relatively sensitive to packet loss, this has been sub-optimal.

Luckily some of my old friends at noris.net have agreed to host a new OCTOI hub free of charge in one of their ultra-reliable co-location data centres. I have already been hosting some other machines there for 20+ years, and noris.net is a good fit given that they were - in their early days as an ISP - the driving force in the early 90s behind one of the Linux kernel ISDN stacks called u-isdn. So after many decades, ISDN returns to them in a very different way.

Side note: In case you're curious, a reconstructed partial release history of the u-isdn code can be found on gitea.osmocom.org

But I digress. So today, there was the installation of this new OCTOI hub setup. It has been prepared for several weeks in advance, and the hub contains two circuit boards designed entirely for this use case. The most difficult challenge was the fact that this data centre has no existing GPS RF distribution, and the rack is ~ 100m of CAT5 cable (no fiber!) away from the roof. So we faced the challenge of passing the 1PPS (1 pulse per second) signal reliably through several steps of lightning/over-voltage protection into the icE1usb, whose internal GPS-DO serves as a grandmaster clock for the TDM network.

The equipment deployed in this installation currently contains:

For more details, see this wiki page and this ticket

Now that the physical deployment has been made, the next steps will be to migrate all the TDMoIP links from the existing user base over to the new hub. We hope the reliability and performance will be much better than behind DOCSIS.

In any case, this new setup for sure has a lot of capacity to connect many more users to this network. At this point we can still only offer E1 PRI interfaces. I expect that at some point during the coming winter the project for remote TDMoIP BRI (S/T, S0-Bus) connectivity will become available.

Acknowledgements

I'd like to thank anyone helping this effort, specifically:

  • Sylvain "tnt" Munaut for his work on the RS422 interface board (+ gateware/firmware)
  • noris.net for sponsoring the co-location
  • sysmocom for sponsoring the EPYC server hardware

18 Sep 2022 10:00pm GMT

08 Sep 2022

feedPlanet Openmoko

Harald "LaF0rge" Welte: Progress on the ITU-T V5 access network front

Almost one year after my post regarding first steps towards a V5 implementation, some friends and I were finally able to visit Wobcom, a small German city carrier, and pick up a lot of decommissioned POTS/ISDN/PDH/SDH equipment, primarily V5 access networks.

This means that a number of retronetworking enthusiasts now have a chance to play with Siemens Fastlink, Nokia EKSOS and DeTeWe ALIAN access networks/multiplexers.

My primary interest is in the Nokia EKSOS, which looks like a rather easy, low-complexity target. As one of the first steps, I took PCB photographs of the various modules/cards in the shelf, took note of the main chip designations and started to search for the related data sheets.

The results can be found in the Osmocom retronetworking wiki, with https://osmocom.org/projects/retronetworking/wiki/Nokia_EKSOS being the main entry page, and sub-pages about

In short: Unsurprisingly, a lot of Infineon analog and digital ICs for the POTS and ISDN ports, as well as a number of Motorola M68k based QUICC32 microprocessors and several unknown ASICs.

So with V5 hardware at my disposal, I've slowly re-started my efforts to implement the LE (local exchange) side of the V5 protocol stack, with the goal of eventually being able to interface those V5 ANs with the Osmocom Community TDM over IP network. Once that is in place, we should also be able to offer real ISDN Uk0 (BRI) and POTS lines at retrocomputing events or hacker camps in the coming years.

08 Sep 2022 10:00pm GMT

Harald "LaF0rge" Welte: Clock sync trouble with Digium cards and timing cables

If you have ever worked with Digium (now part of Sangoma) digital telephony interface cards such as the TE110/410/420/820 (single to octal E1/T1/J1 PRI cards), you will probably have seen that they always have a timing connector, where the timing information can be passed from one card to another.

In PDH/ISDN (or even SDH) networks, it is very important to have a synchronized clock across the network. If the clocks are drifting, there will be underruns or overruns, with associated phase jumps that are particularly dangerous when analog modem calls are transported.

In traditional ISDN use cases, the clock is always provided by the network operator, and any customer/user side equipment is expected to synchronize to that clock.

So this Digium timing cable is needed in applications where you have more PRI lines than one card can provide, but only a subset of your lines (spans) is connected to the public operator. The timing cable should make sure that the clock received on one port from the public operator is used as the transmit bit-clock on all of the other ports, no matter which card they are on.

Unfortunately this decades-old Digium timing cable approach seems to suffer from some problems.

bursty bit clock changes until link is up

The first problem is that the downstream port transmit bit clock was jumping around in bursts every two or so seconds. You can see an oscillogram of the E1 master signal (yellow) received by one TE820 card and the transmit of the slave ports on the other card at https://people.osmocom.org/laforge/photos/te820_timingcable_problem.mp4

As you can see, for some seconds the two clocks seem to be in perfect lock/sync, but in between there are periods of immense clock drift.

What I'd have expected is the behavior that can be seen at https://people.osmocom.org/laforge/photos/te820_notimingcable_loopback.mp4 - which shows a similar setup but without the use of a timing cable: Both the master clock input and the clock output were connected on the same TE820 card.

As I found out much later, this problem only occurs until any of the downstream/slave ports is fully OK/GREEN.

This is surprising, as any other E1 equipment I've seen always transmits at a constant bit clock, irrespective of whether there's any signal in the opposite direction, and irrespective of whether any other ports are up/aligned or not.

But ok, once you adjust your expectations to this Digium peculiarity, you can actually proceed.

clock drift between master and slave cards

Once any of the spans of a slave card on the timing bus is fully aligned, the transmit bit clocks of all of its ports appear to be in sync/lock - yay - but unfortunately only at first glance.

When looking at it for more than a few seconds, one can see a slow, continuous drift of the slave bit clocks compared to the master :(

Some initial measurements show that the clock of the slave card of the timing cable is drifting at about 12.5 ppb (parts per billion) when compared against the master clock reference.

This is rather disappointing, given that the whole point of a timing cable is to ensure you have one reference clock with all signals locked to it.

The work-around

If you are willing to sacrifice one port (span) of each card, you can work around that slow-clock-drift issue by connecting an external loopback cable. So the master card is configured to use the clock provided by the upstream provider. Its other ports (spans) will transmit at the exact recovered clock rate with no drift. You can use any of those ports to provide the clock reference to a port on the slave card using an external loopback cable.

In this setup, your slave card[s] will have perfect bit clock sync/lock.

It's just rather sad that you need to sacrifice ports just to achieve proper clock sync - something that the timing connectors and cables claim to do, but in reality don't achieve, at least not in my setup with the most modern and high-end octal-port PCIe cards (TE820).

08 Sep 2022 10:00pm GMT