02 Feb 2023

Planet Grep

Xavier Mertens: This Blog Has 20 Years!

Twenty years ago… I decided to start a blog to share my thoughts! That's why I called it "/dev/random". How was the Internet twenty years ago? Well, there were good things and bad ones…

Over the years, the blog content evolved, and I wrote a lot of technical stuff related to my job, experiences, tools, etc. Then I had the opportunity to attend a lot of security conferences and started to write wrap-ups. With COVID, there were fewer conferences and no more reviews. For the last few months, I have mainly been writing diaries for the Internet Storm Center; therefore, I publish less personal stuff here and mostly relay the content published on the ISC website. If you have read my stuff for a long time (or even if you are a newcomer), thank you very much!

A few stats about the site:

I know that these numbers might seem low for many of you but I'm proud of them!

The post This Blog Has 20 Years! appeared first on /dev/random.

02 Feb 2023 8:22pm GMT

Koen Vervloesem: How to stop brltty from claiming your USB UART interface on Linux

Today I wanted to program an ESP32 development board, the ESP-Pico-Kit v4, but when I connected it to my computer's USB port, the serial connection didn't appear in Linux. Suspecting a hardware issue, I tried another ESP32 board, the ESP32-DevKitC v4, but this didn't appear either, so then I tried another one, a NodeMCU ESP8266 board, which had the same problem. Time to investigate...

The dmesg output looked suspicious:

[14965.786079] usb 1-1: new full-speed USB device number 5 using xhci_hcd
[14965.939902] usb 1-1: New USB device found, idVendor=10c4, idProduct=ea60, bcdDevice= 1.00
[14965.939915] usb 1-1: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[14965.939920] usb 1-1: Product: CP2102 USB to UART Bridge Controller
[14965.939925] usb 1-1: Manufacturer: Silicon Labs
[14965.939929] usb 1-1: SerialNumber: 0001
[14966.023629] usbcore: registered new interface driver usbserial_generic
[14966.023646] usbserial: USB Serial support registered for generic
[14966.026835] usbcore: registered new interface driver cp210x
[14966.026849] usbserial: USB Serial support registered for cp210x
[14966.026881] cp210x 1-1:1.0: cp210x converter detected
[14966.031460] usb 1-1: cp210x converter now attached to ttyUSB0
[14966.090714] input: PC Speaker as /devices/platform/pcspkr/input/input18
[14966.613388] input: BRLTTY 6.4 Linux Screen Driver Keyboard as /devices/virtual/input/input19
[14966.752131] usb 1-1: usbfs: interface 0 claimed by cp210x while 'brltty' sets config #1
[14966.753382] cp210x ttyUSB0: cp210x converter now disconnected from ttyUSB0
[14966.754671] cp210x 1-1:1.0: device disconnected

So the ESP32 board, with a Silicon Labs CP2102 USB to UART controller chip, was recognized and attached to the /dev/ttyUSB0 device, as it normally should be. But then suddenly the brltty command intervened and disconnected the serial device.

I looked up what brltty does: apparently it is a system daemon that provides access to the console for a blind person using a braille display. When looking into the contents of the package on my Ubuntu 22.04 system (with dpkg -L brltty), I saw a udev rules file, so I grepped for the product ID of my USB device in the file:

$ grep ea60 /lib/udev/rules.d/85-brltty.rules
ENV{PRODUCT}=="10c4/ea60/*", ATTRS{manufacturer}=="Silicon Labs", ENV{BRLTTY_BRAILLE_DRIVER}="sk", GOTO="brltty_usb_run"

With a bit more context, the file shows:

# Device: 10C4:EA60
# Generic Identifier
# Vendor: Cygnal Integrated Products, Inc.
# Product: CP210x UART Bridge / myAVR mySmartUSB light
# BrailleMemo [Pocket]
# Seika [Braille Display]
ENV{PRODUCT}=="10c4/ea60/*", ATTRS{manufacturer}=="Silicon Labs", ENV{BRLTTY_BRAILLE_DRIVER}="sk", GOTO="brltty_usb_run"

So apparently there's a Braille display with the same CP210x USB to UART controller as a lot of microcontroller development boards have. And because this udev rule claims the interface for the brltty daemon, UART communication with all these development boards isn't possible anymore.

As I'm not using these Braille displays, the fix for me was easy: just find the systemd unit that loads these rules, mask and stop it.

$ systemctl list-units | grep brltty
brltty-udev.service loaded active running Braille Device Support
$ sudo systemctl mask brltty-udev.service
Created symlink /etc/systemd/system/brltty-udev.service → /dev/null.
$ sudo systemctl stop brltty-udev.service

After this, I was able to use the serial interface again on all my development boards.
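If you prefer not to touch the systemd unit, another approach (a sketch I haven't tested) is to override the packaged udev rules locally: a file with the same name in /etc/udev/rules.d takes precedence over the one in /lib/udev/rules.d, so masking it disables brltty's USB autodetection without masking the daemon itself. Note this is only appropriate if, like me, you don't use a braille display.

$ sudo ln -s /dev/null /etc/udev/rules.d/85-brltty.rules   # mask the packaged rules file
$ sudo udevadm control --reload-rules                      # make udev pick up the change

Then unplug and replug the development board.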

02 Feb 2023 8:22pm GMT

Frederic Descamps: MySQL 8.0.32: thank you for the contributions

The latest MySQL release has been published on January 17th, 2023. MySQL 8.0.32 contains some new features and bug fixes. As usual, it also contains contributions from our great MySQL Community.

I would like to thank all contributors on behalf of the entire Oracle MySQL team!

MySQL 8.0.32 contains patches from Facebook/Meta, Alexander Reinert, Luke Weber, Vilnis Termanis, Naoki Someya, Maxim Masiutin, Casa Zhang from Tencent, Jared Lundell, Zhe Huang, Rahul Malik from Percona, Andrey Turbanov, Dimitry Kudryavtsev, Marcelo Altmann from Percona, Sander van de Graaf, Kamil Holubicki from Percona, Laurynas Biveinis, Seongman Yang, Yamasaki Tadashi, Octavio Valle, Zhao Rong, Henning Pöttker, Gabrielle Gervasi and Nico Pay.

Here is the list of the above contributions and related bugs. We can see that for this release, our connectors received several contributions, which is always a good sign of their increasing popularity.

We can also notice the return of a major contributor: Laurynas Biveinis!

Connectors

Connector / NET

Connector / Python

Connector / J

Connector / C++

Clients & API

Replication

InnoDB and Clone

Optimizer

If you have patches and you also want to be part of the MySQL Contributors, it's easy: you can send pull requests via MySQL's GitHub repositories or submit your patches on Bugs MySQL (signing the Oracle Contributor Agreement is required).

Thank you again to all our contributors!

02 Feb 2023 8:22pm GMT

LXer Linux News

useradd Vs. adduser

Linux is a popular open-source operating system that runs on a variety of hardware platforms, including desktops, servers, and smartphones. One of the key features of Linux is the command-line interface (CLI), which allows users to perform a wide range of tasks using text-based commands.
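As a quick, hedged illustration of the difference between the two commands (user names are made up; adduser behaves this way on Debian/Ubuntu, where it is an interactive wrapper around the lower-level tools):

# Low-level: useradd only creates the account; home directory and shell are requested explicitly
$ sudo useradd -m -s /bin/bash alice

# Higher-level: adduser creates the home directory, copies skeleton files,
# and interactively prompts for a password and user details
$ sudo adduser bob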

02 Feb 2023 8:21pm GMT

Fedora People

Richard W.M. Jones: Fedora now has frame pointers

Fedora now has frame pointers. I don't want to dwell on the how of this, it was a somewhat controversial decision and you can read all about it here. But I do want to say a bit about the why, and how it makes performance analysis so much easier.

Recently we've been looking at a performance problem in qemu. To try to understand this I've been looking at FlameGraphs all day, like this one:

[FlameGraph image]

FlameGraphs rely on the Linux tool perf being able to collect stack traces. The stack traces start in the kernel and go up through userspace often for dozens or even hundreds of frames. They must be collected quickly (my 1 minute long trace has nearly half a million samples) and accurately.

Perf (or actually I think it's some component of the kernel) has various methods to unwind the stack. It can use frame pointers, kernel ORC information or DWARF debug information. The thing is that DWARF unwinding (the only userspace option that doesn't use frame pointers) is really unreliable. In fact it has such serious problems that it's not that usable at all.
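As a rough illustration (not from the original article), this is roughly how the two userspace unwinding modes are selected with perf, and how the samples become a FlameGraph; the stackcollapse-perf.pl and flamegraph.pl scripts come from Brendan Gregg's FlameGraph repository and are assumed to be on PATH:

# Frame-pointer based unwinding (cheap and reliable, if userspace was built with frame pointers)
$ perf record -F 99 -a --call-graph fp -- sleep 60

# DWARF based unwinding (copies stack snapshots; slower and, as described above, often unreliable)
$ perf record -F 99 -a --call-graph dwarf -- sleep 60

# Turn the samples into a FlameGraph
$ perf script | stackcollapse-perf.pl | flamegraph.pl > flamegraph.svg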

For example, here is a broken stack trace from Fedora 37 (with full debuginfo installed):

[FlameGraph image: broken stack trace on Fedora 37]

Notice that we go from the qemu-img program, through an "[unknown]" frame, into zlib's inflate.

In the same trace we get completely detached frames too (which are wrong):

[FlameGraph image: detached frames]

Upgrading zlib to F38 (with frame pointers) shows what this should look like:

[FlameGraph image: same trace with frame pointers]

Another common problem with lack of frame pointers can be seen in this trace from Fedora 37:

[FlameGraph image: Fedora 37]

It looks like it might be OK, until you compare it to the same workload using Fedora 38 libraries:

[FlameGraph image: Fedora 38]

Look at those beautiful towering peaks! What seems to be happening (I don't know why) is that stack traces start in the wrong place when you don't have frame pointers (note that FlameGraphs show stack traces upside down, with the starting point in the kernel shown at the top). Also if you look closely you'll notice missed frames in the first one, like the "direct" call to __libc_action which actually goes through an intermediate frame.

Before Fedora 38 the only way to get good stack traces was to recompile your software and all of its dependencies with frame pointers, a massive pain in the neck and a major barrier to entry when investigating performance problems.

With Fedora 38, it's simply a matter of using the regular libraries, installing debuginfo if you want (it does still add detail), and you can start using perf straight away by following Brendan Gregg's tutorials.

02 Feb 2023 7:33pm GMT

Linux Today

8 Best Window Managers for Linux

Want to organize your windows and use all the screen space you have? These window managers for Linux should come in handy!

The post 8 Best Window Managers for Linux appeared first on Linux Today.

02 Feb 2023 7:00pm GMT

Planet Ubuntu

Ubuntu Blog: From model-centric to data-centric MLOps

MLOps (short for machine learning operations) is slowly evolving into an independent approach to the machine learning lifecycle that includes all steps - from data gathering to governance and monitoring. It will become standard practice as artificial intelligence moves towards being part of everyday business rather than a purely innovative activity.

Get an intro to MLOps on the 15th of February with Canonical's experts.

Register now

Over time, different approaches have been used in MLOps. The most popular ones are the model-driven and data-driven approaches. The split between them is defined by the main focus of the AI system: data or code. Which one should you choose? The decision challenges data scientists to decide which component will play the more important role in developing a robust model. In this blog, we will evaluate both.


Model-centric development

Model-driven development focuses, as the name suggests, on machine learning model performance. It uses different methods of experimentation in order to improve the performance of the model, without altering the data. The main goal of this approach is to work on the code and optimise it as much as possible. It includes code, model architecture and training processes as well.


If you look deeper into this development method, the model-driven approach is all about high-quality ML models. What this means in practice is that developers focus on using the best set of ML algorithms and AI platforms. The approach has also been the basis for great advancements in the AI space, such as the development of specialised frameworks like TensorFlow or PyTorch.

Model-centric development has been around since the early days of the discipline, so it benefits from widespread adoption across a variety of AI applications. The reason for this can be traced back to the fact that AI was initially a research-focused area. Historically, this approach was designed for challenging problems and huge datasets, which ML specialists were meant to solve by optimising AI models. It has also been driven by the wide adoption of open source, which allows free access to various GitHub repositories. Model-driven development encourages developers to experiment with the latest bits of technology and try to get the best results by fine-tuning the model. From an organisational perspective, it is suited for enterprises which have enough data to train machine-learning models.

When it comes to pitfalls, the model-centric approach requires a lot of manual work at the various stages of the ML lifecycle. For example, data scientists have to spend a lot of time on data labelling, data validation or training the model. The approach may result in slower project delivery, higher costs and little return on investment. This is the main reason why practitioners considered trying to tackle this problem from a different perspective with data-centric development.

Data-centric development

As is often said, data is at the heart of any AI initiative. The data-centric approach takes this statement seriously, systematically working with the datasets in order to obtain better results and increase the accuracy of machine learning applications.


When compared to the model-centric approach, in this case, the ML model is fixed, and all improvements are related to the data. These enhancements range from better data labelling to using different data samples for training or increasing the size of the data set. This approach improves data handling as well, by creating a common understanding of the datasets.

The data-centric approach has a few essential guidelines, covering the areas described in the following sections:

Data labelling for data-centric development

Data labelling assigns labels to data. The process provides information about the datasets that are then used by algorithms to learn. It emphasises both content and structure information, so it often includes various data types, measurement units, or time periods represented in the dataset. Having correct and consistent labels can define the success of an AI project.

Data-centric development highlights the importance of correct labelling. There are various ways to approach it; the key goal is avoiding inconsistencies and ambiguities. Below is an image that Andrew Ng offers as an example of data labels in practice; it illustrates two common problems: inconsistency and ambiguity.

[Image: Andrew Ng's data labelling example]

Data augmentation for data-centric development

Data augmentation is a process that consists of the generation of new data based on various means, such as interpolation or explorations. It is not always needed, but in some instances, there are models that require a larger amount of data at various stages of the ML lifecycle: training, validation, and data synthesis.

Whenever you perform this activity, checking data quality and ensuring the elimination of noise is also part of the guidelines.

Error analysis for data-centric development

Error analysis is a process performed once a model is trained. Its main goal is to identify a subset that can be used for improving the dataset. It is a task that requires diligence, as it needs to be performed repeatedly, in order to get gradual improvements in both data quality and model performance.

Data versioning for data-centric development

Data versioning tracks changes that happen within the datasets, in order to identify performance changes within the model. It enables collaboration, eases the data management process and speeds up the delivery of machine learning pipelines from experimentation to production.

When it comes to pitfalls, the data-centric method struggles mostly with data. On one hand, it can be hard to manage and control. On the other hand, it can be biased if it does not represent the actual population, leading to models that underperform in real life. Lastly, because of the data requirements, it can easily be expensive or suitable only for projects which have collected data for a longer period of time.

Model-centric and data-centric development with MLOps

In reality, both of these approaches are tightly linked to MLOps. Regardless of the option that data scientists choose, they need to follow MLOps guidelines and integrate their method within the tooling that they choose. Developers can use the same tool but have different approaches across different projects. The main difference could occur at the level of the ML lifecycle where changes are happening. It's important to note that the approach will affect how the model is optimised for the specific initiative, so choosing it with care is important to position your project for success.

Get an intro to MLOps on the 15th of February with Canonical's experts.

Register now

Charmed Kubeflow is end-to-end MLOps tooling designed for scaling machine learning models to production. Because of its features and integrations, it can support both model-centric and data-centric development. It is an open-source platform which encourages contributions and represents the foundation of the growing MLOps ecosystem that Canonical is moving towards, with integrations at various levels: hardware, tooling and AI frameworks.

Learn more about MLOps

02 Feb 2023 6:26pm GMT

Linux Today

How to Delete Files With Specific Extensions From the Command Line

Here's how you can delete a large number of files with the same extension or a similar pattern of files you need to remove from your system.
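A minimal, hedged example of the usual approach (the .log extension and path are placeholders; preview the matches before deleting anything):

# List everything that would be removed
$ find . -type f -name '*.log' -print

# Then delete the same set
$ find . -type f -name '*.log' -delete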

The post How to Delete Files With Specific Extensions From the Command Line appeared first on Linux Today.

02 Feb 2023 6:00pm GMT

John the Ripper: Password Cracking Tutorial and Review

John the Ripper is a popular open-source password cracking tool that can be used to perform brute-force attacks. Learn more here.
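A minimal sketch of typical usage (the hash file and wordlist path are placeholders; rockyou.txt ships with some distributions but not all):

# Try a wordlist attack against the hashes, then display any cracked passwords
$ john --wordlist=/usr/share/wordlists/rockyou.txt hashes.txt
$ john --show hashes.txt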

The post John the Ripper: Password Cracking Tutorial and Review appeared first on Linux Today.

02 Feb 2023 5:00pm GMT

Fedora People

Fedora Community Blog: Outreachy Summer’23: Call for Projects and Mentors!

The Fedora Project is participating in the upcoming round of Outreachy. We need more project ideas and mentors! The last day to propose a project or to apply as a general mentor is February 24, 2023, at 4pm UTC.

Outreachy provides a unique opportunity for underrepresented groups to gain valuable experience in open-source and gain access to a supportive community of mentors and peers. By participating in this program, the Fedora community can help create a more diverse and inclusive tech community.

If you have a project idea for the upcoming round of Outreachy, please open a ticket in the mentored projects repository. You can also volunteer to be a mentor for a project that's not yours. As a supporting mentor, you will guide interns through the completion of the project.

A good project proposal makes all the difference. It saves time for both the mentors and the applicants.

What makes a good project proposal

The Mentored Projects Coordinators will review your ideas and help you prep your project proposal to be submitted to Outreachy.

How to participate

Project Mentor

Signing up as a mentor is a commitment. Before signing up, please consider the following

Please read through the mentor-faq page from Outreachy.

General Mentor

We are also looking for general mentors to help facilitate communication of feedback and evaluation with the interns working on the selected projects.

Submit your proposals

Please submit your project ideas and mentorship availability as soon as possible. The last date for project idea submission is February 24, 2023.

Mentoring can be a fulfilling pursuit. It is a great opportunity to contribute to the community and shape the future of Fedora by mentoring a talented intern who will work on your project. Don't miss out on this exciting opportunity to make a difference in the Fedora community and the tech industry as a whole. Together, we can make the open-source community even more diverse and inclusive.

The post Outreachy Summer'23: Call for Projects and Mentors! appeared first on Fedora Community Blog.

02 Feb 2023 4:51pm GMT

Kernel Planet

Linux Plumbers Conference: Preliminary Dates and Location for LPC2023

The 2023 LPC PC is pleased to announce that we've begun exclusive negotiations with the Omni Hotel in Richmond, VA to host Plumbers 2023 from 13-15 November. Note: These dates are not yet final (nor is the location; we have had one failure at this stage of negotiations from all the Plumbers venues we've chosen). We will let you know when this preliminary location gets finalized (please don't book irrevocable travel until then).

The November dates were the only ones that currently work for the venue, but Richmond is on the same latitude as Seville in Spain, so it should still be nice and warm.

02 Feb 2023 4:18pm GMT

LXer Linux News

Open source Ray 2.2 boosts machine learning observability to help scale services like OpenAI's ChatGPT

Ray, the popular open-source machine learning (ML) framework, has released its 2.2 version with improved performance and observability capabilities, as well as features that can help to enable reproducibility.

02 Feb 2023 3:03pm GMT

Red Hat gives an ARM up to OpenShift Kubernetes operations

With the new release, Red Hat is integrating new capabilities to help improve security and compliance for OpenShift, as well as new deployment options on ARM-based architectures. The OpenShift 4.12 release comes as Red Hat continues to expand its footprint, announcing partnerships with Oracle and SAP this week.

02 Feb 2023 12:40pm GMT

Kernel Planet

Matthew Garrett: Blocking free API access to Twitter doesn't stop abuse

In one week from now, Twitter will block free API access. This prevents anyone who has written interesting bot accounts, integrations, or tooling from accessing Twitter without paying for it. A whole number of fascinating accounts will cease functioning, people will no longer be able to use tools that interact with Twitter, and anyone using a free service to do things like find Twitter mutuals who have moved to Mastodon or to cross-post between Twitter and other services will be blocked.

There's a cynical interpretation to this, which is that despite firing 75% of the workforce Twitter is still not profitable and Elon is desperate to not have Twitter go bust and also not to have to tank even more of his Tesla stock to achieve that. But let's go with the less cynical interpretation, which is that API access to Twitter is something that enables bot accounts that make things worse for everyone. Except, well, why would a hostile bot account do that?

To interact with an API you generally need to present some sort of authentication token to the API to prove that you're allowed to access it. It's easy enough to restrict issuance of those tokens to people who pay for the service. But, uh, how do the apps work? They need to be able to communicate with the service to tell it to post tweets, retrieve them, and so on. And the simple answer to that is that they use some hardcoded authentication tokens. And while registering for an API token yourself identifies that you're not using an official client, using the tokens embedded in the clients makes it look like you are. If you want to make it look like you're a human, you're already using tokens ripped out of the official clients.

The Twitter client API keys are widely known. Anyone who's pretending to be a human is using those already and will be unaffected by the shutdown of the free API tier. Services like movetodon.org do get blocked. This isn't an anti-abuse choice. It's one that makes it harder to move to other services. It's one that blocks a bunch of the integrations and accounts that bring value to the platform. It's one that hurts people who follow the rules, without hurting the ones who don't. This isn't an anti-abuse choice, it's about trying to consolidate control of the platform.


02 Feb 2023 10:36am GMT

Planet Maemo

Recommended Online Casino Games That Are Easy to Win

Every player aiming for wins in online casino gambling needs to know which games can be played. There is currently a very wide choice of games available and on offer, but you are advised to focus only on games that are easy to win. These casino games usually have a great many fans and players, so you can be confident that playing them will make it easy for you to win.

Easy wins and big profits in a gambling game depend heavily on the techniques and strategies you use. It is therefore important to find out in advance which tricks and strategies you will use. Besides that, it is also important to first look into some of the popular game collections and choices. You can try, choose, and play one of the games that is the most popular and has the most fans and players.

A List of Recommended Online Casino Games That Are Easy to Win

There are currently many recommended casino games to play, and you are advised to pick the ones that are easy to win. A slot game that is easy to win is the best recommended choice, because it makes it easier for you to recoup your capital quickly. So study and look into some of the online casino games that have always had a lot of fans. You can choose and play one of the following games, which have proven to be good options:

  1. Sic Bo - This dice game is one of the most popular choices in Indonesia because of the excitement of playing with three dice. You then have to guess which numbers will come up after they are rolled.
  2. Dragon Tiger - The next popular game is Dragon Tiger, a rather unique and interesting game with two sides: Dragon and Tiger. Its appeal and uniqueness have led many people to play it.
  3. Casino Hold'em Poker - This card game is also quite popular in Indonesia and is available in online casinos, so it is worth trying. It lets you sharpen your skills and instincts while offering plenty of excitement and challenge.
  4. Blackjack - Online blackjack is another recommended choice, with many attractive and profitable offers. The game is very engaging and has a classic feel, having been around since the old days and remaining a favourite with many players.
  5. Baccarat - Finally, you can also try playing online baccarat. This game is known as one of the best and most profitable choices, offering big opportunities and earnings. It is a great chance to play gambling games with a system that is safe, comfortable, and integrated.

So make sure you choose and play one of the best gambling games with the biggest rewards. Learn and find out how to play gambling games safely, comfortably, and profitably.

The post Recommended Online Casino Games That Are Easy to Win (originally "Rekomendasi Game Judi Online Casino Mudah Menang") appeared first on VALERIOVALERIO.


02 Feb 2023 10:10am GMT

Planet Debian

John Goerzen: Using Yggdrasil As an Automatic Mesh Fabric to Connect All Your Docker Containers, VMs, and Servers

Sometimes you might want to run Docker containers on more than one host. Maybe you want to run some at one hosting facility, some at another, and so forth.

Maybe you'd like run VMs at various places, and let them talk to Docker containers and bare metal servers wherever they are.

And maybe you'd like to be able to easily migrate any of these from one provider to another.

There are all sorts of very complicated ways to set all this stuff up. But there's also a simple one: Yggdrasil.

My blog post Make the Internet Yours Again With an Instant Mesh Network explains some of the possibilities of Yggdrasil in general terms. Here I want to show you how to use Yggdrasil to solve some of these issues more specifically. Because Yggdrasil is always encrypted, some of the security lifting is done for us.

Background

Often in Docker, we connect multiple containers to a single network that runs on a given host. That much is easy. Once you start talking about containers on multiple hosts, then you start adding layers and layers of complexity. Once you start talking multiple providers, maybe multiple continents, then the complexity can increase. And, if you want to integrate everything from bare metal servers to VMs into this - well, there are ways, but they're not easy.

I'm a believer in the KISS principle. Let's not make things complex when we don't have to.

Enter Yggdrasil

As I've explained before, Yggdrasil can automatically form a global mesh network. This is pretty cool! As most people use it, they join it to the main Yggdrasil network. But Yggdrasil can be run entirely privately as well. You can run your own private mesh, and that's what we'll talk about here.

All we have to do is run Yggdrasil inside each container, VM, server, or whatever. We handle some basics of connectivity, and bam! Everything is host- and location-agnostic.

Setup in Docker

The installation of Yggdrasil on a regular system is pretty straightforward. Docker is a bit more complicated for several reasons:

Normally, Yggdrasil could auto-discover peers on a LAN interface. However, aside from some esoteric Docker networking approaches, Docker doesn't permit that. So my approach is going to be setting up one or more Yggdrasil "router" containers on a given Docker host. All the other containers talk directly to the "router" container and it's all good.

Basic installation

In my Dockerfile, I have something like this:

FROM jgoerzen/debian-base-security:bullseye
RUN echo "deb http://deb.debian.org/debian bullseye-backports main" >> /etc/apt/sources.list && \
    apt-get --allow-releaseinfo-change update && \
    apt-get -y --no-install-recommends -t bullseye-backports install yggdrasil
...
COPY yggdrasil.conf /etc/yggdrasil/
RUN set -x; \
    chown root:yggdrasil /etc/yggdrasil/yggdrasil.conf && \
    chmod 0750 /etc/yggdrasil/yggdrasil.conf && \
    systemctl enable yggdrasil

The magic parameters to docker run to make Yggdrasil work are:

--cap-add=NET_ADMIN --sysctl net.ipv6.conf.all.disable_ipv6=0 --device=/dev/net/tun:/dev/net/tun

This example uses my docker-debian-base images, so if you use them as well, you'll also need to add their parameters.

Note that it is NOT necessary to use --privileged. In fact, due to the network namespaces in use in Docker, this command does not let the container modify the host's networking (unless you use --net=host, which I do not recommend).

The --sysctl parameter was the result of a lot of banging my head against the wall. Apparently Docker tries to disable IPv6 in the container by default. Annoying.

Configuration of the router container(s)

The idea is that the router node (or more than one, if you want redundancy) will be the only ones to have an open incoming port. Although the normal Yggdrasil case of directly detecting peers in a broadcast domain is more convenient and more robust, this can work pretty well too.

You can, of course, generate a template yggdrasil.conf with yggdrasil -genconf like usual. Some things to note for this one:
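(The original notes aren't reproduced here.) Purely as an illustration, a router node's yggdrasil.conf might contain something along these lines; the port, addresses and key below are hypothetical, and the field names are the ones yggdrasil -genconf emits:

{
  # Accept incoming connections from router nodes on other Docker hosts
  Listen: ["tcp://0.0.0.0:12345"]
  # Only accept peers whose public keys we know (placeholder key)
  AllowedPublicKeys: ["0000000000000000000000000000000000000000000000000000000000000000"]
  # No multicast discovery inside Docker
  MulticastInterfaces: []
  # A stable interface name makes firewalling simpler
  IfName: ygg0
}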

Configuration of the non-router nodes

Again, you can start with a simple configuration. Some notes here:
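Again as a hedged sketch only (the router container's IP and port are made up), a non-router node would just point at the router container and keep everything else closed:

{
  # Peer directly with the router container on this Docker host
  Peers: ["tcp://172.17.0.2:12345"]
  # Nothing listens, nothing multicasts
  Listen: []
  MulticastInterfaces: []
  IfName: ygg0
}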

Using the interfaces

At this point, you should be able to ping6 between your containers. If you have multiple hosts running Docker, you can simply set up the router nodes on each to connect to each other. Now you have direct, secure, container-to-container communication that is host-agnostic! You can also set up Yggdrasil on a bare metal server or VM using standard procedures and everything will just talk nicely!
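For example (container names are placeholders), you can look up a container's Yggdrasil address and ping it from another container:

# Show this node's Yggdrasil IPv6 address and public key
$ docker exec container-a yggdrasilctl getSelf

# Ping that address from another container, possibly on another host
$ docker exec container-b ping6 -c 3 <address-from-getSelf>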

Security notes

Yggdrasil's mesh is aggressively greedy. It will peer with any node it can find (unless told otherwise) and will find a route to anywhere it can. There are two main ways to make sure your internal comms stay private: by restricting who can talk to your mesh, and by firewalling the Yggdrasil interface. Both can be used, and they can be used simultaneously.

By disabling multicast discovery, you eliminate the chance for random machines on the LAN to join the mesh. By making sure that you firewall off (outside of Yggdrasil) who can connect to a Yggdrasil node with a listening port, you can authorize only your own machines. And, by setting AllowedPublicKeys on the nodes with listening ports, you can authenticate the Yggdrasil peers. Note that part of the benefit of the Yggdrasil mesh is normally that you don't have to propagate a configuration change to every participatory node - that's a nice thing in general!

You can also run a firewall inside your container (I like firehol for this purpose) and aggressively firewall the IPs that are allowed to connect via the Yggdrasil interface. I like to set a stable interface name like ygg0 in yggdrasil.conf, and then it becomes pretty easy to firewall the services. The Docker parameters that allow Yggdrasil to run are also sufficient to run firehol.

Naming Yggdrasil peers

You probably don't want to hard-code Yggdrasil IPs all over the place. There are a few solutions:

Other hints & conclusion

Here are some other helpful use cases:

This is just an idea. The point of Yggdrasil is expanding our ideas of what we can do with a network, so here's one such expansion. Have fun!


Note: This post also has a permanent home on my website, where it may be periodically updated.

02 Feb 2023 4:18am GMT

Dirk Eddelbuettel: RInside 0.2.18 on CRAN: Maintenance

A new release 0.2.18 of RInside arrived on CRAN and in Debian today. This is the first release in ten months since the 0.2.17 release. RInside provides a set of convenience classes which facilitate embedding of R inside of C++ applications and programs, using the classes and functions provided by Rcpp.

This release brings a contributed change to how the internal REPL is called: Dominick found the current form more reliable when embedding R on Windows. We also updated a few other things around the package.

The list of changes since the last release:

Changes in RInside version 0.2.18 (2023-02-01)

  • The random number initialization was updated as in R.

  • The main REPL is now running via 'run_Rmainloop()'.

  • Small routine update to package and continuous integration.

My CRANberries also provides a short report with changes from the previous release. More information is on the RInside page. Questions, comments etc should go to the rcpp-devel mailing list off the Rcpp R-Forge page, or to issues tickets at the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

02 Feb 2023 12:17am GMT

Planet KDE | English

KDE Gear 22.12.2

Over 120 individual programs plus dozens of programmer libraries and feature plugins are released simultaneously as part of KDE Gear.

Today they all get new bugfix source releases with updated translations, including:

Distro and app store packagers should update their application packages.

02 Feb 2023 12:00am GMT

Planet Arch Linux

Call for participation: Git packaging POC

https://lists.archlinux.org/archives/li … WE6GFWUJN/ Hi everyone! Levente and I have been busy preparing a test environment for the new git package workflow, which is going to replace the svn repository. To test the new git package setup install `devtools-git-poc` from the [community] repository and use the new `pkgctl` utility. Please check each time if there is a new upgrade before playing around. The goal of the testing is to figure out UX issues, bugs and larger issues that would need to be dealt with before a git migration can happen. It's therefore very important that people sit down and play around …

02 Feb 2023 12:00am GMT

01 Feb 2023

OMG! Ubuntu!

This GNOME Extension Makes the ‘Activities’ Label More Useful

GNOME Shell's 'Activities' button is iconic, but could the space it takes be put to better use? One GNOME extension developer thinks so, and this is how…

This post, This GNOME Extension Makes the 'Activities' Label More Useful is from OMG! Ubuntu!. Do not reproduce elsewhere without permission.

01 Feb 2023 11:59pm GMT

Linuxiac

How to Install VMware Workstation Player on Fedora

Get the most out of your Fedora's virtualization capabilities by installing VMware Workstation Player. Learn how here!

01 Feb 2023 11:42pm GMT

Planet Ubuntu

Ubuntu Blog: Multipass 1.11 brings enhanced performance for Linux on Mac and Windows

Multipass 1.11 is here!

This release has some particularly interesting features that we've been wanting to ship for a while now. We're excited to share them with you!

For those who aren't familiar with Multipass, it's software that streamlines every aspect of managing and working with virtual machines. We've found that development, particularly for cloud applications, can often involve a huge amount of tedious work setting up development and testing environments. Multipass aims to solve that by making the process of creating and destroying VMs as simple as a single command, and by integrating the VM into your host machine and your development flow as much as possible.

That principle of integration is one of the main focuses we had for the 1.11 release. There are two major features out today that make Multipass much more integrated with your host machine - native mounts and directory mapping.


Performance boost

Performance has always been in Multipass' DNA - we try to keep it as lightweight as we can so that nothing gets between developers and their work. With the 1.11 release, we've taken another big step forward.

With the new native mounts feature, Multipass is getting a major performance boost. This feature uses platform-optimized software to make filesystems shared between the host computer and the virtual machine much faster than before. In benchmarking, we've seen speed gains of around 10x! For people sharing data with Multipass from their host machine, this is a huge time saver.

Multipass is one of the few VM management tools available to developers on Apple silicon. Performance mounts make the M1 and M2 even faster platforms for Ubuntu. For those who don't remember, Multipass can launch VMs on the Apple M1 and M2 in less than 20 seconds.
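As a hedged sketch of the basic workflow (instance name and paths are made up; selecting the new native mount backend may need extra options not shown here):

# Create an Ubuntu VM and share a host directory into it
$ multipass launch --name dev
$ multipass mount ~/project dev:/home/ubuntu/project

# Open a shell in the VM and work against the shared directory
$ multipass shell dev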

User experience

Multipass' performance leveled up with this release, and the user experience did as well! Directory mapping is a new way to be more efficient than ever with Multipass. Multipass has supported command aliasing for some time now, but one drawback of aliasing alone is that it loses the context of where the command is executed in the filesystem. Commands like docker-compose, for example, are context sensitive. They may rely on certain files being present in the working directory, or give different results depending on where they are run.

Directory mapping maintains the context of an aliased command, meaning that an aliased command sent from the host will be executed in the same context on the VM. This feature has the potential to make it feel like you are running linux programs natively on your Mac or Windows terminal.
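As a small, hedged example of aliasing (instance and command names are made up; the exact configuration of directory mapping, new in 1.11, isn't shown here):

# Make the VM's docker command available as a host-side alias
$ multipass alias dev:docker docker

# After adding the directory Multipass suggests to your PATH, this runs inside the VM:
$ docker ps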


Other upgrades

In addition to directory mapping, Blueprints now allow for alias and workspace definitions, meaning you can now spin up a new VM and start using aliased (and context-sensitive) commands in a shared filespace with no additional configuration required. Look for some examples in the near future!

Some other notable upgrades include the `transfer` command and UEFI booting. The `transfer` command now allows for recursive file transfers, which should make it much easier to transfer entire directory trees as opposed to individual files. Multipass now boots its instances via UEFI, which means we are able to support Ubuntu Core 20 and 22 for our IoT developers.

To get started with Multipass, head to our install page or check out our tutorials. We always love to hear feedback from our community, so please let us know what you're up to by posting in discourse, or dropping in for our office hours.

01 Feb 2023 3:35pm GMT

Julian Andres Klode: Ubuntu 2022v1 secure boot key rotation and friends

This is the story of the currently progressing changes to secure boot on Ubuntu and the history of how we got to where we are.

taking a step back: how does secure boot on Ubuntu work?

Booting on Ubuntu involves three components after the firmware:

  1. shim
  2. grub
  3. linux

Each of these is a PE binary signed with a key. The shim is signed by Microsoft's 3rd party key and embeds a self-signed Canonical CA certificate, and optionally a vendor dbx (a list of revoked certificates or binaries). grub and linux (and fwupd) are then signed by a certificate issued by that CA.

In Ubuntu's case, the CA certificate is sharded: Multiple people each have a part of the key and they need to meet to be able to combine it and sign things, such as new code signing certificates.

BootHole

When BootHole happened in 2020, travel was suspended and we hence could not rotate to a new signing certificate. So when it came to updating our shim for the CVEs, we had to revoke all previously signed kernels, grubs, shims, fwupds by their hashes.

This generated a very large vendor dbx which caused lots of issues as shim exported them to a UEFI variable, and not everyone had enough space for such large variables. Sigh.

We decided we want to rotate our signing key next time.

This was also when upstream added SBAT metadata to shim and grub. This gives a simple versioning scheme for security updates and easy revocation using a simple EFI variable that shim writes to and reads from.

Spring 2022 CVEs

We still were not ready for travel in 2021, but during BootHole we developed the SBAT mechanism, so one could revoke a grub or shim by setting a single EFI variable.

We actually missed rotating the shim this cycle as a new vulnerability was reported immediately after it, and we decided to hold on to it.

2022 key rotation and the fall CVEs

This caused some problems when the 2nd CVE round came, as we did not have a shim with the latest SBAT level, and neither did a lot of others, so we ended up deciding upstream to not bump the shim SBAT requirements just yet. Sigh.

Anyway, in October we were meeting again for the first time at a Canonical sprint, and the shardholders got together and created three new signing keys: 2022v1, 2022v2, and 2022v3. It took us until January before they were installed into the signing service and PPAs setup to sign with them.

We also submitted a shim 15.7 with the old keys revoked which came back at around the same time.

Now we were in a hurry. The 22.04.2 point release was scheduled for around the middle of February, and we had nothing signed with the new keys yet, but our new shim, which we need for the point release (so the point release media remains bootable after the next round of CVEs), required the new keys.

So how do we ensure that users have kernels, grubs, and fwupd signed with the new key before we install the new shim?

upgrade ordering

grub and fwupd are simple cases: For grub, we depend on the new version. We decided to backport grub 2.06 to all releases (which moved focal and bionic up from 2.04), and kept the versioning of the -signed packages the same across all releases, so we were able to simply bump the Depends for grub to specify the new minimum version. For fwupd-efi, we added Breaks.

(Actually, we also had a backport of the CVEs for 2.04 based grub, and we did publish that for 20.04 signed with the old keys before backporting 2.06 to it.)

Kernels are a different story: There are about 60 kernels out there. My initial idea was that we could just add Breaks for all of them. So for our meta package linux-image-generic, which depends on linux-image-$(uname -r)-generic, we'd simply add Breaks: linux-image-generic (<< 5.19.0-31) and then adjust those Breaks for each series. This would have been super annoying, but ultimately I figured this would be the safest option. This however caused concern, because it could be that apt decides to remove the kernel metapackage.

I explored checking the kernels at runtime and aborting if we don't have a trusted kernel in preinst. This ensures that if you try to upgrade shim without having a kernel, it would fail to install. But this ultimately has a couple of issues:

  1. It aborts the entire transaction at that point, so users will be unable to run apt upgrade until they have a recent kernel.
  2. We cannot even guarantee that a kernel would be unpacked first. So even if you got a new kernel, apt/dpkg might attempt to unpack it first and then the preinst would fail because no kernel is present yet.

Ultimately we believed the danger to be too large given that no kernels had yet been released to users. If we had kernels pushed out for 1-2 months already, this would have been a viable choice.

So in the end, I ended up modifying the shim packaging to install both the latest shim and the previous one, and an update-alternatives alternative to select between the two:

In its post-installation maintainer script, shim-signed checks whether all kernels with a version greater than or equal to the running one are not revoked, and if so, it will set up the latest alternative with priority 100 and the previous one with a priority of 50. If one or more of those kernels was signed with a revoked key, it will swap the priorities around, so that the previous version is preferred.

Now this is fairly static, and we do want you to switch to the latest shim eventually, so I also added hooks to the kernel install to trigger the shim-signed postinst script when a new kernel is being installed. It will then update the alternatives based on the current set of kernels, and if it now points to the latest shim, reinstall shim and grub to the ESP.

Ultimately this means that once you install your 2nd non-revoked kernel, or you install a non-revoked kernel and then reconfigure shim or the kernel, you will get the latest shim. When you install your first non-revoked kernel, your currently booted kernel is still revoked, so it's not upgraded immediately. This has a benefit in that you will most likely have two kernels you can boot without disabling secure boot.

regressions

Of course, the first version I uploaded still had some remaining hardcoded "shimx64" in the scripts and so failed to install on arm64 where "shimaa64" is used. And if that were not enough, I also forgot to include support for gzip compressed kernels there. Sigh, I need better testing infrastructure to be able to easily run arm64 tests as well (I only tested the actual booting there, not the scripts).

shim-signed migrated to the release pocket in lunar fairly quickly, but this caused images to stop working, because the new shim was installed into images, but no kernel was available yet, so we had to demote it to proposed and block migration. Despite all the work done for end users, we need to be careful to roll this out for image building.

another grub update for OOM issues.

We had two grubs to release: First there was the security update for the recent set of CVEs, then there also was an OOM issue for large initrds which was blocking critical OEM work.

We fixed the OOM issue by cherry-picking all 2.12 memory management patches, as well as the red hat patches to the loader we take from there. This ended up a fairly large patch set and I was hesitant to tie the security update to that, so I ended up pushing the security update everywhere first, and then pushed the OOM fixes this week.

With the OOM patches, you should be able to boot initrds of between 400 MB and 1 GB; the exact limit also depends on the memory layout of your machine, your screen resolution and background images. The OEM team had success testing 400 MB on real hardware, and I tested up to (I think) 1.2 GB in qemu - I ran out of FAT space at that point and stopped going higher :D

other features in this round

am I using this yet?

The new signing keys are used in:

If you were able to install shim-signed, your grub and fwupd-efi will have the correct version, as that is ensured by the packaging. However, your shim may still point to the old one. To check which shim will be used by grub-install, look at the status of the shimx64.efi.signed or (on arm64) shimaa64.efi.signed alternative. The best link needs to point to the file ending in .latest:

$ update-alternatives --display shimx64.efi.signed
shimx64.efi.signed - auto mode
  link best version is /usr/lib/shim/shimx64.efi.signed.latest
  link currently points to /usr/lib/shim/shimx64.efi.signed.latest
  link shimx64.efi.signed is /usr/lib/shim/shimx64.efi.signed
/usr/lib/shim/shimx64.efi.signed.latest - priority 100
/usr/lib/shim/shimx64.efi.signed.previous - priority 50

If it does not, but you have installed a new kernel compatible with the new shim, you can switch to the new shim immediately after rebooting into that kernel by running dpkg-reconfigure shim-signed. The output will tell you whether the shim was updated, or you can check update-alternatives again as above once the reconfiguration has finished.

For the out of memory issues in grub, you need grub2-signed 1.187.3~ (same binaries as above).
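
If you want to double-check which signed grub you have installed, a quick query does it (amd64 package name assumed; other architectures use their own signed package):

dpkg-query -W -f '${Package} ${Version}\n' grub-efi-amd64-signed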

how do I test this (while it's in proposed)?

  1. upgrade your kernel to proposed and reboot into that
  2. upgrade your grub-efi-amd64-signed, shim-signed, fwupd-signed to proposed.

If you already upgraded your shim before your kernel, don't worry:

  1. upgrade your kernel and reboot
  2. run dpkg-reconfigure shim-signed

And you'll be all good to go.
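
In case you are unsure how to pull individual packages from -proposed, one way (assuming the proposed pocket is already enabled in your apt sources) is to select it explicitly as the target release:

sudo apt install -t "$(lsb_release -cs)-proposed" \
    grub-efi-amd64-signed shim-signed fwupd-signed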

deep dive: uploading signed boot assets to Ubuntu

For each signed boot asset, we build one version in the latest stable release and one in the development release. We then binary-copy the built binaries from the latest stable release to the older stable releases. This process ensures two things: we know the next stable release is able to build the assets, and we also minimize the number of signed assets.

OK, I lied. For shim, we actually do not build in the development release but copy the binaries upward from the latest stable, as each shim needs to go through external signing.

The entire workflow looks something like this:

  1. Upload the unsigned package to one of the following "build" PPAs:

  2. Upload the signed package to the same PPA

  3. For stable release uploads:

    • Copy the unsigned package back across all stable releases in the PPA
    • Upload the signed package for stable releases to the same PPA with ~<release>.1 appended to the version
  4. Submit a request to canonical-signing-jobs to sign the uploads.

    The signing job helper copies the binary -unsigned packages to the primary-2022v1 PPA where they are signed, producing a signing tarball; it then copies the source package for the -signed package to the same PPA, which downloads the signing tarball during build and places the signed assets into the -signed deb.

    Resulting binaries will be placed into the proposed PPA: https://launchpad.net/~ubuntu-uefi-team/+archive/ubuntu/proposed

  5. Review the binaries themselves

  6. Unembargo and binary copy the binaries from the proposed PPA to the proposed-public PPA: https://launchpad.net/~ubuntu-uefi-team/+archive/ubuntu/proposed-public.

    This step is not strictly necessary, but it enables tools like sru-review to work, as they cannot access the packages from the normal private "proposed" PPA.

  7. Binary copy from proposed-public to the proposed queue(s) in the primary archive

Lots of steps!

WIP

As of writing, only the grub updates have been released; the other updates are still being verified in proposed. An update for fwupd in bionic will be issued at a later point, removing the EFI bits from the fwupd 1.2 packaging and using the separate fwupd-efi project instead, as later release series do.

01 Feb 2023 1:40pm GMT

feedPlanet Maemo

What Online Gambling Games Are Popular in Indonesia?

Did you know that in Indonesia there is a huge selection of online gambling games available for you to play? You are in principle free to pick whichever game you like, but it is better to look for the ones that are genuinely the most popular and profitable. That way you can find a game that can actually bring you some luck. To find such a game, you of course have to keep searching until you come across one.

Choosing a popular casino game is certainly one of the best steps we can take. Why is that? When a game is popular and played by many people, it means the game offers plenty of advantages and benefits. You will not regret joining and playing one of the popular games; on the contrary, there are many big advantages to be gained from the decision to join one of them.

Which Online Gambling Games Are Popular in Indonesia?

In Indonesia there are very many casino games available to play, and of course we can pick the best and most popular kinds. Among the many options, some are very popular and in demand across many circles. When many people are keen to play a game, that in itself can make it a good choice. Which games are the most widely played? Some of them, with explanations, are listed below:

Beyond the games above, Indonesia still has many other online casino games available for you to choose from and play. You are free to pick and play whichever game you like, but do check what the game involves and how you can gain a financial return from it. All of this should be thought through before diving into a game, so that you end up with good earnings as well as enjoyment from the online casino. Good luck trying out one or several of the available games.

The post Apa Permainan Judi Online yang Populer di Indonesia? appeared first on VALERIOVALERIO.


01 Feb 2023 10:08am GMT

feedPlanet KDE | English

My work in KDE for January 2023

This is a non-comprehensive list of all of the major work I've done for KDE this month of January. I think I got a lot done this month! I also was accepted as a KDE Developer near the start of the month, so I'm pretty happy about that.

Sorry that it's pretty much only text; a lot of this stuff is either not screenshottable or I'm too lazy to attach an image. Next month should be better!

Custom icon theme in Tokodon

I threw all of the custom icons we use in Tokodon into a proper custom icon theme, which should automatically match your theme and includes a dark theme variant. In the future, I'd like to recolor these better and eventually upstream them into Breeze.

See the merge request.

KXMLGUI tooltips

As part of cleaning up some KDE games-related stuff, I also looked into the issue of duplicate "What's This?" tooltips. This also fixes that visual bug where you can close normal tooltips that don't have "What's This?" information to actually open.

See the merge request.

KBlocks background changes

This one isn't merged yet, but in the future KBlocks theme authors will be able to specify where to pin the background instead of having it stretched by default.

See the merge request.

Kirigami "About KDE" dialog

I added something that's been wanted for a while, Kirigami's own "About KDE" dialog! It's currently sitting in Add-ons, but will most likely be moved in the future. If you would like to suggest what we do about the About pages/windows in KDE, please check out the proposal.

Kirigami Add-ons' About KDE dialog

See the merge request.

Media improvements in Tokodon

I did a lot of work improving media in Tokodon this month, including fixing aspect ratio scaling, video support (not merged yet) and other miscellaneous fixes. I also caught a bunch of blurhash bugs, along with making the timeline fixed-width so images aren't absurdly sized on a typical desktop display.

Tokodon on a large display

See the media layout fixes, three attachment fix, and the video support merge requests.

Krita.org dark theme

I'm starting to get involved in improving the KDE websites, and currently working on the new Krita.org website and adding a proper dark theme to it.

Krita.org in the dark

See the work-in-progress merge request.

Gwenview MPRIS fixes

Not merged yet (due to MPRIS bugginess in general?) but I took a crack at improving the MPRIS situation in Gwenview. Notably, slideshow controls no longer "hang around" until a slideshow is actually happening.

See the open merge request.

CMake Package Installer

I worked a little on solving the kdesrc-build issue of manual package lists, and created cmake-package-installer. It parses your CMake log and installs the relevant packages for you. I want to start looking into hooking this into kdesrc-build!

See the repository.

KDE Wiki improvements

I made some misc changes to the Community Wiki this month, mostly centered around fixing some long-standing formatting issues I've noticed. The homepage should be more descriptive, important pages no longer misformatted (or just missing?) and the Get Involved/Development page should be better organized.

Misc Qt patches

I cherry-picked a Qt6 commit fixing video playback in QML, which should appear in the next Qt KDE Patch collection update, mostly for use in Tokodon when video support lands. I also submitted an upstream Qt patch fixing WebP loading, meant for NeoChat where I see the most WebP images.

See the GStreamer cherry-pick and the WebP patch.

Window Decoration KCM overhaul

This isn't merged yet (but it's close!) so it barely misses the mark for January, but I'll include it anyway. I'm working on making the Window Decoration KCM frameless and give it a new look that matches the other KCMs.

New Window Decoration KCM

See the merge request.

01 Feb 2023 12:00am GMT

31 Jan 2023

feedPlanet GNOME

Jussi Pakkanen: PDF with font subsetting and a look in the future

After several days of head scratching, debugging and despair I finally got font subsetting working in PDF. The text renders correctly in Okular, goes through Ghostscript without errors and even passes an online PDF validator I found. But not Acrobat Reader, which chokes on it completely and refuses to show anything. Sigh.

The most likely cause is that the subset font that gets generated during this operation is not 100% valid. The approach I use is almost identical to what LO does, but for some reason their PDFs work. Opening both files in FontForge seems to indicate that the .notdef glyph definition is somehow not correct, but offers no help as to why.

In any case it seems like there would be a need for a library for PDF generation. Existing libs either do not handle non-RGB color spaces or are implemented in Java, Ruby or other languages that are hard to use from languages other than themselves. Many programs, like LO and Scribus, have their own libraries for generating PDF files. It would be nice if there could be a single library for that.

Is this a reasonable idea that people would actually be interested in? I don't know, but let's try to find out. I'm going to spend the next weekend in FOSDEM. So if you are going too and are interested in PDF generation, write a comment below or send me an email, I guess? Maybe we can have a shadow meeting in the cafeteria.

31 Jan 2023 11:59pm GMT

feedPlanet KDE | English

Building flatpaks and Freedesktop SDK from scratch

Flatpak applications are based on runtimes such as KDE or Gnome Runtimes. Both of these runtimes are actually based on Freedesktop SDK which contains essential libraries and services such as Wayland or D-Bus.
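
You can see this layering on any installed application; for example (the application ID here is purely illustrative, and the output depends on what you have installed):

flatpak info --show-runtime org.kde.okular
# prints the runtime the app was built against, e.g. org.kde.Platform/x86_64/...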

Recently there were a lot of discussion about supply chain attacks, so it might be interesting to ask how Freedesktop SDK was built. The answer can be found in freedesktop-sdk repository:

sources:
- kind: ostree
url: freedesktop-sdk:releases/
gpg-key: keys/freedesktop-sdk.gpg
track: runtime/org.freedesktop.Sdk.PreBootstrap/x86_64/21.08
ref: 0ecba7699760ffc05c8920849856a20ebb3305da9f1f0377ddb9ca5600be710b

So it is built using an older version of Freedesktop SDK image. There is now an approved merge request that completely reworks bootstrapping of Freedesktop SDK. It uses another intermediate docker image freedesktop-sdk-binary-seed that bridges the gap between freedesktop-sdk and live-bootstrap.

So what is this live-bootstrap? If you look at parts.rst you'll see that it is a build chain that starts with a 256-byte hex assembler that can build itself from its own source, plus a 640-byte trivial shell that can read a list of commands from a file and execute them. It then proceeds to build 130 (as of the moment of writing) other components, and in the process builds GCC, Python, Guile, Perl and lots of other supporting packages. Furthermore, each component is built reproducibly (and this is checked using SHA256 hashes).

One caveat: at the moment freedesktop-sdk-binary-seed still uses an older binary of rustc to build rustc, but in principle one could leverage mrustc to build it. Or possibly rust-gcc will become more capable in future versions and will be able to bootstrap rustc.

So unless your flatpak application uses Rust, it will soon be buildable from a sub-1-KiB binary seed.

31 Jan 2023 11:48pm GMT

feedOMG! Ubuntu!

Linux Mint 21.2 Codename, New Features Revealed

Linux Mint 21.2 will be released at the end of June. It has the codename "Victoria". New features will be added to the login screen and the Pix photo app.

This post, Linux Mint 21.2 Codename, New Features Revealed is from OMG! Ubuntu!. Do not reproduce elsewhere without permission.

31 Jan 2023 10:28pm GMT

elementary OS 7 Released, This is What’s New

See what's new in elementary OS 7, the latest stable release of this Ubuntu-based Linux distro. From UI changes, to new apps, to powerful new features.

This post, elementary OS 7 Released, This is What's New is from OMG! Ubuntu!. Do not reproduce elsewhere without permission.

31 Jan 2023 5:00pm GMT

feedPlanet Maemo

Strategies for Playing Online Gambling So You Can Win Easily

Choosing a betting game has to be thought through properly and must not be done carelessly. Even though there is a large collection of games available, we still have to be selective and pick the most suitable one. Doing so is what will then make it possible, and easier, for us to win. We can win more easily at the games we play if we play them the right way.

Why do you need a strategy?

Casino games are not really games that rely entirely on luck; we have to master various playing strategies in order to win easily. The more strategies we master, the bigger the chance of profit we can obtain. Therefore, as far as possible, we should study which strategies to use. That makes it easier to win and more exciting to play. Playing with a strategy also makes it easier to reach the targets we want to hit.

Strategies for Winning at Online Gambling Consistently

Playing online gambling games and winning consistently sometimes requires a few tricks and well-aimed strategies. We can try to learn which tricks can actually be used, depending on the chosen game. For example, if you want to play games such as slot pragmatic, joker 123 and so on, you should use the right techniques and strategies.

Once you have picked a suitable game, just play it and enjoy the excitement it offers.

Pick games with a high chance of winning

The easiest first step is really to pick a game with a high winning potential. For that you need to do some searching to find such a game. There are currently very many options to choose from, but you should realise that not every game has a high RTP.

Prepare a large bankroll

Every player who wants to win and profit a lot from online gambling games is advised to prepare sufficient capital. Having enough capital influences many things, including your win rate and the size of the profits you can take home. So as far as possible, prepare enough capital so that you have the chance to play for larger profits.

Study the openings for a win

Whatever the game, there will always be openings for a win that we can study. So make sure you learn and find out which winning methods can be used to obtain large, multiplied profits. You can even try to learn from several sources in various media to get that information.

For those of you who are still beginners, however, it is strongly recommended to be aware that there are certain methods and strategies you can use. There are basic and advanced guides that really need to be understood properly. The goal is that you can then obtain large profits from the game. This also helps things run smoothly and gives you an even bigger chance of winning.

The post Strategi Bermain Judi Online Agar Mudah Menang appeared first on VALERIOVALERIO.


31 Jan 2023 10:08am GMT

30 Jan 2023

feedLinuxiac

OpenSnitch App-Level Firewall May Find a Home in Debian 12

A discussion that began in 2018 about adopting OpenSnitch in Debian repositories will probably find a resolution in Debian 12.

30 Jan 2023 2:45pm GMT

29 Jan 2023

feedPlanet GNOME

Jakub Steiner: GNOME 44 Wallpapers

As we gear up for the release of GNOME 44, let's take a moment to reflect on the visual design updates.

We've made big strides in visual consistency with the growing number of apps that have been ported to gtk4 and libadwaita, embracing the modern look. Sam has also given the high-contrast style in GNOME Shell some love, keeping it in line with gtk's updates last cycle.

The default wallpaper stays true to our brand, but the supplementary set has undergone some bolder changes. From the popular simple shape blends to a nostalgic nod to the past with keypad and pixel art designs, there's something for everyone. The pixelized icons made their debut in the last release, but this time we focus on GNOME Circle apps, rather than the core apps.

Another exciting development is the continued use of geometry nodes in Blender. Although the tool has a steep learning curve, I'm starting to enjoy my time with it. I gave a talk on geometry nodes and its use for GNOME wallpaper design at the Fedora Creative Freedom Summit. You can watch the stream archive recording here (and part2).

Previously, Previously, Previously

29 Jan 2023 11:00pm GMT

feedLinuxiac

Budgie Desktop 10.7: A Sleek and Improved User Experience

The Budgie 10.7 desktop environment is here, bringing many new features and improvements. Check out what's new!

29 Jan 2023 10:12pm GMT

27 Jan 2023

feedKernel Planet

Matthew Garrett: Further adventures in Apple PKCS#11 land

After my previous efforts, I wrote up a PKCS#11 module of my own that had no odd restrictions about using non-RSA keys and I tested it. And things looked much better - ssh successfully obtained the key, negotiated with the server to determine that it was present in authorized_keys, and then went to actually do the key verification step. At which point things went wrong - the Sign() method in my PKCS#11 module was never called, and a strange
debug1: identity_sign: sshkey_sign: error in libcrypto
sign_and_send_pubkey: signing failed for ECDSA "testkey": error in libcrypto"

error appeared in the ssh output. Odd. libcrypto was originally part of OpenSSL, but Apple ship the LibreSSL fork. Apple don't include the LibreSSL source in their public source repo, but do include OpenSSH. I grabbed the OpenSSH source and jumped through a whole bunch of hoops to make it build (it uses the macosx.internal SDK, which isn't publicly available, so I had to cobble together a bunch of headers from various places), and also installed upstream LibreSSL with a version number matching what Apple shipped. And everything worked - I logged into the server using a hardware-backed key.

Was the difference in OpenSSH or in LibreSSL? Telling my OpenSSH to use the system libcrypto resulted in the same failure, so it seemed pretty clear this was an issue with the Apple version of the library. The way all this works is that when OpenSSH has a challenge to sign, it calls ECDSA_do_sign(). This then calls ECDSA_do_sign_ex(), which in turn follows a function pointer to the actual signature method. By default this is a software implementation that expects to have the private key available, but you can also register your own callback that will be used instead. The OpenSSH PKCS#11 code does this by calling EC_KEY_set_method(), and as a result calling ECDSA_do_sign() ends up calling back into the PKCS#11 code that then calls into the module that communicates with the hardware and everything works.

Except it doesn't under macOS. Running under a debugger and setting a breakpoint on ECDSA_do_sign(), I saw that we went down a code path with a function called ECDSA_do_sign_new(). This doesn't appear in any of the public source code, so it seems to be an Apple-specific patch. I pushed Apple's libcrypto into Ghidra and looked at ECDSA_do_sign() and found something that approximates this:

nid = EC_GROUP_get_curve_name(curve);
if (nid == NID_X9_62_prime256v1) {
  return ECDSA_do_sign_new(dgst,dgst_len,eckey);
}
return ECDSA_do_sign_ex(dgst,dgst_len,NULL,NULL,eckey);

What this means is that if you ask ECDSA_do_sign() to sign something on a Mac, and if the key in question corresponds to the NIST P256 elliptic curve type, it goes down the ECDSA_do_sign_new() path and never calls the registered callback. This is the only key type supported by the Apple Secure Enclave, so I assume it's special-cased to do something with that. Unfortunately the consequence is that it's impossible to use a PKCS#11 module that uses Secure Enclave keys with the shipped version of OpenSSH under macOS. For now I'm working around this with an SSH agent built using Go's agent module, forwarding most requests through to the default session agent but appending hardware-backed keys and implementing signing with them, which is probably what I should have done in the first place.


27 Jan 2023 11:39pm GMT

feedPlanet Gentoo

Handy commands to clean up old ~arch-only packages

Here's a bunch of handy commands that I've conceived to semi-automatically remove old versions of packages that do not have stable keywords (and therefore are not subject to post-stabilization cleanups that I do normally).

Requirements

The snippets below require the following packages:

They should be run in the top directory of a ::gentoo checkout, ideally with no other changes queued.

Remove redundant versions

First, a pipeline that finds all packages without stable amd64 keywords (note: this is making an assumption that there are no packages that are stable only on some other architecture), then scans these packages for redundant versions and removes them. The example command operates on dev-python/*:

pkgcheck scan 'dev-python/*' -c UnstableOnlyCheck -a amd64 \
    -R FormatReporter --format '{category}/{package}' |
  sort -u |
  xargs pkgcheck scan -c RedundantVersionCheck \
    -R FormatReporter --format \
    '{category}/{package}/{package}-{version}.ebuild' |
  xargs git rm

Check for broken revdeps

The next step is to check for broken revdeps. Start with:

rdep-fetch-cache
check-revdep $(
  git diff --name-only --cached | cut -d'/' -f1-2 | sort -u
)

Use git restore -WS ... to restore versions as necessary and repeat until it comes out clean.

Check for stale files

Finally, iterate over packages with files/ to check for stale patches:

(
  for x in $(
    git diff --name-only --cached | cut -d'/' -f1-2 | sort -u
  ); do
    [[ -d ${x}/files ]] && ( cd ${x}; bash )
  done
)

This will start bash inside every cleaned-up package that has a files/ directory (note: I'm assuming that there are no other changes in the repo). Use a quick grep for FILESDIR references followed by a listing of files/:

grep FILES *.ebuild
ls files/

Remove the files that are no longer referenced.
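
If a package ships many files, a rough first pass can be scripted; treat the output only as a hint, since ebuilds often reference patches via ${P}/${PN} expansions:

for f in files/*; do
    grep -qF "${f##*/}" *.ebuild || echo "possibly stale: ${f}"
done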

Commit the removals

for x in $(
  git diff --name-only --cached | cut -d'/' -f1-2 | sort -u
); do
  (
    cd ${x} && pkgdev manifest &&
    pkgcommit . -sS -m 'Remove old'
  )
done
pkgcheck scan --commits

27 Jan 2023 8:07pm GMT

24 Jan 2023

feedPlanet Arch Linux

Packaging Rust Applications for the NPM Registry

Recently I packaged my project git-cliff (changelog generator written in Rust) for NPM with the help of my friend @atlj. I thought this would be an interesting topic for a blog post since it has a certain technical depth about distributing binaries and frankly it still amazes me how the whole thing works so smoothly. So let's create a simple Rust project, package it for NPM and fully automate the release process via GitHub Actions.

24 Jan 2023 12:00am GMT

14 Jan 2023

feedPlanet Arch Linux

How to enable developer mode on Chrome OS Flex

I have recently switched to Chrome OS Flex as my main operating system. The experience so far is really great. It does everything it should do. I can browse the internet with it, game with it (in the past Google Stadia, now Xbox Cloud), answer my mails and even work on Arch Linux. Even printing worked pretty much out of the box. What does not work properly at the moment is scanning over wifi with my very old HP DeskJet 2540 printer with its embedded scanner.

14 Jan 2023 12:00am GMT

13 Jan 2023

feedPlanet Gentoo

FOSDEM 2023

Finally, after a long break, it's FOSDEM time again! Join us at Université Libre de Bruxelles, Campus du Solbosch, in Brussels, Belgium. This year's FOSDEM 2023 will be held on February 4th and 5th.

Our developers will be happy to greet all open source enthusiasts at our Gentoo stand in building H, level 1! Visit this year's wiki page to see who's coming.

13 Jan 2023 6:00am GMT

30 Dec 2022

feedPlanet Gentoo

Src_Snapshot

Prototype

Recently, while browsing the Alpine git repo, I noticed they have a function called snapshot, see: https://git.alpinelinux.org/aports/tree/testing/dart/APKBUILD#n45 - I am not 100% sure how it works, but a wild guess is that developers can run that function to fetch the sources and maybe later upload them to the Alpine repo or some sort of (cloud?) storage.

In Portage there exists a pkg_config function used to run miscellaneous configuration for packages. The only major difference between src_snapshot and that would of course be that users would never run snapshot.
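
Purely as a thought experiment, an ebuild-side equivalent could look something like this (no such phase exists in Portage today, so this is entirely hypothetical):

src_snapshot() {
    # network access would have to be permitted during this phase
    wget -O "${DISTDIR}/${P}.tar.gz" "https://example.org/${P}.tar.gz" || die
}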

Sandbox

Probably only the network sandbox would have to be lifted out… to fetch the sources of course.

But also a few (at least one?) special directories and variables would be useful.

30 Dec 2022 2:03am GMT

18 Sep 2022

feedPlanet Openmoko

Harald "LaF0rge" Welte: Deployment of future community TDMoIP hub

I've mentioned some of my various retronetworking projects in some past blog posts. One of those projects is Osmocom Community TDM over IP (OCTOI). During the past 5 or so months, we have been using a number of GPS-synchronized open source icE1usb devices interconnected by a new, efficient but still transparent TDMoIP protocol in order to run a distributed TDM/PDH network. This network is currently only used to provide ISDN services to retronetworking enthusiasts, but other uses like frame relay have also been validated.

So far, the central hub of this OCTOI network has been operating in the basement of my home, behind a consumer-grade DOCSIS cable modem connection. Given that TDMoIP is relatively sensitive to packet loss, this has been sub-optimal.

Luckily some of my old friends at noris.net have agreed to host a new OCTOI hub free of charge in one of their ultra-reliable co-location data centres. I've already been hosting some other machines there for 20+ years, and noris.net is a good fit given that they were - in their early days as an ISP - the driving force in the early 90s behind one of the Linux kernel ISDN stacks called u-isdn. So after many decades, ISDN returns to them in a very different way.

Side note: In case you're curious, a reconstructed partial release history of the u-isdn code can be found on gitea.osmocom.org

But I digress. So today, there was the installation of this new OCTOI hub setup. It had been prepared for several weeks in advance, and the hub contains two circuit boards designed solely for this use case. The most difficult challenge was the fact that this data centre has no existing GPS RF distribution, and the rack is ~ 100m of CAT5 cable (no fiber!) away from the roof. So we faced the challenge of passing the 1PPS (1 pulse per second) signal reliably through several stages of lightning/over-voltage protection into the icE1usb, whose internal GPS-DO serves as a grandmaster clock for the TDM network.

The equipment deployed in this installation currently contains:

For more details, see this wiki page and this ticket

Now that the physical deployment has been made, the next steps will be to migrate all the TDMoIP links from the existing user base over to the new hub. We hope the reliability and performance will be much better than behind DOCSIS.

In any case, this new setup for sure has a lot of capacity to connect many more more users to this network. At this point we can still only offer E1 PRI interfaces. I expect that at some point during the coming winter the project for remote TDMoIP BRI (S/T, S0-Bus) connectivity will become available.

Acknowledgements

I'd like to thank anyone helping this effort, specifically:

  • Sylvain "tnt" Munaut for his work on the RS422 interface board (+ gateware/firmware)
  • noris.net for sponsoring the co-location
  • sysmocom for sponsoring the EPYC server hardware

18 Sep 2022 10:00pm GMT

08 Sep 2022

feedPlanet Openmoko

Harald "LaF0rge" Welte: Progress on the ITU-T V5 access network front

Almost one year after my post regarding first steps towards a V5 implementation, some friends and I were finally able to visit Wobcom, a small German city carrier and pick up a lot of decommissioned POTS/ISDN/PDH/SDH equipment, primarily V5 access networks.

This means that a number of retronetworking enthusiasts now have a chance to play with Siemens Fastlink, Nokia EKSOS and DeTeWe ALIAN access networks/multiplexers.

My primary interest is in Nokia EKSOS, which looks like a rather easy, low-complexity target. As one of the first steps, I took PCB photographs of the various modules/cards in the shelf, took note of the main chip designations and started to search for the related data sheets.

The results can be found in the Osmocom retronetworking wiki, with https://osmocom.org/projects/retronetworking/wiki/Nokia_EKSOS being the main entry page, and sub-pages about

In short: Unsurprisingly, a lot of Infineon analog and digital ICs for the POTS and ISDN ports, as well as a number of Motorola M68k based QUICC32 microprocessors and several unknown ASICs.

So with V5 hardware at my disposal, I've slowly re-started my efforts to implement the LE (local exchange) side of the V5 protocol stack, with the goal of eventually being able to interface those V5 AN with the Osmocom Community TDM over IP network. Once that is in place, we should also be able to offer real ISDN Uk0 (BRI) and POTS lines at retrocomputing events or hacker camps in the coming years.

08 Sep 2022 10:00pm GMT

Harald "LaF0rge" Welte: Clock sync trouble with Digium cards and timing cables

If you have ever worked with Digium (now part of Sangoma) digital telephony interface cards such as the TE110/410/420/820 (single to octal E1/T1/J1 PRI cards), you will probably have seen that they always have a timing connector, where the timing information can be passed from one card to another.

In PDH/ISDN (or even SDH) networks, it is very important to have a synchronized clock across the network. If the clocks are drifting, there will be underruns or overruns, with associated phase jumps that are particularly dangerous when analog modem calls are transported.

In traditional ISDN use cases, the clock is always provided by the network operator, and any customer/user side equipment is expected to synchronize to that clock.

So this Digium timing cable is needed in applications where you have more PRI lines than possible with one card, but only a subset of your lines (spans) are connected to the public operator. The timing cable should make sure that the clock received on one port from the public operator should be used as transmit bit-clock on all of the other ports, no matter on which card.

Unfortunately this decades-old Digium timing cable approach seems to suffer from some problems.

bursty bit clock changes until link is up

The first problem is that the downstream port transmit bit clock was jumping around in bursts every two seconds or so. You can see an oscillogram of the E1 master signal (yellow) received by one TE820 card and the transmit of the slave ports on the other card at https://people.osmocom.org/laforge/photos/te820_timingcable_problem.mp4

As you can see, for some seconds the two clocks seem to be in perfect lock/sync, but in between there are periods of immense clock drift.

What I'd have expected is the behavior that can be seen at https://people.osmocom.org/laforge/photos/te820_notimingcable_loopback.mp4 - which shows a similar setup but without the use of a timing cable: Both the master clock input and the clock output were connected on the same TE820 card.

As I found out much later, this problem only occurs until any of the downstream/slave ports is fully OK/GREEN.

This is surprising, as any other E1 equipment I've seen always transmits at a constant bit clock irrespective whether there's any signal in the opposite direction, and irrespective of whether any other ports are up/aligned or not.

But ok, once you adjust your expectations to this Digium peculiarity, you can actually proceed.

clock drift between master and slave cards

Once any of the spans of a slave card on the timing bus are fully aligned, the transmit bit clocks of all of its ports appear to be in sync/lock - yay - but unfortunately only at the very first glance.

When looking at it for more than a few seconds, one can see a slow, continuous drift of the slave bit clocks compared to the master :(

Some initial measurements show that the clock of the slave card on the timing cable is drifting at about 12.5 ppb (parts per billion) when compared against the master clock reference - at E1's 2.048 Mbit/s that works out to roughly one bit of slip every 40 seconds, or a full 125 µs frame every few hours.

This is rather disappointing, given that the whole point of a timing cable is to ensure you have one reference clock with all signals locked to it.

The work-around

If you are willing to sacrifice one port (span) of each card, you can work around that slow-clock-drift issue by connecting an external loopback cable. So the master card is configured to use the clock provided by the upstream provider. Its other ports (spans) will transmit at the exact recovered clock rate with no drift. You can use any of those ports to provide the clock reference to a port on the slave card using an external loopback cable.

In this setup, your slave card[s] will have perfect bit clock sync/lock.

It's just rather sad that you need to sacrifice ports just to achieve proper clock sync - something that the timing connectors and cables claim to do, but in reality don't, at least not in my setup with the most modern and high-end octal-port PCIe cards (TE820).

08 Sep 2022 10:00pm GMT