25 Oct 2025

TalkAndroid

Boba Story Lid Recipes – 2025

Look no further for all the latest Boba Story Lid Recipes. They are all right here!

25 Oct 2025 6:36pm GMT

Dice Dreams Free Rolls – Updated Daily

Get the latest Dice Dreams free rolls links, updated daily! Complete with a guide on how to redeem the links.

25 Oct 2025 6:35pm GMT

This PS game looks like Drive with Ryan Gosling and it’s getting rave reviews

If you've ever wished you could step into Ryan Gosling's scorpion jacket from Drive, the PlayStation Plus library…

25 Oct 2025 3:30pm GMT

24 Oct 2025

Android Developers Blog

5 things you need to know about publishing and distributing your app for Android XR

Posted by Jan Kleinert, Android Developer Relations Engineer

Samsung Galaxy XR is here, powered by Android XR! This blog post is part of our Android XR Spotlight Week, where we provide resources (blog posts, videos, sample code, and more), all designed to help you learn, build, and prepare your apps for Android XR.


Today, we're focusing on one of the last steps in your development journey: ensuring these experiences successfully reach your users. Publishing correctly ensures your app is packaged efficiently, discovered by the right devices, and presented in the best possible light.

Here are 5 things you need to know about publishing and distributing your app for Android XR on Google Play.

1. Uphold quality with the Android XR app quality guidelines

One of the most important steps before publishing is ensuring your app delivers a safe, comfortable, and performant user experience.

Following the Android XR App Quality Guidelines helps ensure that your app provides users with a great experience on devices like the Galaxy XR.

Why quality matters

These guidelines build upon the large screen app quality guidelines, and focus on critical XR-specific criteria including:

  • Safety and comfort: This is paramount. These guidelines help you avoid causing motion sickness by setting standards for camera movement and frame rates, and by limiting visual elements like strobing.

  • Performance: Your app must hit performance metrics, such as target frame rates, to prevent lag and ensure a fluid, comfortable experience.

  • Interaction: The guidelines specify recommended minimum sizes for interactive targets (e.g., 48dp minimum, 56dp recommended) to work well with eye-tracking and hand-tracking inputs.
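To make the interaction-target guidance concrete, here is a minimal Jetpack Compose sketch that keeps a button at or above the recommended size. The composable name and the use of Material 3 are illustrative assumptions, not part of the guidelines; only the 48dp/56dp figures come from the criteria above.

Kotlin

import androidx.compose.foundation.layout.defaultMinSize
import androidx.compose.material3.Button
import androidx.compose.material3.Text
import androidx.compose.runtime.Composable
import androidx.compose.ui.Modifier
import androidx.compose.ui.unit.dp

@Composable
fun GrabHandleButton(onGrab: () -> Unit) {
    // Keep the interactive target at or above the recommended 56dp
    // (48dp is the minimum the XR quality guidelines allow).
    Button(
        onClick = onGrab,
        modifier = Modifier.defaultMinSize(minWidth = 56.dp, minHeight = 56.dp)
    ) {
        Text("Grab")
    }
}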


2. Configure your app manifest correctly

The AndroidManifest.xml file describes important information about your app. The Android build tools, Android system, and Google Play use this information to know what kind of experience you've built and which hardware features it requires. Proper configuration is vital for correct device targeting and app launch.

Specify which Android XR SDK your app uses

In your app manifest, include android.software.xr.api.spatial or android.software.xr.api.openxr to indicate whether you're building with the Jetpack XR SDK or with OpenXR or Unity.

  • Jetpack XR SDK: android.software.xr.api.spatial

  • OpenXR or Unity: android.software.xr.api.openxr

If your app is built with OpenXR or Unity, you must set the android:required attribute to true. For apps built with the Jetpack XR SDK, set android:required to true if your app is published to the Android XR dedicated release track, and to false if it is published to the mobile release track.
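For example, a Jetpack XR SDK app published to the mobile release track would declare the feature as shown below; this is a minimal sketch, and you would flip android:required to true for the dedicated Android XR release track, per the rules above.

XML

<!-- In AndroidManifest.xml, as a direct child of <manifest> -->
<uses-feature
    android:name="android.software.xr.api.spatial"
    android:required="false" />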

Set the activity start mode

Use the android.window.PROPERTY_XR_ACTIVITY_START_MODE property on your main activity to define the default user environment:

  • XR_ACTIVITY_START_MODE_HOME_SPACE (Jetpack XR SDK): launches your app in Home Space, the shared multitasking environment.

  • XR_ACTIVITY_START_MODE_FULL_SPACE_MANAGED (Jetpack XR SDK): launches in Full Space, a full-immersion, single-app environment.

  • XR_ACTIVITY_START_MODE_FULL_SPACE_UNMANAGED (OpenXR or Unity): launches in Full Space, a full-immersion, single-app environment. Apps built with OpenXR or Unity always run in Full Space.
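As a sketch, the start mode is declared as a property on your main activity in the manifest. The exact value string below simply mirrors the constant name and is an assumption; verify it against the Android XR documentation for your SDK.

XML

<activity android:name=".MainActivity">
    <!-- Launch this activity in Home Space by default (assumed value string). -->
    <property
        android:name="android.window.PROPERTY_XR_ACTIVITY_START_MODE"
        android:value="XR_ACTIVITY_START_MODE_HOME_SPACE" />
</activity>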


Check for optional hardware features at runtime

Avoid setting optional XR features (like hand tracking or controllers) to android:required="true" unless they are truly required for your app. If a device doesn't support a required feature, Google Play will hide your app from that device. If you have features set as required but your app could operate without them, then you could unnecessarily limit your audience.

Instead, check for advanced features dynamically at runtime using the PackageManager class with hasSystemFeature():

Kotlin

val hasHandTracking = packageManager.hasSystemFeature("android.hardware.xr.input.hand_tracking")

if (hasHandTracking) {
    // Enable high-fidelity hand tracking features
} else {
    // Provide a fallback experience
}

This ensures your app is broadly compatible and leverages advanced features when they're available.


3. Use Play Asset Delivery (PAD) to deliver large assets

Immersive apps and games often contain large assets that can exceed the standard size limits. Use Play Asset Delivery (PAD) to manage large, high-fidelity assets. PAD offers flexible delivery modes: install-time, fast-follow, and on-demand for progressive download of content. Apps built for Android XR are also allowed to deliver more asset-pack data: instead of a cumulative total of 4 GB for asset packs delivered on demand or fast-follow, these apps are afforded a higher cumulative total of 30 GB.
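As a rough sketch of what on-demand delivery looks like at runtime (assuming the Play Asset Delivery library is on the classpath and a hypothetical asset pack named xr_environments), the flow is roughly:

Kotlin

import android.content.Context
import com.google.android.play.core.assetpacks.AssetPackManagerFactory

fun fetchXrEnvironments(context: Context) {
    val assetPackManager = AssetPackManagerFactory.getInstance(context)

    // Start (or resume) downloading the on-demand pack.
    assetPackManager.fetch(listOf("xr_environments"))

    // Once the pack is downloaded, resolve where its assets live on disk.
    val location = assetPackManager.getPackLocation("xr_environments")
    val assetsPath = location?.assetsPath()
    // Load the high-fidelity assets from assetsPath once it is non-null.
}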

For developers building with Unity, use Unity Addressables along with Play Asset Delivery to manage asset packs.


4. Showcase your app with spatial video previews

To capture the attention of users browsing the Play Store on their XR headsets, you can provide an immersive preview of your app using a spatial video asset. This must be a 180°, 360°, or stereoscopic video. On Android XR devices, the Play Store will automatically display this as an immersive 3D preview, allowing users to experience the depth and scale of your content before they install the app.


5. Choose your Google Play release track

Google Play provides two pathways for publishing your Android XR app, both using the same Play Console account:

Option A: Continue on the mobile release track (for spatialized mobile apps)

If you are adding spatial XR features to an existing mobile app, you can often bundle the XR features or content into your existing Android App Bundle (AAB).

This approach is ideal if your app maintains most of its core functionality across both mobile and XR devices, and you can continue publishing the same AAB to the mobile track. Review this guidance to be sure you are properly configuring your app's manifest file to support this use case.

Option B: Publish to the dedicated Android XR release track

If you are building a brand-new app for XR or if the XR version is functionally too different for a single AAB, you should publish to the Android XR dedicated release track.

Apps published to the Android XR dedicated release track are only visible to Android XR devices that support the android.software.xr.api.spatial feature or the android.software.xr.api.openxr feature, giving you control over distribution.

By following this guidance, you can help ensure your innovative Android XR apps provide a quality user experience, are packaged efficiently, are delivered smoothly using PAD, and are targeted to the devices that can run them. Happy publishing!

24 Oct 2025 4:00pm GMT

23 Oct 2025

Android Developers Blog

Set a reminder: Tune in on October 30 for our Fall episode of The Android Show, live from Droidcon London

Posted by The Android Team




In just a few days, on Thursday, October 30th at 10AM PT, we'll be dropping our Fall episode of The Android Show on YouTube and on developer.android.com! This time, we'll be live from Droidcon London, where we'll be unpacking some of the latest agentic experiences for Gemini in Android Studio designed to help you be more productive, plus doing live demos of Jetpack Compose and more. And with the recent launch of Galaxy XR, we'll be diving into the world of Android XR, plus how building adaptive apps lets you easily extend to XR devices as well as foldables, tablets, and large screens.



Get your #AskAndroid questions answered live!

We've assembled a team of experts from across Android to answer your #AskAndroid questions about building excellent apps across devices, live from London. You can start sharing your questions now using #AskAndroid, and tune in to see if they're answered live on the show!


The Android Show is your conversation with the Android developer community, and this episode will be co-hosted by Rebecca Gutteridge and Adetunji Dahunsi. You'll hear the latest from the developers and engineers who build Android. Don't forget to tune in on October 30 at 10AM PT, live on YouTube and on developer.android.com/events/show!

23 Oct 2025 6:50pm GMT

Getting started with Unity and Android XR

Posted by Luke Hopkins - Developer Relations Engineer

Samsung Galaxy XR is here, powered by Android XR! This blog post is part of our Android XR Spotlight Week, where we provide resources (blog posts, videos, sample code, and more), all designed to help you learn, build, and prepare your apps for Android XR.




There's never been a better time to get into XR development. Last December, we announced Android XR, Google's new Android platform built on open standards such as OpenXR and Vulkan, which makes XR development more accessible than it's ever been.


Combined with Unity's existing XR tools, it gives you a powerful and mature development stack, making it possible to create and deploy XR apps that work across multiple devices.



Whether or not you've done XR development before, we want to help you get started.

This blog will get you up and running with Android XR and Unity development. We'll focus on the practical steps to configure your environment, understand the package ecosystem, and start building.

By the end of this blog, you'll have a good understanding of:

  • The package ecosystem

  • Essential setup steps

  • Input methods

  • Privacy and permissions

  • Composition layers

Unity for Android XR development


You might choose Unity for its cross-platform compatibility, allowing you to build once and deploy to Android XR and other XR devices.

When using Unity, you benefit from its mature XR ecosystem and tooling. It already has established packages such as XR Interaction Toolkit, OpenXR plugin, XR composition layers, XR Hands, an extensive asset store full of XR-ready components and templates, and XR simulation and testing tools. And since Unity 6 was released last November, you'll also benefit from its improved Universal Render Pipeline (URP) performance, better Vulkan graphics support, and enhanced build profiles.

Here are some sample projects to get an idea of what can be done:

Essential setup: your development foundation

Unity 6 requirements and installation

You'll need Unity 6 to create your app, as earlier versions don't support Android XR. Install Unity Hub first, then Unity 6 with the Android Build Support module, following these steps.

Android XR build profiles: simplifying configuration

Unity build profiles are project assets that store your platform-specific settings and configurations. So instead of needing to manually set up 15-20 different settings across multiple menus, you can use a build profile to do this automatically.
You can create your own build profiles, but for now we recommend using the dedicated Android XR build profile we created.
You can select your build profile by selecting File > Build Profile from your Unity project. For full instructions, see the Develop for Android XR workflow page.
If you make any changes of your own, you can then create a new build profile to share with your team. This way you ensure a consistent build experience across the board.


After these steps you can build and run your APK for Android XR devices.

Graphics API: why Vulkan matters

Once you have your Unity project set up with an Android XR build profile, we first recommend making sure you have Vulkan set as your graphics API. Android XR is built as a Vulkan-first platform. In March 2025, Google announced that Vulkan is now the official graphics API for Android. It's a modern, low-level graphics API that helps developers maximize the performance of modern GPUs and unlocks advanced features like ray-tracing and multithreading for realistic and immersive gaming visuals.

These standards provide the best compatibility for your existing applications and ease the issues and costs of porting. They also make it possible to enable advanced Android XR features such as URP Application Space Warp and foveated rendering.

Unity 6 handles Vulkan automatically, so when you use the Android XR build profile, Unity will configure Vulkan as your graphics API. This ensures you get access to all the advanced Android XR features without any manual configuration.

You can verify your graphics API settings by going to 'Edit' > 'Project Settings' > 'Player' > 'Android' tab > 'Other Settings' > 'Graphics APIs'.

Understanding the package ecosystem

There are two different packages you can use for Android XR in Unity: the Android XR Extensions for Unity, and the Unity OpenXR: Android XR package.

These may sound like the same thing, but bear with me.

The Unity OpenXR: Android XR package is the official Unity package for Android XR support. It provides the majority of Android XR features, made available through OpenXR standards. It also enables AR Foundation integration for mixed reality features. The primary benefit of using the Unity OpenXR: Android XR package is that it offers a unified API for supporting XR devices.

The Android XR Extensions for Unity, by contrast, is Google's XR package, designed specifically for developing for Android XR devices. It supplements the Unity OpenXR package with additional features such as environment blend modes, scene meshing, image tracking, and body tracking. The tradeoff is that you can only develop for Android XR devices.

Which one you choose will depend on your specific needs, but we generally recommend starting with the Unity OpenXR: Android XR package, as it gives you far more flexibility in which devices your app can support. Based on your application's requirements, you can then add the Android XR Extensions for Unity.

How to install packages

To add a new package, with your project open in Unity, select 'Window' > 'Package Management' > 'Package Manager'.

From here you can install these packages from the 'Unity Registry' tab:

You can install the Android XR Extensions for Unity package via GitHub by selecting the ➕ icon, selecting 'Install package from git URL', then entering 'https://github.com/android/android-xr-unity-package.git'.

Required OpenXR features

Now that you have the packages you need installed, let's enable some core features to get our project working.

You can enable the OpenXR setting for Android under 'Edit' > 'Project Settings' > 'XR Plug-in Management': click the Android tab and enable OpenXR.


Next, we need to enable 'Android XR support'. We'll cover other OpenXR features as we need them; for now, we just need Android XR support to be enabled.

Input

Android XR supports input from hands, voice, eye tracking, keyboard, and controllers. We recommend installing the XR Interaction Toolkit and XR Hands packages, as these contain the best prefabs for getting started. By using these prefabs, you'll have everything you need to support hands and controllers in your app.

Once the XR Hands and XR Interaction Toolkit packages are both installed, I recommend importing the Starter Assets and Hands Interaction Demo. Then you need to enable the Hand Interaction and Khronos Simple Controller profiles, and turn on the Hand Tracking Subsystem and Meta Hand Tracking Aim features.

You can edit these settings by going to 'Edit' > 'Project Settings' > 'XR Plug-in Management' > 'OpenXR'.

We'd also recommend Unity's XR Origin prefab, which represents the user's position and orientation in XR space. It contains the camera rig and tracking components needed to render your XR experience from the correct viewpoint.

The simplest way to add this prefab is to import it from the Hands Interaction Demo we imported earlier; it can be found under 'Hands Integration Toolkit' > 'Hand Interaction' > 'Prefabs' > 'XR Origin'.

I recommend using this prefab over the 'XR Origin' option in your game objects, as it uses the XR Input Modality Manager, which automatically switches between the user's hands and controllers. This will give you the best results when switching between hands and controllers.

Privacy and permissions: building user trust

Whatever you build, you'll need to request runtime permissions from your users. That's because scene understanding, eye tracking, face tracking, and hand tracking provide access to data that may be more sensitive to the user.

These capabilities expose deeper personal information than traditional desktop or mobile apps, so runtime permissions ensure your users have full control over what data they choose to share. To keep in line with Android's security and privacy policies, Android XR has a permission for each of these features.

For example, if you use the XR Hands package for custom hand gestures, you will need to request the hand tracking permission (see below), as this package needs to track a lot of information about the user's hands. This includes things like hand joint poses and angular and linear velocities.

Note: For a full list of extensions that require permissions, check out information on the XR developer website.

// Requires: using UnityEngine.Android; (for Permission and PermissionCallbacks)
// inside a MonoBehaviour attached to a scene object.
const string k_Permission = "android.permission.HAND_TRACKING";

#if UNITY_ANDROID
void Start()
{
    if (!Permission.HasUserAuthorizedPermission(k_Permission))
    {
        var callbacks = new PermissionCallbacks();
        callbacks.PermissionDenied += OnPermissionDenied;
        callbacks.PermissionGranted += OnPermissionGranted;

        Permission.RequestUserPermission(k_Permission, callbacks);
    }
}

void OnPermissionDenied(string permission)
{
    // handle denied permission
}

void OnPermissionGranted(string permission)
{
    // handle granted permission
}
#endif // UNITY_ANDROID


Enhancing visual quality with composition layers

Composition layers are the recommended way to render UI elements. They make it possible to display elements at a much higher quality than Unity's standard rendering pipeline, because everything is rendered directly to the platform's compositor.

For example, if you're displaying text, the standard Unity rendering is more likely to have blurry text, soft edges, and visual artifacts. Whereas with composition layers, the text will be clearer, the outlines will be sharper, and the experience will be better overall.

As well as text, composition layers render video, images, and UI elements at a much higher quality, by utilising native support for the runtime's compositor layers.
To turn on composition layers, open the Package Manager, select the 'Unity Registry' tab, then install 'XR Composition Layers'.


Build and Run

Now that you have your OpenXR packages installed, features enabled, and a prefab set up for hand and head movement, you can build your scene and deploy it directly to your headset for testing.

What's next: expanding your skills

Now that you've got your Android XR development environment set up and understand the key concepts, here are the next steps to continue your XR development journey:

Essential resources for continued learning:

Sample projects to explore:

23 Oct 2025 4:00pm GMT

15 Oct 2025

Planet Maemo

Dzzee 1.9.0 for N800/N810/N900/N9/Leste

I was playing around with Xlib this summer, and one thing led to another, and here we are with four fresh ports to retro mobile X11 platforms. There is even a Maemo Leste port, but due to some SGX driver woes on the N900, I opted for using XSHM and software rendering, which works well and has the nice, crisp pixel look (on Fremantle, it's using EGL+GLESv2). Even the N8x0 port has very fluid motion by utilizing Xv for blitting software-rendered pixels to the screen. The game is available over at itch.io.






15 Oct 2025 11:31am GMT

05 Jun 2025

Planet Maemo

Mobile blogging, the past and the future

This blog has been running more or less continuously since the mid-nineties. The site has existed in multiple forms, and with different ways to publish. But what's common is that at almost all points there was a mechanism to publish while on the move.

Psion, documents over FTP

In the early 2000s we were into adventure motorcycling. To be able to share our adventures, we implemented a way to publish blogs while on the go. The device that enabled this was the Psion Series 5, a handheld computer that was very much a device ahead of its time.

Psion S5, also known as the Ancestor

The Psion had a reasonably sized keyboard, a good native word processing app, and battery life good for weeks of use. Writing while underway was easy. The Psion could use a mobile phone as a modem over an infrared connection, and with that we could upload the documents to a server over FTP.

Server-side, a cron job would grab the new documents, converting them to HTML and adding them to our CMS.

In the early days of GPRS, getting this to work while roaming was quite tricky. But the system served us well for years.

If we wanted to include photos in the stories, we'd have to find an Internet cafe.

SMS and MMS

For an even more mobile setup, I implemented an SMS-based blogging system. We had an old phone connected to a computer back in the office, and I could write to my blog by simply sending a text. These would automatically end up as a new paragraph in the latest post. If I started the text with NEWPOST, an empty blog post would be created with the rest of that message's text as the title.

As I got into neogeography, I could also send a NEWPOSITION message. This would update my position on the map, connecting weather metadata to the posts.

As camera phones became available, we wanted to do pictures too. For the Death Monkey rally where we rode minimotorcycles from Helsinki to Gibraltar, we implemented an MMS-based system. With that the entries could include both text and pictures. But for that you needed a gateway, which was really only realistic for an event with sponsors.

Photos over email

A much easier setup than MMS was a partial return to the old Psion approach: instead of word-processor documents, we sent email with picture attachments. This was something the new breed of (pre-iPhone) smartphones was capable of. And by now the roaming question was mostly sorted.

And so my blog included a new "moblog" section. This is where I could share my daily activities as poor-quality pictures. Sort of how people would use Instagram a few years later.

My blog from that era

Pause

Then there was sort of a long pause in mobile blogging advancements. Modern smartphones, data roaming, and WiFi hotspots had become ubiquitous.

In the meantime the blog also got migrated to a Jekyll-based system hosted on AWS. That means the old Midgard-based integrations were off the table.

And I traveled off-the-grid rarely enough that it didn't make sense to develop a system.

But now that we're sailing offshore, that has changed. Time for new systems and new ideas. Or maybe just a rehash of the old ones?

Starlink, Internet from Outer Space

Most cruising boats, ours included, now run the Starlink satellite broadband system. This enables full Internet, even in the middle of an ocean, even video calls! With this, we can use normal blogging tools. The usual one for us is GitJournal, which makes it easy to write Jekyll-style Markdown posts and push them to GitHub.

However, Starlink is a complicated, energy-hungry, and fragile system on an offshore boat. The policies might change at any time, preventing our way of using it, and the dishy itself, or the way we power it, may fail.

But despite what you'd think, even on a nerdy boat like ours, loss of Internet connectivity is not an emergency. And this is where the old-style mobile blogging mechanisms come handy.

Inreach, texting with the cloud

Our backup system to Starlink is the Garmin Inreach. This is a tiny battery-powered device that connects to the Iridium satellite constellation. It allows tracking as well as basic text messaging.

When we head offshore we always enable tracking on the Inreach. This allows both our blog and our friends ashore to follow our progress.

I also made a simple integration where text updates sent to Garmin MapShare get fetched and published on our blog. Right now this is just plain text-based entries, but one could easily implement a command system similar to what I had over SMS back in the day.

One benefit of the Inreach is that we can also take it with us when we go on land adventures. And it'd even enable rudimentary communications if we found ourselves in a liferaft.

Sailmail and email over HF radio

The other potential backup for Starlink failures would be to go seriously old-school. It is possible to get email access via an SSB radio and a Pactor (or Vara) modem.

Our boat is already equipped with an isolated aft stay that can be used as an antenna. And with the popularity of Starlink, many cruisers are offloading their old HF radios.

Licensing-wise this system could be used either as a marine HF radio (requiring a Long Range Certificate), or amateur radio. So that part is something I need to work on. Thankfully post-COVID, radio amateur license exams can be done online.

With this setup we could send and receive text-based email. The Airmail application used for this can even do some automatic templating for position reports. We'd then need a mailbox that can receive these mails, and some automation to fetch and publish.


05 Jun 2025 12:00am GMT

16 Oct 2024

Planet Maemo

Adding buffering hysteresis to the WebKit GStreamer video player

The <video> element implementation in WebKit does its job by using a multiplatform player that relies on a platform-specific implementation. In the specific case of glib platforms, which base their multimedia on GStreamer, that's MediaPlayerPrivateGStreamer.

WebKit GStreamer regular playback class diagram

The player private can have 3 buffering modes:

The current implementation (actually, its wpe-2.38 version) was showing some buffering problems on some Broadcom platforms when doing in-memory buffering. The buffering levels monitored by MediaPlayerPrivateGStreamer weren't accurate because the Nexus multimedia subsystem used on Broadcom platforms was doing its own internal buffering. Data wasn't being accumulated in the GstQueue2 element of playbin, because BrcmAudFilter/BrcmVidFilter was accepting all the buffers that the queue could provide. Because of that, the player private buffering logic was erratic, leading to many transitions between "buffer completely empty" and "buffer completely full". This, in turn, caused many transitions between the HaveEnoughData, HaveFutureData and HaveCurrentData readyStates in the player, leading to frequent pauses and unpauses on Broadcom platforms.

So, one of the first things I tried in order to solve this issue was to ask the Nexus PlayPump (the subsystem in charge of internal buffering in Nexus) about its internal levels, and add that to the levels reported by GstQueue2. There's also a GstMultiqueue in the pipeline that can hold a significant amount of buffers, so I also asked it for its level. Still, the buffering level instability was too high, so I added a moving average implementation to try to smooth it.

All these tweaks only make sense on Broadcom platforms, so they were guarded by ifdefs in a first version of the patch. Later, I migrated those dirty ifdefs to the new quirks abstraction added by Phil. A challenge of this migration was that I needed to store some attributes that were considered part of MediaPlayerPrivateGStreamer before. They still had to be somehow linked to the player private but only accessible by the platform specific code of the quirks. A special HashMap attribute stores those quirks attributes in an opaque way, so that only the specific quirk they belong to knows how to interpret them (using downcasting). I tried to use move semantics when storing the data, but was bitten by object slicing when trying to move instances of the superclass. In the end, moving the responsibility of creating the unique_ptr that stored the concrete subclass to the caller did the trick.

Even with all those changes, undesirable swings in the buffering level kept happening. When doing a careful analysis of the causes, I noticed that the buffering level was being monitored from different places (at different moments), and sometimes the level was regarded as "enough" and, the moment right after, as "insufficient". This was because the buffering level threshold was a single value. That's something that a hysteresis mechanism (with low and high watermarks) can solve: a logical level change to "full" only happens when the level goes above the high watermark, and a logical level change to "low" only when it goes under the low watermark.
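To make the watermark idea concrete, here is a minimal, language-agnostic sketch of the hysteresis logic (written in Kotlin purely for illustration; it is not the actual WebKit C++ code, and the watermark values are made up):

Kotlin

// The logical buffering state only flips when the level crosses the high
// watermark (going up) or the low watermark (going down); levels in between
// keep the previous state, which suppresses flip-flopping around a single threshold.
class BufferingHysteresis(
    private val lowWatermark: Int = 20,   // percent, illustrative value
    private val highWatermark: Int = 80   // percent, illustrative value
) {
    var isFull = false
        private set

    fun update(levelPercent: Int): Boolean {
        if (!isFull && levelPercent >= highWatermark) {
            isFull = true
        } else if (isFull && levelPercent <= lowWatermark) {
            isFull = false
        }
        return isFull
    }
}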

For the threshold change detection to work, we need to know the previous buffering level. There's a problem, though: the current code checked the levels from several scattered places, so only one of those places (the first one that detected the threshold crossing at a given moment) would properly react. The other places would miss the detection and operate improperly, because the "previous buffering level value" had been overwritten with the new one when the evaluation had been done before. To solve this, I centralized the detection in a single place "per cycle" (in updateBufferingStatus()), and then used the detection conclusions from updateStates().

So, with all this in mind, I refactored the buffering logic as https://commits.webkit.org/284072@main, so now WebKit GStreamer has buffering code that is much more robust than before. The instabilities observed on Broadcom devices were gone and I could, at last, close Issue 1309.


16 Oct 2024 6:12am GMT

18 Sep 2022

Planet Openmoko

Harald "LaF0rge" Welte: Deployment of future community TDMoIP hub

I've mentioned some of my various retronetworking projects in some past blog posts. One of those projects is Osmocom Community TDM over IP (OCTOI). During the past 5 or so months, we have been using a number of GPS-synchronized open source icE1usb devices interconnected by a new, efficient but still transparent TDMoIP protocol in order to run a distributed TDM/PDH network. This network is currently only used to provide ISDN services to retronetworking enthusiasts, but other uses like frame relay have also been validated.

So far, the central hub of this OCTOI network has been operating in the basement of my home, behind a consumer-grade DOCSIS cable modem connection. Given that TDMoIP is relatively sensitive to packet loss, this has been sub-optimal.

Luckily some of my old friends at noris.net have agreed to host a new OCTOI hub free of charge in one of their ultra-reliable co-location data centres. I've already been hosting some other machines there for 20+ years, and noris.net is a good fit given that they were, in their early days as an ISP, the driving force in the early 90s behind one of the Linux kernel ISDN stacks called u-isdn. So after many decades, ISDN returns to them in a very different way.

Side note: In case you're curious, a reconstructed partial release history of the u-isdn code can be found on gitea.osmocom.org

But I digress. So today, there was the installation of this new OCTOI hub setup. It has been prepared for several weeks in advance, and the hub contains two circuit boards designed entirely for this use case. The most difficult challenge was the fact that this data centre has no existing GPS RF distribution, and the rack is ~ 100m of CAT5 cable (no fiber!) away from the roof. So we faced the challenge of passing the 1PPS (1 pulse per second) signal reliably through several steps of lightning/over-voltage protection into the icE1usb whose internal GPS-DO serves as a grandmaster clock for the TDM network.

The equipment deployed in this installation currently contains:

For more details, see this wiki page and this ticket

Now that the physical deployment has been made, the next steps will be to migrate all the TDMoIP links from the existing user base over to the new hub. We hope the reliability and performance will be much better than behind DOCSIS.

In any case, this new setup for sure has a lot of capacity to connect many more users to this network. At this point we can still only offer E1 PRI interfaces. I expect that at some point during the coming winter the project for remote TDMoIP BRI (S/T, S0-Bus) connectivity will become available.

Acknowledgements

I'd like to thank anyone helping this effort, specifically:

  • Sylvain "tnt" Munaut for his work on the RS422 interface board (+ gateware/firmware)
  • noris.net for sponsoring the co-location
  • sysmocom for sponsoring the EPYC server hardware

18 Sep 2022 10:00pm GMT

08 Sep 2022

Planet Openmoko

Harald "LaF0rge" Welte: Progress on the ITU-T V5 access network front

Almost one year after my post regarding first steps towards a V5 implementation, some friends and I were finally able to visit Wobcom, a small German city carrier, and pick up a lot of decommissioned POTS/ISDN/PDH/SDH equipment, primarily V5 access networks.

This means that a number of retronetworking enthusiasts now have a chance to play with Siemens Fastlink, Nokia EKSOS and DeTeWe ALIAN access networks/multiplexers.

My primary interest is in Nokia EKSOS, which looks like a rather easy, low-complexity target. As one of the first steps, I took PCB photographs of the various modules/cards in the shelf, took note of the main chip designations, and started to search for the related data sheets.

The results can be found in the Osmocom retronetworking wiki, with https://osmocom.org/projects/retronetworking/wiki/Nokia_EKSOS being the main entry page, and sub-pages about

In short: Unsurprisingly, a lot of Infineon analog and digital ICs for the POTS and ISDN ports, as well as a number of Motorola M68k based QUICC32 microprocessors and several unknown ASICs.

So with V5 hardware at my disposal, I've slowly re-started my efforts to implement the LE (local exchange) side of the V5 protocol stack, with the goal of eventually being able to interface those V5 AN with the Osmocom Community TDM over IP network. Once that is in place, we should also be able to offer real ISDN Uk0 (BRI) and POTS lines at retrocomputing events or hacker camps in the coming years.

08 Sep 2022 10:00pm GMT

Harald "LaF0rge" Welte: Clock sync trouble with Digium cards and timing cables

If you have ever worked with Digium (now part of Sangoma) digital telephony interface cards such as the TE110/410/420/820 (single to octal E1/T1/J1 PRI cards), you will probably have seen that they always have a timing connector, where the timing information can be passed from one card to another.

In PDH/ISDN (or even SDH) networks, it is very important to have a synchronized clock across the network. If the clocks are drifting, there will be underruns or overruns, with associated phase jumps that are particularly dangerous when analog modem calls are transported.

In traditional ISDN use cases, the clock is always provided by the network operator, and any customer/user side equipment is expected to synchronize to that clock.

So this Digium timing cable is needed in applications where you have more PRI lines than possible with one card, but only a subset of your lines (spans) are connected to the public operator. The timing cable should make sure that the clock received on one port from the public operator should be used as transmit bit-clock on all of the other ports, no matter on which card.

Unfortunately this decades-old Digium timing cable approach seems to suffer from some problems.

bursty bit clock changes until link is up

The first problem is that the downstream port transmit bit clock was jumping around in bursts every two or so seconds. You can see an oscillogram of the E1 master signal (yellow) received by one TE820 card and the transmit of the slave ports on the other card at https://people.osmocom.org/laforge/photos/te820_timingcable_problem.mp4

As you can see, for some seconds the two clocks seem to be in perfect lock/sync, but in between there are periods of immense clock drift.

What I'd have expected is the behavior that can be seen at https://people.osmocom.org/laforge/photos/te820_notimingcable_loopback.mp4 - which shows a similar setup but without the use of a timing cable: Both the master clock input and the clock output were connected on the same TE820 card.

As I found out much later, this problem only occurs until any of the downstream/slave ports is fully OK/GREEN.

This is surprising, as any other E1 equipment I've seen always transmits at a constant bit clock irrespective whether there's any signal in the opposite direction, and irrespective of whether any other ports are up/aligned or not.

But ok, once you adjust your expectations to this Digium peculiarity, you can actually proceed.

clock drift between master and slave cards

Once any of the spans of a slave card on the timing bus are fully aligned, the transmit bit clocks of all of its ports appear to be in sync/lock - yay - but unfortunately only at the very first glance.

When looking at it for more than a few seconds, one can see a slow, continuous drift of the slave bit clocks compared to the master :(

Some initial measurements show that the clock of the slave card of the timing cable is drifting at about 12.5 ppb (parts per billion) when compared against the master clock reference.

This is rather disappointing, given that the whole point of a timing cable is to ensure you have one reference clock with all signals locked to it.

The work-around

If you are willing to sacrifice one port (span) of each card, you can work around that slow-clock-drift issue by connecting an external loopback cable. So the master card is configured to use the clock provided by the upstream provider. Its other ports (spans) will transmit at the exact recovered clock rate with no drift. You can use any of those ports to provide the clock reference to a port on the slave card using an external loopback cable.

In this setup, your slave card[s] will have perfect bit clock sync/lock.

It's just rather sad that you need to sacrifice ports just to achieve proper clock sync - something that the timing connectors and cables claim to do, but in reality don't achieve, at least not in my setup with the most modern and high-end octal-port PCIe cards (TE820).

08 Sep 2022 10:00pm GMT