27 Dec 2025

feedTalkAndroid

3 weeks in Japan: this Android app saved my trip more than Google Maps

Traveling abroad without any bearings can feel a bit like being dropped into a video game with absolutely…

27 Dec 2025 4:30pm GMT

What does the mysterious “N” icon mean on your Android—and should you worry?

Ever noticed a little "N" mysteriously lingering at the top of your Android screen and felt just a…

27 Dec 2025 7:30am GMT

Boba Story Lid Recipes – 2025

Look no further for all the latest Boba Story Lid Recipes. They are all right here!

27 Dec 2025 3:47am GMT

19 Dec 2025

feedAndroid Developers Blog

Media3 1.9.0 - What’s new

Posted by Kristina Simakova, Engineering Manager





Media3 1.9.0 is out! Besides the usual bug fixes and performance improvements, the latest release also contains four new or largely rewritten modules:
  • media3-inspector - Extract metadata and frames outside of playback

  • media3-ui-compose-material3 - Build a basic Material3 Compose Media UI in just a few steps

  • media3-cast - Automatically handle transitions between Cast and local playbacks

  • media3-decoder-av1 - Consistent AV1 playback with the rewritten extension decoder based on the dav1d library


We also added caching and memory management improvements to PreloadManager, and provided several new ExoPlayer, Transformer and MediaSession simplifications.


This release also gives you the first experimental access to CompositionPlayer to preview media edits.


Read on to find out more, and as always please check out the full release notes for a comprehensive overview of changes in this release.

Extract metadata and frames outside of playback

There are many cases where you want to inspect media without starting playback. For example, you might want to detect which formats a file contains, check its duration, or retrieve thumbnails.

The new media3-inspector module combines all utilities to inspect media without playback in one place:

  • MetadataRetriever to read duration, format and static metadata from a MediaItem.

  • FrameExtractor to get frames or thumbnails from an item.

  • MediaExtractorCompat as a direct replacement for the Android platform MediaExtractor class, to get detailed information about samples in the file.


MetadataRetriever and FrameExtractor follow a simple AutoCloseable pattern. Have a look at our new guide pages for more details.

suspend fun extractThumbnail(mediaItem: MediaItem) {
  FrameExtractor.Builder(context, mediaItem).build().use { frameExtractor ->
    val thumbnail = frameExtractor.getThumbnail().await()
  }
}
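
For metadata, the pattern looks the same. Here's a hedged companion sketch; the retrieveTrackGroups and retrieveDurationUs names are assumptions about the new instance API, so check the guide pages for the exact methods:

suspend fun inspectMetadata(mediaItem: MediaItem) {
  MetadataRetriever.Builder(context, mediaItem).build().use { retriever ->
    val trackGroups = retriever.retrieveTrackGroups().await() // formats and static metadata (assumed name)
    val durationUs = retriever.retrieveDurationUs().await()   // duration in microseconds (assumed name)
  }
}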

Build a basic Material3 Compose Media UI in just a few steps

In previous releases we started providing connector code between Compose UI elements and your Player instance. With Media3 1.9.0, we added a new module media3-ui-compose-material3 with fully-styled Material3 buttons and content elements. They allow you to build a media UI in just a few steps, while providing all the flexibility to customize style.

If you prefer to build your own UI style, you can use the building blocks that take care of all the update and connection logic, so you only need to concentrate on designing the UI element. Please check out our extended guide pages for the Compose UI modules.

We are also still working on even more Compose components, like a prebuilt seek bar, a complete out-of-the-box replacement for PlayerView, as well as subtitle and ad integration.


@Composable
fun SimplePlayerUI(player: Player, modifier: Modifier = Modifier) {
  Column(modifier) {
    ContentFrame(player)  // Video surface and shutter logic
    Row(Modifier.align(Alignment.CenterHorizontally)) {
      SeekBackButton(player)  // Simple controls
      PlayPauseButton(player)
      SeekForwardButton(player)
    }
  }
}

Simple Compose player UI with out-of-the-box elements

Automatically handle transitions between Cast and local playbacks

The CastPlayer in the media3-cast module has been rewritten to automatically handle transitions between local playback (for example with ExoPlayer) and remote Cast playback.

When you set up your MediaSession, simply build a CastPlayer around your ExoPlayer, add a MediaRouteButton to your UI, and you're done!


// MediaSession setup with CastPlayer
val exoPlayer = ExoPlayer.Builder(context).build()
val castPlayer = CastPlayer.Builder(context).setLocalPlayer(exoPlayer).build()
val session = MediaSession.Builder(context, castPlayer).build()

// MediaRouteButton in UI
@Composable
fun UIWithMediaRouteButton() {
  MediaRouteButton()
}


New CastPlayer integration in Media3 session demo app


Consistent AV1 playback with the rewritten extension based on dav1d

The 1.9.0 release contains a completely rewritten AV1 extension module based on the popular dav1d library.

As with all extension decoder modules, please note that it requires building from source to bundle the relevant native code correctly. Bundling a decoder provides consistency and format support across all devices, but because it runs the decoding in your process, it's best suited for content you can trust.

Integrate caching and memory management into PreloadManager

We made our PreloadManager even better as well. It already enabled you to preload media into memory outside of playback and then seamlessly hand it over to a player when needed. Although pretty performant, you still had to be careful not to exceed memory limits by accidentally preloading too much. So with Media3 1.9.0, we added two features that make this a lot easier and more stable:


  1. Caching support - When defining how far to preload, you can now choose PreloadStatus.specifiedRangeCached(0, 5000) as a target state for preloaded items. This will add the specified range to your cache on disk instead of loading the data into memory. With this, you can provide a much larger range of items for preloading, as items further away from the current one no longer need to occupy memory. Note that this requires setting a Cache in DefaultPreloadManager.Builder.

  2. Automatic memory management - We also updated our LoadControl interface to better handle the preload case, so you are now able to set an explicit upper memory limit for all preloaded items in memory. It's 144 MB by default, and you can configure the limit in DefaultLoadControl.Builder. The DefaultPreloadManager will automatically stop preloading once the limit is reached, and automatically release memory of lower-priority items if required (see the sketch after this list).
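
To make the two features concrete, here's a hedged sketch of a rank-based setup that keeps nearby items in memory and only disk-caches the rest. Apart from PreloadStatus.specifiedRangeCached, which is quoted above, the method and type names are assumptions to illustrate the shape of the API:

// Items within 2 positions of the current one stay in memory; farther items
// only go to the disk cache. Names other than specifiedRangeCached are assumed.
val control = TargetPreloadStatusControl { index: Int ->
  if (abs(index - currentIndex) <= 2) {
    PreloadStatus.specifiedRangeLoaded(0, 5000)  // assumed in-memory counterpart
  } else {
    PreloadStatus.specifiedRangeCached(0, 5000)  // disk cache only, frees memory
  }
}

val preloadManager = DefaultPreloadManager.Builder(context, control)
  .setCache(cache) // a Media3 Cache instance; required for cached statuses (assumed setter name)
  .build()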

Rely on new simplified default behaviors in ExoPlayer

As always, we added lots of incremental improvements to ExoPlayer as well. To name just a few:
  • Mute and unmute - We already had a setVolume method, but have now added the convenience mute and unmute methods to easily restore the previous volume without keeping track of it yourself.

  • Stuck player detection - In some rare cases the player can get stuck in a buffering or playing state without making any progress, for example due to codec issues or misconfigurations. Your users will be annoyed, but you'll never see these issues in your analytics! To make them visible, the player now reports a StuckPlayerException when it detects a stuck state.

  • Wakelock by default - Wake lock management was previously opt-in, resulting in hard-to-find edge cases where playback progress could be significantly delayed when running in the background. Now this feature is opt-out, so you don't have to worry about it and can also remove all manual wake lock handling around playback.

  • Simplified setting for CC button logic - Changing TrackSelectionParameters to say "turn subtitles on/off" was surprisingly hard to get right, so we added a simple boolean selectTextByDefault option for this use case (see the sketch after this list).
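
Taken together, a hedged sketch of these conveniences might look like the following; mute and unmute come straight from the release notes, while the exact setter name for selectTextByDefault on TrackSelectionParameters.Builder is an assumption:

val player = ExoPlayer.Builder(context).build() // wake lock handling is now on by default

player.mute()   // volume drops to 0, the previous value is remembered
player.unmute() // the previous volume is restored

player.trackSelectionParameters =
  player.trackSelectionParameters.buildUpon()
    .setSelectTextByDefault(true) // assumed setter name for the new option
    .build()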

Simplify your media button preferences in MediaSession

Until now, specifying which buttons should show up in the media notification drawer on Android Auto or Wear OS required defining custom commands and buttons, even if you simply wanted to trigger a standard player method.

Media3 1.9.0 has new functionality to make this a lot simpler - you can now define your media button preferences with a standard player command, requiring no custom command handling at all.


session.setMediaButtonPreferences(listOf(
    CommandButton.Builder(CommandButton.ICON_FAST_FORWARD) // choose an icon
      .setDisplayName(R.string.skip_forward)
      .setPlayerCommand(Player.COMMAND_SEEK_FORWARD) // choose an action 
      .build()
))

Media button preferences with fast forward button

CompositionPlayer for real-time preview

The 1.9.0 release introduces CompositionPlayer under a new @ExperimentalApi annotation. The annotation indicates that it is available for experimentation, but is still under development.

CompositionPlayer is a new component in the Media3 editing APIs designed for real-time preview of media edits. Built upon the familiar Media3 Player interface, CompositionPlayer allows users to see their changes in action before committing to the export process. It uses the same Composition object that you would pass to Transformer for exporting, streamlining the editing workflow by unifying the data model for preview and export.

We encourage you to start using CompositionPlayer and share your feedback, and keep an eye out for forthcoming posts and updates to the documentation for more details.
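
As a starting point, a minimal preview loop could look like this sketch. It assumes a videoSequence you've already built; since the API is experimental, names may still change:

val composition = Composition.Builder(listOf(videoSequence)).build()

val player = CompositionPlayer.Builder(context).build()
player.setComposition(composition)
player.prepare()
player.play()

// Later, pass the same `composition` to Transformer to export the result.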

InAppMp4Muxer as the default muxer in Transformer

Transformer now uses InAppMp4Muxer as the default muxer for writing media container files. Internally, InAppMp4Muxer depends on the Media3 Muxer module, providing consistent behaviour across all API versions.
Note that while Transformer no longer uses the Android platform's MediaMuxer by default, you can still provide FrameworkMuxer.Factory via setMuxerFactory if your use case requires it.
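
For illustration, a hedged sketch of opting back into the platform muxer; leaving out the setMuxerFactory call keeps the new InAppMp4Muxer default:

val transformer = Transformer.Builder(context)
    .setMuxerFactory(FrameworkMuxer.Factory()) // assumes a no-arg factory constructor
    .build()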

New speed adjustment APIs

The 1.9.0 release simplifies the speed-adjustment APIs for media editing. We've introduced new methods directly on EditedMediaItem.Builder to control speed, making the API more intuitive. You can now change the speed of a clip by calling setSpeed(SpeedProvider provider) on the EditedMediaItem.Builder:

// A SpeedProvider that applies one constant speed to the entire clip.
val speedProvider = object : SpeedProvider {
    override fun getSpeed(presentationTimeUs: Long): Float {
        return speed // e.g. 2f for double speed; captured from the enclosing scope
    }

    override fun getNextSpeedChangeTimeUs(timeUs: Long): Long {
        return C.TIME_UNSET // the speed never changes mid-clip
    }
}

val speedEffectItem = EditedMediaItem.Builder(mediaItem)
    .setSpeed(speedProvider)
    .build()


This new approach replaces the previous method of using Effects#createExperimentalSpeedChangingEffects(), which we've deprecated and will remove in a future release.

Introducing track types for EditedMediaItemSequence

In the 1.9.0 release, EditedMediaItemSequence requires specifying desired output track types during sequence creation. This change ensures track handling is more explicit and robust across the entire Composition.

This is done via a new EditedMediaItemSequence.Builder constructor that accepts a set of track types (e.g., C.TRACK_TYPE_AUDIO, C.TRACK_TYPE_VIDEO).

To simplify creation, we've added new static convenience methods:

  • EditedMediaItemSequence.withAudioFrom(List<EditedMediaItem>)

  • EditedMediaItemSequence.withVideoFrom(List<EditedMediaItem>)

  • EditedMediaItemSequence.withAudioAndVideoFrom(List<EditedMediaItem>)

We encourage you to migrate to the new constructor or the convenience methods for clearer and more reliable sequence definitions.

Example of creating a video-only sequence:

val videoOnlySequence =
    EditedMediaItemSequence.Builder(setOf(C.TRACK_TYPE_VIDEO))
        .addItem(editedMediaItem)
        .build()


---


Please get in touch via the Media3 issue tracker if you run into any bugs, or if you have questions or feature requests. We look forward to hearing from you!

19 Dec 2025 10:00pm GMT

Goodbye Mobile Only, Hello Adaptive: Three essential updates from 2025 for building adaptive apps

Posted by Fahd Imtiaz - Product Manager, Android Developer







In 2025 the Android ecosystem has grown far beyond the phone. Today, developers have the opportunity to reach over 500 million active devices, including foldables, tablets, XR, Chromebooks, and compatible cars.


These aren't just additional screens; they represent a higher-value audience. We've seen that users who own both a phone and a tablet spend 9x more on apps and in-app purchases than those with just a phone. For foldable users, that average spend jumps to roughly 14x more*.


This engagement signals a necessary shift in development: goodbye mobile apps, hello adaptive apps.





To help you build for that future, we spent this year releasing tools that make adaptive the default way to build. Here are three key updates from 2025 designed to help you build these experiences.


Standardizing adaptive behavior with Android 16


To support this shift, Android 16 introduced significant changes to how apps can restrict orientation and resizability. On displays with a smallest width of at least 600dp, manifest and runtime restrictions are ignored, meaning apps can no longer lock themselves to a specific orientation or size. Instead, they fill the entire display window, ensuring your UI scales seamlessly across portrait and landscape modes.


Because this means your app context will change more frequently, it's important to verify that you are preserving UI state during configuration changes. While Android 16 offers a temporary opt-out to help you manage this transition, Android 17 (SDK 37) will make this behavior mandatory. To ensure your app behaves as expected under these new conditions, use the resizable emulator in Android Studio to test your adaptive layouts today.

Supporting screens beyond the tablet with Jetpack WindowManager 1.5.0

As devices evolve, our existing definitions of "large" need to evolve with them. In October, we released Jetpack WindowManager 1.5.0 to better support the growing number of very large screens and desktop environments.


On these surfaces, the standard "Expanded" layout, which usually fits two panes comfortably, often isn't enough. On a 27-inch monitor, two panes can look stretched and sparse, leaving valuable screen real estate unused. To solve this, WindowManager 1.5.0 introduced two new width window size classes: Large (1200dp to 1600dp) and Extra-large (1600dp+).





These new breakpoints signal when to switch to high-density interfaces. Instead of stretching a typical list-detail view, you can take advantage of the width to show three or even four panes simultaneously. Imagine an email client that comfortably displays your folders, the inbox list, the open message, and a calendar sidebar, all in a single view. Support for these window size classes was added to Compose Material 3 adaptive in the 1.2 release.
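
As a rough illustration, pane count could key off the new breakpoints like this sketch; the WIDTH_DP_*_LOWER_BOUND constants and isWidthAtLeastBreakpoint follow the WindowManager 1.5.0 surface as we understand it, so verify against your dependency version:

@Composable
fun paneCount(): Int {
  val sizeClass = currentWindowAdaptiveInfo().windowSizeClass
  return when {
    sizeClass.isWidthAtLeastBreakpoint(WindowSizeClass.WIDTH_DP_EXTRA_LARGE_LOWER_BOUND) -> 4 // 1600dp+
    sizeClass.isWidthAtLeastBreakpoint(WindowSizeClass.WIDTH_DP_LARGE_LOWER_BOUND) -> 3       // 1200dp+
    sizeClass.isWidthAtLeastBreakpoint(WindowSizeClass.WIDTH_DP_EXPANDED_LOWER_BOUND) -> 2    // 840dp+
    else -> 1
  }
}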


Rethinking user journeys with Jetpack Navigation 3


Building a UI that morphs from a single phone screen to a multi-pane tablet layout used to require complex state management. This often meant forcing a navigation graph designed for single destinations to handle simultaneous views. First announced at I/O 2025, Jetpack Navigation 3 is now stable, introducing a new approach to handling user journeys in adaptive apps.


Built for Compose, Nav3 moves away from the monolithic graph structure. Instead, it provides decoupled building blocks that give you full control over your back stack and state. This solves the single source of truth challenge common in split-pane layouts. Because Nav3 uses the Scenes API, you can display multiple panes simultaneously without managing conflicting back stacks, simplifying the transition between compact and expanded views.
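
A hedged sketch of those building blocks: the back stack is plain state you own, and NavDisplay renders the top entry. HomeScreen and DetailScreen are hypothetical composables, and the entryProvider/NavKey names follow the Nav3 announcement, so treat them as assumptions:

@Serializable data object Home : NavKey
@Serializable data class Detail(val id: String) : NavKey

@Composable
fun AppNavigation() {
  val backStack = rememberNavBackStack(Home)
  NavDisplay(
    backStack = backStack,
    onBack = { backStack.removeLastOrNull() },
    entryProvider = entryProvider {
      entry<Home> { HomeScreen(onItemClick = { id -> backStack.add(Detail(id)) }) } // hypothetical screen
      entry<Detail> { key -> DetailScreen(key.id) }                                 // hypothetical screen
    }
  )
}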


A foundation for an adaptive future



This year delivered the tools you need, from optimizing for expansive layouts to the granular controls of WindowManager and Navigation 3. And Android 16 began the shift toward truly flexible UI, with updates coming next year to deliver excellent adaptive experiences across all form factors. To learn more about adaptive development principles and get started, head over to d.android.com/adaptive-apps.


The tools are ready, and the users are waiting. We can't wait to see what you build!


*Source: internal Google data


19 Dec 2025 5:00pm GMT

18 Dec 2025

feedAndroid Developers Blog

Bringing Androidify to Wear OS with Watch Face Push

Posted by Garan Jenkin - Developer Relations Engineer






A few months ago we relaunched Androidify as an app for generating personalized Android bots. Androidify transforms your selfie photo into a playful Android bot using Gemini and Imagen.

However, given that Android spans multiple form factors, including our most recent addition, XR, we thought: how could we bring the fun of Androidify to Wear OS?

An Androidify watch face

As Androidify bots are highly personalized, the natural place to showcase them is the watch face. Not only is it the most frequently visible surface, it is also the most personal one, allowing you to represent who you are.


Personalized Androidify watch face, generated from selfie image


Androidify now has the ability to generate a watch face dynamically within the phone app and then send it to your watch, where it will automatically be set as your watch face. All of this happens within seconds!

High-level design

End-to-end flow for watch face creation and installation

To achieve the end-to-end experience, a number of technologies need to be combined, as shown in this high-level design diagram.

First of all, the user's avatar is combined with a pre-existing Watch Face Format template, which is then packaged into an APK. This is validated - for reasons which will be explained! - and sent to the watch.

On being received by the watch, the new Watch Face Push API - part of Wear OS 6 - is used to install and activate the watch face.

Let's explore the details:

Creating the watch face templates

The watch face is created from a template, itself designed in Watch Face Designer. This is our new Figma plugin that allows you to create Watch Face Format watch faces directly within Figma.


An Androidify watch face template in Watch Face Designer


The plugin allows the watch face to be exported in a range of different ways, including as Watch Face Format (WFF) resources. These can then be easily incorporated as assets within the Androidify app, for dynamically building the finalized watch face.

Packaging and validation

Once the template and avatar have been combined, the Portable Asset Compiler Kit (Pack) is used to assemble an APK.

In Androidify, Pack is used as a native library on the phone. For more details on how Androidify interfaces with the Pack library, see the GitHub repository.

As a final step before transmission, the APK is checked by the Watch Face Push validator.

This validator checks that the APK is suitable for installation. This includes checking the contents of the APK to ensure it is a valid watch face, as well as some performance checks. If it is valid, then the validator produces a token.

This token is required by the watch for installation.

Sending the watch face

The Androidify app on Wear OS uses WearableListenerService to listen for events on the Wearable Data Layer.

The phone app transfers the watch face by using a combination of MessageClient to set up the process, then ChannelClient to stream the APK.
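
A hedged sketch of that phone-side handover is below; the channel paths and the use of the install token in the announcement message are illustrative assumptions, and the blocking Tasks.await calls must run off the main thread:

val nodeId = Tasks.await(Wearable.getNodeClient(context).connectedNodes).first().id

// 1) Announce the transfer so the watch's WearableListenerService can prepare.
Tasks.await(
  Wearable.getMessageClient(context)
    .sendMessage(nodeId, "/androidify/watchface", token.toByteArray()) // path and payload are assumptions
)

// 2) Stream the APK bytes over a channel.
val channelClient = Wearable.getChannelClient(context)
val channel = Tasks.await(channelClient.openChannel(nodeId, "/androidify/watchface/apk"))
Tasks.await(channelClient.sendFile(channel, Uri.fromFile(apkFile)))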

Installing the watch face on the watch

Once the watch face is received on the Wear OS device, the Androidify app uses the new Watch Face Push API to install the watch face:

val wfpManager =
    WatchFacePushManagerFactory.createWatchFacePushManager(context)
val response = wfpManager.listWatchFaces()

try {
  if (response.remainingSlotCount > 0) {
    // A free slot is available: install the new watch face.
    wfpManager.addWatchFace(apkFd, token)
  } else {
    // No free slot: swap the existing watch face for the new one.
    val slotId = response.installedWatchFaceDetails.first().slotId
    wfpManager.updateWatchFace(slotId, apkFd, token)
  }
} catch (a: WatchFacePushManager.AddWatchFaceException) {
  return WatchFaceInstallError.WATCH_FACE_INSTALL_ERROR
} catch (u: WatchFacePushManager.UpdateWatchFaceException) {
  return WatchFaceInstallError.WATCH_FACE_INSTALL_ERROR
}

Androidify uses either the addWatchFace or updateWatchFace method, depending on the scenario: Watch Face Push defines a concept of "slots", which determines how many watch faces a given app can have installed at any time. For Wear OS 6, this value is 1.

Androidify's approach is to install the watch face if there is a free slot, and if not, any existing watch face is swapped out for the new one.

Setting the active watch face

Installing the watch face programmatically is a great step, but Androidify also seeks to ensure that the watch face becomes the active watch face.

Watch Face Push introduces a new runtime permission that must be granted for apps to be able to do this:

com.google.wear.permission.SET_PUSHED_WATCH_FACE_AS_ACTIVE

Once this permission has been granted, the wfpManager.setWatchFaceAsActive() method can be called to make an installed watch face the active one.
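
Putting it together, a hedged sketch of the activation step; the slot-id argument to setWatchFaceAsActive is an assumption based on the slot model above, and permissionLauncher stands in for whatever ActivityResult launcher your app uses:

val permission = "com.google.wear.permission.SET_PUSHED_WATCH_FACE_AS_ACTIVE"
if (ContextCompat.checkSelfPermission(context, permission) ==
    PackageManager.PERMISSION_GRANTED
) {
  val slotId = wfpManager.listWatchFaces().installedWatchFaceDetails.first().slotId
  wfpManager.setWatchFaceAsActive(slotId) // slot-id parameter is assumed
} else {
  permissionLauncher.launch(permission) // hypothetical ActivityResultLauncher<String>
}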

However, there are a number of considerations that Androidify has to navigate; for more details, see how Androidify implements the set active logic.

Get started with Watch Face Push for Wear OS

Watch Face Push is a versatile API, equally suited to enhancing Androidify as it is to building fully-featured watch face marketplaces.

Perhaps you have an existing phone app and are looking for opportunities to further engage and delight your users?

Or perhaps you're an existing watch face developer looking to create your own community and gallery through releasing a marketplace app?

Take a look at these resources:

And also check out the accompanying video for a greater-depth look at how we brought Androidify to Wear OS!


We're looking forward to what you'll create with Watch Face Push!

18 Dec 2025 5:00pm GMT

05 Dec 2025

feedPlanet Maemo

Meow: Process log text files as if you could make cat speak

Some years ago I mentioned some command line tools I used to analyze GStreamer logs and find useful information in them. I've been using them consistently over all these years, but some weeks ago I thought about unifying them in a single tool that could provide more flexibility in the mid term, and also serve as an excuse to unrust my Rust knowledge a bit. That's how I wrote Meow, a tool to make cat speak (that is, to provide meaningful information).

The idea is that you can cat a file through meow and apply the filters, like this:

cat /tmp/log.txt | meow appsinknewsample n:V0 n:video ht: \
ft:-0:00:21.466607596 's:#([A-Za-z][A-Za-z]*/)*#'

which means "select those lines that contain appsinknewsample (with case insensitive matching), but don't contain V0 nor video (that is, by exclusion, only that contain audio, probably because we've analyzed both and realized that we should focus on audio for our specific problem), highlight the different thread ids, only show those lines with timestamp lower than 21.46 sec, and change strings like Source/WebCore/platform/graphics/gstreamer/mse/AppendPipeline.cpp to become just AppendPipeline.cpp", to get an output as shown in this terminal screenshot:

Screenshot of a terminal output showing multiple log lines. Some of them have the word "appsinkNewSample" highlighted in red. Some lines have the hexadecimal id of the thread that printed them highlighted (purple for one thread, brown for the other).

Cool, isn't it? After all, I'm convinced that the answer to any GStreamer bug is always hidden in the logs (or will be, as soon as I add "just a couple of log lines more, bro").

05 Dec 2025 11:16am GMT

15 Oct 2025

feedPlanet Maemo

Dzzee 1.9.0 for N800/N810/N900/N9/Leste

I was playing around with Xlib this summer, and one thing led to another, and here we are with four fresh ports to retro mobile X11 platforms. There is even a Maemo Leste port, but due to some SGX driver woes on the N900, I opted for using XSHM and software rendering, which works well and has the nice, crisp pixel look (on Fremantle, it's using EGL+GLESv2). Even the N8x0 port has very fluid motion by utilizing Xv for blitting software-rendered pixels to the screen. The game is available over at itch.io.






15 Oct 2025 11:31am GMT

05 Jun 2025

feedPlanet Maemo

Mobile blogging, the past and the future

This blog has been running more or less continuously since the mid-nineties. The site has existed in multiple forms, and with different ways to publish. But what's been common is that at almost all points there was a mechanism to publish while on the move.

Psion, documents over FTP

In the early 2000s we were into adventure motorcycling. To be able to share our adventures, we implemented a way to publish blogs while on the go. The device that enabled this was the Psion Series 5, a handheld computer that was very much a device ahead of its time.

Psion S5, also known as the Ancestor

The Psion had a reasonably sized keyboard, a good native word processing app, and battery life good for weeks of usage. Writing while underway was easy. The Psion could use a mobile phone as a modem over an infrared connection, and with that we could upload the documents to a server over FTP.

Server-side, a cron job would grab the new documents, converting them to HTML and adding them to our CMS.

In the early days of GPRS, getting this to work while roaming was quite tricky. But the system served us well for years.

If we wanted to include photos in the stories, we'd have to find an Internet cafe.

SMS and MMS

For an even more mobile setup, I implemented an SMS-based blogging system. We had an old phone connected to a computer back in the office, and I could write to my blog by simply sending a text. These would automatically end up as a new paragraph in the latest post. If I started the text with NEWPOST, an empty blog post would be created with the rest of that message's text as the title.

As I got into neogeography, I could also send a NEWPOSITION message. This would update my position on the map, connecting weather metadata to the posts.

As camera phones became available, we wanted to do pictures too. For the Death Monkey rally where we rode minimotorcycles from Helsinki to Gibraltar, we implemented an MMS-based system. With that the entries could include both text and pictures. But for that you needed a gateway, which was really only realistic for an event with sponsors.

Photos over email

A much easier setup than MMS was a partial return to the old Psion approach, but instead of word processor documents, sending email with picture attachments. This was something that the new breed of (pre-iPhone) smartphones were capable of. And by then the roaming question was mostly sorted.

And so my blog included a new "moblog" section. This is where I could share my daily activities as poor-quality pictures. Sort of how people would use Instagram a few years later.

My blog from that era

Pause

Then there was sort of a long pause in mobile blogging advancements. Modern smartphones, data roaming, and WiFi hotspots had become ubiquitous.

In the meantime the blog also got migrated to a Jekyll-based system hosted on AWS. That meant the old Midgard-based integrations were off the table.

And I traveled off-the-grid rarely enough that it didn't make sense to develop a system.

But now that we're sailing offshore, that has changed. Time for new systems and new ideas. Or maybe just a rehash of the old ones?

Starlink, Internet from Outer Space

Most cruising boats - ours included - now run the Starlink satellite broadband system. This enables full Internet, even in the middle of an ocean, even video calls! With this, we can use normal blogging tools. The usual one for us is GitJournal, which makes it easy to write Jekyll-style Markdown posts and push them to GitHub.

However, Starlink is a complicated, energy-hungry, and fragile system on an offshore boat. The policies might change at any time, preventing our way of using it; the dishy itself, or the way we power it, may also fail.

But despite what you'd think, even on a nerdy boat like ours, loss of Internet connectivity is not an emergency. And this is where the old-style mobile blogging mechanisms come handy.

Inreach, texting with the cloud

Our backup system to Starlink is the Garmin Inreach. This is a tiny battery-powered device that connects to the Iridium satellite constellation. It allows tracking as well as basic text messaging.

When we head offshore we always enable tracking on the Inreach. This allows both our blog and our friends ashore to follow our progress.

I also made a simple integration where text updates sent to Garmin MapShare get fetched and published on our blog. Right now this is just plain text-based entries, but one could easily implement a command system similar to what I had over SMS back in the day.

One benefit of the Inreach is that we can also take it with us when we go on land adventures. And it'd even enable rudimentary communications if we found ourselves in a liferaft.

Sailmail and email over HF radio

The other potential backup for Starlink failures would be to go seriously old-school. It is possible to get email access via an SSB radio and a Pactor (or Vara) modem.

Our boat is already equipped with an isolated aft stay that can be used as an antenna. And with the popularity of Starlink, many cruisers are offloading their old HF radios.

Licensing-wise this system could be used either as a marine HF radio (requiring a Long Range Certificate), or amateur radio. So that part is something I need to work on. Thankfully post-COVID, radio amateur license exams can be done online.

With this setup we could send and receive text-based email. The Airmail application used for this can even do some automatic templating for position reports. We'd then need a mailbox that can receive these mails, and some automation to fetch and publish.


05 Jun 2025 12:00am GMT

18 Sep 2022

feedPlanet Openmoko

Harald "LaF0rge" Welte: Deployment of future community TDMoIP hub

I've mentioned some of my various retronetworking projects in some past blog posts. One of those projects is Osmocom Community TDM over IP (OCTOI). During the past 5 or so months, we have been using a number of GPS-synchronized open source icE1usb devices interconnected by a new, efficient but still transparent TDMoIP protocol in order to run a distributed TDM/PDH network. This network is currently only used to provide ISDN services to retronetworking enthusiasts, but other uses like frame relay have also been validated.

So far, the central hub of this OCTOI network has been operating in the basement of my home, behind a consumer-grade DOCSIS cable modem connection. Given that TDMoIP is relatively sensitive to packet loss, this has been sub-optimal.

Luckily some of my old friends at noris.net have agreed to host a new OCTOI hub free of charge in one of their ultra-reliable co-location data centres. I've been hosting some other machines there for 20+ years, and noris.net is a good fit given that they were - in their early days as an ISP - the driving force in the early 90s behind one of the Linux kernel ISDN stacks called u-isdn. So after many decades, ISDN returns to them in a very different way.

Side note: In case you're curious, a reconstructed partial release history of the u-isdn code can be found on gitea.osmocom.org

But I digress. So today the new OCTOI hub was installed. It had been prepared for several weeks in advance, and the hub contains two circuit boards designed specifically for this use case. The most difficult challenge was the fact that this data centre has no existing GPS RF distribution, and the rack is ~ 100m of CAT5 cable (no fiber!) away from the roof. So we faced the challenge of passing the 1PPS (1 pulse per second) signal reliably through several steps of lightning/over-voltage protection into the icE1usb whose internal GPS-DO serves as a grandmaster clock for the TDM network.

The equipment deployed in this installation currently contains:

For more details, see this wiki page and this ticket

Now that the physical deployment has been made, the next steps will be to migrate all the TDMoIP links from the existing user base over to the new hub. We hope the reliability and performance will be much better than behind DOCSIS.

In any case, this new setup for sure has a lot of capacity to connect many more users to this network. At this point we can still only offer E1 PRI interfaces. I expect that at some point during the coming winter the project for remote TDMoIP BRI (S/T, S0-Bus) connectivity will become available.

Acknowledgements

I'd like to thank anyone helping this effort, specifically:
  • Sylvain "tnt" Munaut for his work on the RS422 interface board (+ gateware/firmware)
  • noris.net for sponsoring the co-location
  • sysmocom for sponsoring the EPYC server hardware

18 Sep 2022 10:00pm GMT

08 Sep 2022

feedPlanet Openmoko

Harald "LaF0rge" Welte: Progress on the ITU-T V5 access network front

Almost one year after my post regarding first steps towards a V5 implementation, some friends and I were finally able to visit Wobcom, a small German city carrier, and pick up a lot of decommissioned POTS/ISDN/PDH/SDH equipment, primarily V5 access networks.

This means that a number of retronetworking enthusiasts now have a chance to play with Siemens Fastlink, Nokia EKSOS and DeTeWe ALIAN access networks/multiplexers.

My primary interest is in the Nokia EKSOS, which looks like a rather easy, low-complexity target. As one of the first steps, I took PCB photographs of the various modules/cards in the shelf, took note of the main chip designations, and started to search for the related data sheets.

The results can be found in the Osmocom retronetworking wiki, with https://osmocom.org/projects/retronetworking/wiki/Nokia_EKSOS being the main entry page, with sub-pages covering the individual modules.

In short: Unsurprisingly, a lot of Infineon analog and digital ICs for the POTS and ISDN ports, as well as a number of Motorola M68k based QUICC32 microprocessors and several unknown ASICs.

So with V5 hardware at my disposal, I've slowly re-started my efforts to implement the LE (local exchange) side of the V5 protocol stack, with the goal of eventually being able to interface those V5 AN with the Osmocom Community TDM over IP network. Once that is in place, we should also be able to offer real ISDN Uk0 (BRI) and POTS lines at retrocomputing events or hacker camps in the coming years.

08 Sep 2022 10:00pm GMT

Harald "LaF0rge" Welte: Clock sync trouble with Digium cards and timing cables

If you have ever worked with Digium (now part of Sangoma) digital telephony interface cards such as the TE110/410/420/820 (single to octal E1/T1/J1 PRI cards), you will probably have seen that they always have a timing connector, where the timing information can be passed from one card to another.

In PDH/ISDN (or even SDH) networks, it is very important to have a synchronized clock across the network. If the clocks are drifting, there will be underruns or overruns, with associated phase jumps that are particularly dangerous when analog modem calls are transported.

In traditional ISDN use cases, the clock is always provided by the network operator, and any customer/user side equipment is expected to synchronize to that clock.

So this Digium timing cable is needed in applications where you have more PRI lines than possible with one card, but only a subset of your lines (spans) are connected to the public operator. The timing cable should make sure that the clock received on one port from the public operator should be used as transmit bit-clock on all of the other ports, no matter on which card.

Unfortunately this decades-old Digium timing cable approach seems to suffer from some problems.

bursty bit clock changes until link is up

The first problem is that the downstream port transmit bit clock was jumping around in bursts every two or so seconds. You can see an oscillogram of the E1 master signal (yellow) received by one TE820 card and the transmit of the slave ports on the other card at https://people.osmocom.org/laforge/photos/te820_timingcable_problem.mp4

As you can see, for some seconds the two clocks seem to be in perfect lock/sync, but in between there are periods of immense clock drift.

What I'd have expected is the behavior that can be seen at https://people.osmocom.org/laforge/photos/te820_notimingcable_loopback.mp4 - which shows a similar setup but without the use of a timing cable: Both the master clock input and the clock output were connected on the same TE820 card.

As I found out much later, this problem only occurs until any of the downstream/slave ports is fully OK/GREEN.

This is surprising, as any other E1 equipment I've seen always transmits at a constant bit clock irrespective whether there's any signal in the opposite direction, and irrespective of whether any other ports are up/aligned or not.

But ok, once you adjust your expectations to this Digium peculiarity, you can actually proceed.

clock drift between master and slave cards

Once any of the spans of a slave card on the timing bus are fully aligned, the transmit bit clocks of all of its ports appear to be in sync/lock - yay - but unfortunately only at first glance.

When looking at it for more than a few seconds, one can see a slow, continuous drift of the slave bit clocks compared to the master :(

Some initial measurements show that the clock of the slave card of the timing cable is drifting at about 12.5 ppb (parts per billion) when compared against the master clock reference.

This is rather disappointing, given that the whole point of a timing cable is to ensure you have one reference clock with all signals locked to it.

The work-around

If you are willing to sacrifice one port (span) of each card, you can work around that slow-clock-drift issue by connecting an external loopback cable. So the master card is configured to use the clock provided by the upstream provider. Its other ports (spans) will transmit at the exact recovered clock rate with no drift. You can use any of those ports to provide the clock reference to a port on the slave card using an external loopback cable.

In this setup, your slave card[s] will have perfect bit clock sync/lock.

It's just rather sad that you need to sacrifice ports just to achieve proper clock sync - something that the timing connectors and cables claim to do, but in reality don't achieve, at least not in my setup with the most modern and high-end octal-port PCIe cards (TE820).

08 Sep 2022 10:00pm GMT