05 Jul 2025

TalkAndroid

Steven Spielberg Wraps Production on His Mysterious New Film!

Steven Spielberg's highly anticipated sci-fi UFO project has officially wrapped filming, promising a visually stunning and emotionally resonant…

05 Jul 2025 3:30pm GMT

Board Kings Free Rolls – Updated Every Day!

Run out of rolls for Board Kings? Find links for free rolls right here, updated daily!

05 Jul 2025 2:46pm GMT

Coin Tales Free Spins – Updated Every Day!

Tired of running out of Coin Tales Free Spins? We update our links daily, so you won't have that problem again!

05 Jul 2025 2:45pm GMT

01 Jul 2025

Android Developers Blog

Level up your game: Google Play's Indie Games Fund in Latin America returns for its 4th year

Posted by Daniel Trócoli - Google Play Partnerships

We're thrilled to announce the return of Google Play's Indie Games Fund (IGF) in Latin America for its fourth consecutive year! This year, we're once again committing $2 million to empower another 10 indie game studios across the region. With this latest round of funding, our total investment in Latin American indie games will reach an impressive $8 million USD.

Since its inception, the IGF has been a cornerstone of our commitment to fostering growth for developers of all sizes on Google Play. We've seen firsthand the transformative impact this support has had, enabling studios to expand their teams, refine their creations, and reach new audiences globally.

What's in store for the Indie Games Fund in 2025?

Just like in previous years, selected small game studios based in Latin America will receive a share of the $2 million fund, along with support from the Google Play team.

As Vish Game Studio, a previously selected studio, shared: "The IGF was a pivotal moment for our studio, boosting us to the next level and helping us form lasting connections." We believe in fostering these kinds of pivotal moments for all our selected studios.

The program is open to indie game developers who have already launched a game, whether it's on Google Play, another mobile platform, PC, or console. Each selected recipient will receive between $150,000 and $200,000 to help them elevate their game and realize their full potential.

Check out all eligibility criteria and apply now! Applications will close at 12:00 PM BRT on July 31, 2025. To give your application the best chance, remember that priority will be given to applications received by 12:00 PM BRT on July 15, 2025.





01 Jul 2025 2:00pm GMT

30 Jun 2025

Android Developers Blog

Top announcements to know from Google Play at I/O ‘25

Posted by Raghavendra Hareesh Pottamsetty - Google Play Developer and Monetization Lead

At Google Play, we're dedicated to helping people discover experiences they'll love, while empowering developers like you to bring your ideas to life and build successful businesses. This year, Google I/O was packed with exciting announcements designed to do just that. For a comprehensive overview of everything we shared, be sure to check out our blog post recapping What's new in Google Play.

Today, we'll dive specifically into the latest updates designed to help you streamline your subscription offerings and maximize your revenue on Play. Get a quick overview of these updates in our video below, or read on for more details.

#1: Subscriptions with add-ons: Streamlining subscriptions for you and your users

We're excited to announce multi-product checkout for subscriptions, a new feature designed to streamline your purchase flow and offer a more unified experience for both you and your users. This enhancement allows you to sell subscription add-ons right alongside your base subscriptions, all while maintaining a single, aligned payment schedule.

The result? A simplified user experience with just one price and one transaction, giving you more control over how your subscribers upgrade, downgrade, or manage their add-ons. Learn more about how to create add-ons.

You can now sell base subscriptions and add-ons together in a single, streamlined transaction

#2: Showcasing benefits in more places across Play: Increasing visibility and value

We're also making it easier for you to retain more of your subscribers by showcasing subscription benefits in more key areas across Play. This includes the Subscriptions Center, within reminder emails, and even during the purchase and cancellation processes. This increased visibility has already proved effective, reducing voluntary churn by 2%. To take advantage of this powerful new capability, be sure to enter your subscription benefits details in Play Console.

To help reduce voluntary churn, we're showcasing your subscriptions benefits across Play

#3: New grace period and account hold duration: Decreasing involuntary churn

Another way we're helping you maximize your revenue is by extending grace periods and account hold durations to tackle unintended subscription losses, which often occur when payment methods unexpectedly decline.

Now, you can customize both the grace period (when users retain access while renewal is attempted) and the account hold period (when access is suspended). You can set a grace period of up to 30 days and an account hold period of up to 60 days. However, the total combined recovery period (grace period + account hold) cannot exceed 60 days.
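To make the arithmetic concrete, here is a tiny validation helper sketching the documented limits. It is purely illustrative (the function is invented for this post); the real configuration happens in Play Console, not in code:

#include <stdexcept>

// Illustrates the documented limits: grace period up to 30 days,
// account hold up to 60 days, combined recovery period at most 60 days.
void validateRecoveryPeriods(int gracePeriodDays, int accountHoldDays)
{
    if (gracePeriodDays < 0 || gracePeriodDays > 30)
        throw std::invalid_argument("grace period must be between 0 and 30 days");
    if (accountHoldDays < 0 || accountHoldDays > 60)
        throw std::invalid_argument("account hold must be between 0 and 60 days");
    if (gracePeriodDays + accountHoldDays > 60)
        throw std::invalid_argument("combined recovery period must not exceed 60 days");
}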

In practice, this means that instead of an immediate cancellation, your users have a longer window to update their payment information. Developers who've already extended their decline recovery period from 30 to 60 days have seen impressive results, with an average 10% reduction in involuntary churn for renewals. Ready to see these results for yourself? Adjust your grace period and account hold durations in Play Console today.

Developers who extend their decline recovery period see an average 10% reduction in involuntary churn


But that's not all. We're constantly investing in ways to help you optimize conversion throughout the entire buyer lifecycle. This includes boosting purchase-readiness by prompting users to set up payment methods and verification right from device setup, and we've integrated these prompts into highly visible areas like the Play and Google account menus. Beyond that, we're continuously enabling payments in more markets and expanding payment options. Our AI models are even working to optimize in-app transactions by suggesting the right payment method at the right time, and we're bringing buyers back with effective cart abandonment reminders.

That's it for our top announcements from Google I/O 2025, but there are many more updates to discover from this year's event. Check out What's new in Google Play to learn more, and to dive deeper into the session details, view the Google Play I/O playlist for all the announcements.




30 Jun 2025 4:00pm GMT

Get ready for the next generation of gameplay powered by Play Games Services

Posted by Chris Wilk - Group Product Manager, Games on Google Play

To captivate players and grow your game, you need tools that enhance discovery and retention. Play Games Services (PGS) is your key to unlocking a suite of services that connect you with over 2 billion monthly active players. PGS empowers you to drive engagement through features like achievements and increase retention with promotions tailored to each player's gameplay progress. These tools are designed to help you deliver relevant and compelling content that keeps players coming back.

We are continuously evolving gaming on Play, and this year, we're introducing more PGS-powered experiences to give you deeper player insights and greater visibility in the Play Store. To access these latest advancements and ensure continued functionality, you must migrate from PGS v1 to PGS v2 by May 2026. Let's take a closer look at what's new:

Drive discovery and engagement by rewarding gameplay progress

We're fundamentally transforming how achievements work in the Play Store, making them a key driver for a great gaming experience. Now deeply embedded across the store, achievements are easily discoverable via search filters and game detail pages, and further drive engagement when offered with Play Points.

You should have at least 15 achievements spread across the lifetime of the game, in the form of incremental achievements that show progress. Games that enable players to earn at least 5 achievements in the first 2 hours of gameplay are most successful in driving deeper engagement*.

The most engaging titles offer 40 or more achievements with diverse types of goals including leveling up characters, game progression, hidden surprises, or even failed attempts. To help you get the most out of achievements, we've made it easier to create achievements with bulk configuration in Play Console.

For eligible titles*, Play activates quests to reward players for completing achievements, for example with Play Points. Supercell activated quests for Hay Day, leading to an average 177% uplift in installs*. You can tailor your quests to achieve specific campaign objectives, whether it's attracting high-value players or driving spend through repeated engagement, all while making it easy to jump back into your game.

Hay Day boosted new installs with achievement-based quests


Increase retention with tailored promotions

Promotional content is a vital tool for you to highlight new events, major content updates, and exciting offers within your game. It turns Play into a direct marketing channel to re-engage with your players. We've enhanced audience targeting capabilities so you can tailor your content to reach and convert the most relevant players.

By integrating PGS, you can use the Play Grouping API to create custom segments based on gameplay context*. Using this feature, Kabam launched promotional content to custom audiences for Marvel Contest of Champions, resulting in a 4x increase in lapsed user engagement*.

Marvel Contest of Champions increased retention with targeted promotional content


Start implementing PGS features today

PGS is designed to make the sign-in experience more seamless for players, automatically syncing their progress and identity across Android devices. With a single tap, they can pick up where they left off or start a new game from any screen. Whether you use your own sign-in solution, services from third parties, or a combination of both, we've made it easier to integrate Play Games Services with the Recall API.

To ensure a consistent sign-in experience for all players, we're phasing out PGS v1.

All games currently using PGS v1 must migrate to PGS v2 by May 2026. After this date, you will no longer be able to publish or update games that use the v1 SDK.

Below you'll find the timeline to plan your migration:


Migration guide

May 2025: As announced at I/O, new apps using PGS v1 can no longer be published. Existing apps can still release updates with v1 and the APIs remain functional, but you'll need to migrate by May 2026, and the APIs will be fully shut down in 2028.

May 2026: The APIs are still functional for users, but are no longer included in the SDK. New app versions compiled with the most recent SDK will fail to build if your code still uses the removed APIs. If your app still relies on any of these APIs, you should migrate to PGS v2 as soon as possible.

Q3 2028: The APIs are no longer functional and will fail when a request is sent by an app.

Looking ahead, more opportunities powered by PGS

Coming soon, players will be able to generate unique, AI-powered avatars within their profiles - creating fun, diverse representations of their gaming selves. With PGS integration, developers can allow players to carry over their avatar within the game. This enables players to showcase their gaming identity across the entire gameplay experience, creating an even stronger motivation to re-engage with your game.

Gen AI avatar profiles create more player-centric experiences


PGS is the foundational tool for maximizing your business growth on Play, enabling you to tailor your content for each player and access the latest gameplay innovations on the platform. Stay tuned for more PGS features coming this year to provide an even richer player experience.


* To be eligible, the title must participate in Play Points, integrate Play Games Services v2, and have achievements configured in Play Console.
* Data source from partner. Average incremental installs over a 14-day period.
* Data source from partner.
* The Play Grouping API provides strong measures to protect privacy for end users, including user-visible notification when the API is first used, and opt-out options through My Activity.



30 Jun 2025 3:45pm GMT

05 Jun 2025

Planet Maemo

Mobile blogging, the past and the future

This blog has been running more or less continuously since the mid-nineties. The site has existed in multiple forms, and with different ways to publish. But what's common is that at almost all points there was a mechanism to publish while on the move.

Psion, documents over FTP

In the early 2000s we were into adventure motorcycling. To be able to share our adventures, we implemented a way to publish blogs while on the go. The device that enabled this was the Psion Series 5, a handheld computer that was very much a device ahead of its time.

Psion S5, also known as the Ancestor

The Psion had a reasonably sized keyboard and a good native word processing app, and a battery life good for weeks of usage. Writing while underway was easy. The Psion could use a mobile phone as a modem over an infrared connection, and with that we could upload the documents to a server over FTP.

Server-side, a cron job would grab the new documents, converting them to HTML and adding them to our CMS.

In the early days of GPRS, getting this to work while roaming was quite tricky. But the system served us well for years.

If we wanted to include photos in the stories, we'd have to find an Internet cafe.

SMS and MMS

For an even more mobile setup, I implemented an SMS-based blogging system. We had an old phone connected to a computer back in the office, and I could write to my blog by simply sending a text. These would automatically end up as a new paragraph in the latest post. If I started the text with NEWPOST, an empty blog post would be created with the rest of that message's text as the title.

As I got into neogeography, I could also send a NEWPOSITION message. This would update my position on the map, connecting weather metadata to the posts.
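That gateway code is long gone, but the command handling was simple enough that a small sketch conveys the idea; all names and types below are invented for illustration:

#include <cstdio>
#include <string>
#include <vector>

// Illustrative reconstruction of the SMS blogging commands: NEWPOST
// starts a new post, NEWPOSITION updates the map position, and any
// other text becomes a new paragraph in the latest post.
struct Post {
    std::string title;
    std::vector<std::string> paragraphs;
};

struct Blog {
    std::vector<Post> posts;
    double lat = 0, lon = 0;

    void handleSms(const std::string& body)
    {
        if (body.rfind("NEWPOST ", 0) == 0)
            posts.push_back({ body.substr(8), {} });
        else if (body.rfind("NEWPOSITION ", 0) == 0)
            std::sscanf(body.c_str() + 12, "%lf %lf", &lat, &lon);
        else if (!posts.empty())
            posts.back().paragraphs.push_back(body);
    }
};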

As camera phones became available, we wanted to do pictures too. For the Death Monkey rally where we rode minimotorcycles from Helsinki to Gibraltar, we implemented an MMS-based system. With that the entries could include both text and pictures. But for that you needed a gateway, which was really only realistic for an event with sponsors.

Photos over email

A much easier setup than MMS was to return partway to the old Psion approach, but instead of word processor documents, sending email with picture attachments. This was something that the new breed of (pre-iPhone) smartphones were capable of. And by now the roaming question was mostly sorted.

And so my blog included a new "moblog" section. This is where I could share my daily activities as poor-quality pictures. Sort of how people would use Instagram a few years later.

My blog from that era

Pause

Then there was sort of a long pause in mobile blogging advancements. Modern smartphones, data roaming, and WiFi hotspots had become ubiquitous.

In the meantime the blog also got migrated to a Jekyll-based system hosted on AWS. That meant the old Midgard-based integrations were off the table.

And I traveled off-the-grid rarely enough that it didn't make sense to develop a system.

But now that we're sailing offshore, that has changed. Time for new systems and new ideas. Or maybe just a rehash of the old ones?

Starlink, Internet from Outer Space

Most cruising boats - ours included - now run the Starlink satellite broadband system. This enables full Internet, even in the middle of an ocean, even video calls! With this, we can use normal blogging tools. The usual one for us is GitJournal, which makes it easy to write Jekyll-style Markdown posts and push them to GitHub.

However, Starlink is a complicated, energy-hungry, and fragile system on an offshore boat. The policies might change at any time, preventing our way of using it; the dishy itself, or the way we power it, may also fail.

But despite what you'd think, even on a nerdy boat like ours, loss of Internet connectivity is not an emergency. And this is where the old-style mobile blogging mechanisms come handy.

Inreach, texting with the cloud

Our backup system to Starlink is the Garmin Inreach. This is a tiny battery-powered device that connects to the Iridium satellite constellation. It allows tracking as well as basic text messaging.

When we head offshore we always enable tracking on the Inreach. This allows both our blog and our friends ashore to follow our progress.

I also made a simple integration where text updates sent to Garmin MapShare get fetched and published on our blog. Right now this is just plain text-based entries, but one could easily implement a command system similar to what I had over SMS back in the day.
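The fetch half of that integration is just an HTTP download; a minimal sketch with libcurl is below. The MapShare feed URL format is an assumption on my part, and the KML parsing and publishing steps are left out:

#include <curl/curl.h>
#include <string>

// Appends each received chunk to the std::string passed via userdata.
static size_t appendChunk(char* data, size_t size, size_t nmemb, void* userdata)
{
    static_cast<std::string*>(userdata)->append(data, size * nmemb);
    return size * nmemb;
}

// Downloads the (assumed) MapShare feed for the given share name.
std::string fetchMapShareFeed(const std::string& shareName)
{
    std::string body;
    CURL* curl = curl_easy_init();
    if (!curl)
        return body;
    const std::string url = "https://share.garmin.com/Feed/Share/" + shareName;
    curl_easy_setopt(curl, CURLOPT_URL, url.c_str());
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, appendChunk);
    curl_easy_setopt(curl, CURLOPT_WRITEDATA, &body);
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
    return body;
}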

One benefit of the Inreach is that we can also take it with us when we go on land adventures. And it'd even enable rudimentary communications if we found ourselves in a liferaft.

Sailmail and email over HF radio

The other potential backup for Starlink failures would be to go seriously old-school. It is possible to get email access via an SSB radio and a Pactor (or Vara) modem.

Our boat is already equipped with an isolated aft stay that can be used as an antenna. And with the popularity of Starlink, many cruisers are offloading their old HF radios.

Licensing-wise this system could be used either as a marine HF radio (requiring a Long Range Certificate) or as amateur radio. So that part is something I need to work on. Thankfully, post-COVID, amateur radio license exams can be done online.

With this setup we could send and receive text-based email. The Airmail application used for this can even do some automatic templating for position reports. We'd then need a mailbox that can receive these mails, and some automation to fetch and publish.


05 Jun 2025 12:00am GMT

16 Oct 2024

Planet Maemo

Adding buffering hysteresis to the WebKit GStreamer video player

The <video> element implementation in WebKit does its job by using a multiplatform player that relies on a platform-specific implementation. In the specific case of glib platforms, which base their multimedia on GStreamer, that's MediaPlayerPrivateGStreamer.

WebKit GStreamer regular playback class diagram

The player private can have 3 buffering modes:

The current implementation (actually, its wpe-2.38 version) was showing some buffering problems on some Broadcom platforms when doing in-memory buffering. The buffering levels monitored by MediaPlayerPrivateGStreamer weren't accurate because the Nexus multimedia subsystem used on Broadcom platforms was doing its own internal buffering. Data wasn't being accumulated in the GstQueue2 element of playbin, because BrcmAudFilter/BrcmVidFilter was accepting all the buffers that the queue could provide. Because of that, the player private buffering logic was erratic, leading to many transitions between "buffer completely empty" and "buffer completely full". This, in turn, caused many transitions between the HaveEnoughData, HaveFutureData and HaveCurrentData readyStates in the player, leading to frequent pauses and unpauses on Broadcom platforms.

So, one of the first things I tried in order to solve this issue was to ask the Nexus PlayPump (the subsystem in charge of internal buffering in Nexus) about its internal levels, and add that to the levels reported by GstQueue2. There's also a GstMultiqueue in the pipeline that can hold a significant amount of buffers, so I also asked it for its level. Still, the buffering level instability was too high, so I added a moving average implementation to try to smooth it.
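For reference, the smoothing can be as simple as a fixed-window moving average over the reported levels. This is a simplified sketch of the idea, not the actual WebKit code:

#include <algorithm>
#include <cstddef>
#include <vector>

// Fixed-window moving average used to smooth the erratic buffering
// levels reported by the platform.
class MovingAverage {
public:
    explicit MovingAverage(size_t windowSize)
        : m_samples(windowSize, 0) { }

    // Feeds one sample and returns the smoothed level so far.
    int push(int level)
    {
        m_sum += level - m_samples[m_index];
        m_samples[m_index] = level;
        m_index = (m_index + 1) % m_samples.size();
        m_count = std::min(m_count + 1, m_samples.size());
        return static_cast<int>(m_sum / static_cast<long>(m_count));
    }

private:
    std::vector<int> m_samples;
    size_t m_index { 0 };
    size_t m_count { 0 };
    long m_sum { 0 };
};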

All these tweaks only make sense on Broadcom platforms, so they were guarded by ifdefs in a first version of the patch. Later, I migrated those dirty ifdefs to the new quirks abstraction added by Phil. A challenge of this migration was that I needed to store some attributes that were previously considered part of MediaPlayerPrivateGStreamer. They still had to be somehow linked to the player private, but only accessible by the platform-specific code of the quirks. A special HashMap attribute stores those quirk attributes in an opaque way, so that only the specific quirk they belong to knows how to interpret them (using downcasting). I tried to use move semantics when storing the data, but was bitten by object slicing when trying to move instances of the superclass. In the end, moving the responsibility of creating the unique_ptr that stored the concrete subclass to the caller did the trick.
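In outline, the pattern looks something like this (names invented, heavily simplified from the real patch):

#include <memory>
#include <string>
#include <unordered_map>

// Base class for per-quirk private state. Each quirk downcasts the
// stored instance back to its concrete type when it needs it.
struct QuirkState {
    virtual ~QuirkState() = default;
};

struct BroadcomQuirkState : QuirkState {
    int playpumpLevel { 0 };
    int multiqueueLevel { 0 };
};

class PlayerPrivate {
public:
    // The caller creates the unique_ptr of the concrete subclass and
    // moves it in, which avoids slicing the object on the way.
    void setQuirkState(const std::string& quirkName,
                       std::unique_ptr<QuirkState> state)
    {
        m_quirkStates[quirkName] = std::move(state);
    }

    QuirkState* quirkState(const std::string& quirkName)
    {
        auto it = m_quirkStates.find(quirkName);
        return it == m_quirkStates.end() ? nullptr : it->second.get();
    }

private:
    std::unordered_map<std::string, std::unique_ptr<QuirkState>> m_quirkStates;
};

// Usage from the Broadcom-specific quirk code:
// auto* state = static_cast<BroadcomQuirkState*>(player.quirkState("broadcom"));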

Even with all those changes, undesirable swings in the buffering level kept happening, and when doing a careful analysis of the causes I noticed that the monitoring of the buffering level was being done from different places (in different moments) and sometimes the level was regarded as "enough" and the moment right after, as "insufficient". This was because the buffering level threshold was one single value. That's something that a hysteresis mechanism (with low and high watermarks) can solve. So, a logical level change to "full" would only happen when the level goes above the high watermark, and a logical level change to "low" when it goes under the low watermark level.
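A sketch of that hysteresis logic, again simplified and with invented watermark handling, is below:

// Hysteresis on the buffering level: the logical state only flips to
// "full" when the level rises above the high watermark, and back to
// "low" when it falls below the low watermark, so small oscillations
// around a single threshold no longer cause spurious state changes.
class BufferingHysteresis {
public:
    BufferingHysteresis(int lowWatermark, int highWatermark)
        : m_low(lowWatermark), m_high(highWatermark) { }

    // Feeds the current level and returns true when buffering is
    // considered sufficient.
    bool update(int levelPercent)
    {
        if (!m_sufficient && levelPercent >= m_high)
            m_sufficient = true;
        else if (m_sufficient && levelPercent <= m_low)
            m_sufficient = false;
        // Levels between the watermarks keep the previous state.
        return m_sufficient;
    }

private:
    int m_low;
    int m_high;
    bool m_sufficient { false };
};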

For the threshold change detection to work, we need to know the previous buffering level. There's a problem, though: the current code checked the levels from several scattered places, so only one of those places (the first one that detected the threshold crossing at a given moment) would properly react. The other places would miss the detection and operate improperly, because the "previous buffering level value" had been overwritten with the new one when the evaluation had been done before. To solve this, I centralized the detection in a single place "per cycle" (in updateBufferingStatus()), and then used the detection conclusions from updateStates().

So, with all this in mind, I refactored the buffering logic as https://commits.webkit.org/284072@main, so now WebKit GStreamer has buffering code that is much more robust than before. The instabilities observed on Broadcom devices were gone and I could, at last, close Issue 1309.


16 Oct 2024 6:12am GMT

10 Sep 2024

Planet Maemo

Don’t shoot yourself in the foot with the C++ move constructor

Move semantics can be very useful to transfer ownership of resources, but like many other C++ features, it's one more double-edged sword that can harm you in new and interesting ways if you don't read the small print.

For instance, if object moving involves super and subclasses, you have to keep an extra eye on what's actually happening. Consider the following classes A and B, where the latter inherits from the former:

#include <stdio.h>
#include <utility>

// Prints the function signature and the address of the current object.
#define PF printf("%s %p\n", __PRETTY_FUNCTION__, this)

class A {
 public:
 A() { PF; }
 virtual ~A() { PF; }
 // Move constructor: only knows how to move A's fields.
 A(A&& other)
 {
  PF;
  std::swap(i, other.i);
 }

 int i = 0;
};

class B : public A {
 public:
 B() { PF; }
 virtual ~B() { PF; }
 // Move constructor: moves B's own field and A's field explicitly.
 B(B&& other)
 {
  PF;
  std::swap(i, other.i);
  std::swap(j, other.j);
 }

 int j = 0;
};

If your project is complex, it would be natural that your code involves abstractions, with part of the responsibility held by the superclass, and some other part by the subclass. Consider also that some of that code in the superclass involves move semantics, so a subclass object must be moved to become a superclass object, then perform some action, and then moved back to become the subclass again. That's a really bad idea!

Consider this usage of the classes defined before:

int main(int, char* argv[]) {
 printf("Creating B b1\n");
 B b1;
 b1.i = 1;
 b1.j = 2;
 printf("b1.i = %d\n", b1.i);
 printf("b1.j = %d\n", b1.j);
 printf("Moving (B)b1 to (A)a. Which move constructor will be used?\n");
 A a(std::move(b1));
 printf("a.i = %d\n", a.i);
 // This may be reading memory beyond the object boundaries, which may not be
 // obvious if you think that (A)a is sort of a (B)b1 in disguise, but it's not!
 printf("(B)a.j = %d\n", reinterpret_cast<B&>(a).j);
 printf("Moving (A)a to (B)b2. Which move constructor will be used?\n");
 B b2(reinterpret_cast<B&&>(std::move(a)));
 printf("b2.i = %d\n", b2.i);
 printf("b2.j = %d\n", b2.j);
 printf("^^^ Oops!! Somebody forgot to copy the j field when creating (A)a. Oh, wait... (A)a never had a j field in the first place\n");
 printf("Destroying b2, a, b1\n");
 return 0;
}

If you've read the code, those printfs will have already given you some hints about the harsh truth: if you move a subclass object to become a superclass object, you're losing all the subclass-specific data, because no matter whether the original instance was one from a subclass, only the superclass move constructor will be used. And that's bad, very bad. This problem is called object slicing. It's specific to C++ and can also happen with copy constructors. See it with your own eyes:

Creating B b1
A::A() 0x7ffd544ca690
B::B() 0x7ffd544ca690
b1.i = 1
b1.j = 2
Moving (B)b1 to (A)a. Which move constructor will be used?
A::A(A&&) 0x7ffd544ca6a0
a.i = 1
(B)a.j = 0
Moving (A)a to (B)b2. Which move constructor will be used?
A::A() 0x7ffd544ca6b0
B::B(B&&) 0x7ffd544ca6b0
b2.i = 1
b2.j = 0
^^^ Oops!! Somebody forgot to copy the j field when creating (A)a. Oh, wait... (A)a never had a j field in the first place
Destroying b2, a, b1
virtual B::~B() 0x7ffd544ca6b0
virtual A::~A() 0x7ffd544ca6b0
virtual A::~A() 0x7ffd544ca6a0
virtual B::~B() 0x7ffd544ca690
virtual A::~A() 0x7ffd544ca690

Why can something that seems so obvious become such a problem, you may ask? Well, it depends on the context. It's not unusual for the codebase of a long-lived project to have started using raw pointers for everything, then switched to references as a way to get rid of null-pointer issues where possible, and finally switched to whole objects and copy/move semantics to get rid of pointer issues (references are just pointers in disguise after all, and there are ways to produce null and dangling references by mistake). But this last step, moving from references to copy/move semantics on whole objects, comes with the small object-slicing nuance explained in this post, and when the size of the project and all the different things you have to take into account steal your focus, it's easy to forget about this.

So, please remember: never use move semantics that convert your precious subclass instance to a superclass instance thinking that the subclass data will survive. You may regret it and inadvertently create difficult-to-debug problems.
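If you want the compiler to catch this mistake for you, one option (my suggestion, not something from the code discussed above) is to make the superclass's copy and move constructors protected, and to transfer ownership through pointers instead:

#include <memory>
#include <utility>

class A {
public:
    A() = default;
    virtual ~A() = default;

protected:
    // Protected copy/move: derived classes can still use them in their
    // own constructors, but `A a(std::move(b));` in user code no longer
    // compiles, so accidental slicing is rejected at build time.
    A(const A&) = default;
    A(A&&) = default;
};

class B : public A {
public:
    int j = 0;
};

int main()
{
    B b;
    // A a(std::move(b));  // error: A's move constructor is protected
    std::unique_ptr<A> a = std::make_unique<B>(std::move(b));  // fine: no slicing
    return 0;
}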

Happy coding!


10 Sep 2024 7:58am GMT

18 Sep 2022

Planet Openmoko

Harald "LaF0rge" Welte: Deployment of future community TDMoIP hub

I've mentioned some of my various retronetworking projects in some past blog posts. One of those projects is Osmocom Community TDM over IP (OCTOI). During the past 5 or so months, we have been using a number of GPS-synchronized open source icE1usb devices interconnected by a new, efficient but still transparent TDMoIP protocol in order to run a distributed TDM/PDH network. This network is currently only used to provide ISDN services to retronetworking enthusiasts, but other uses like frame relay have also been validated.

So far, the central hub of this OCTOI network has been operating in the basement of my home, behind a consumer-grade DOCSIS cable modem connection. Given that TDMoIP is relatively sensitive to packet loss, this has been sub-optimal.

Luckily some of my old friends at noris.net have agreed to host a new OCTOI hub free of charge in one of their ultra-reliable co-location data centres. I've already been hosting some other machines there for 20+ years, and noris.net is a good fit given that they were, in their early days as an ISP, the driving force in the early 90s behind u-isdn, one of the Linux kernel ISDN stacks. So after many decades, ISDN returns to them in a very different way.

Side note: In case you're curious, a reconstructed partial release history of the u-isdn code can be found on gitea.osmocom.org

But I digress. So today, there was the installation of this new OCTOI hub setup. It has been prepared for several weeks in advance, and the hub contains two circuit boards designed entirely for this use case. The most difficult challenge was the fact that this data centre has no existing GPS RF distribution, and the rack is ~100m of CAT5 cable (no fiber!) away from the roof. So we faced the challenge of passing the 1PPS (1 pulse per second) signal reliably through several steps of lightning/over-voltage protection into the icE1usb, whose internal GPS-DO serves as a grandmaster clock for the TDM network.

The equipment deployed in this installation currently contains:

For more details, see this wiki page and this ticket

Now that the physical deployment has been made, the next steps will be to migrate all the TDMoIP links from the existing user base over to the new hub. We hope the reliability and performance will be much better than behind DOCSIS.

In any case, this new setup for sure has a lot of capacity to connect many more users to this network. At this point we can still only offer E1 PRI interfaces. I expect that at some point during the coming winter the project for remote TDMoIP BRI (S/T, S0-Bus) connectivity will become available.

Acknowledgements

I'd like to thank everyone helping this effort, specifically:

* Sylvain "tnt" Munaut for his work on the RS422 interface board (+ gateware/firmware)
* noris.net for sponsoring the co-location
* sysmocom for sponsoring the EPYC server hardware

18 Sep 2022 10:00pm GMT

08 Sep 2022

Planet Openmoko

Harald "LaF0rge" Welte: Progress on the ITU-T V5 access network front

Almost one year after my post regarding first steps towards a V5 implementation, some friends and I were finally able to visit Wobcom, a small German city carrier, and pick up a lot of decommissioned POTS/ISDN/PDH/SDH equipment, primarily V5 access networks.

This means that a number of retronetworking enthusiasts now have a chance to play with Siemens Fastlink, Nokia EKSOS and DeTeWe ALIAN access networks/multiplexers.

My primary interest is in Nokia EKSOS, which looks like a rather easy, low-complexity target. As one of the first steps, I took PCB photographs of the various modules/cards in the shelf, took note of the main chip designations, and started searching for the related data sheets.

The results can be found in the Osmocom retronetworking wiki, with https://osmocom.org/projects/retronetworking/wiki/Nokia_EKSOS being the main entry page, and sub-pages about

In short: Unsurprisingly, a lot of Infineon analog and digital ICs for the POTS and ISDN ports, as well as a number of Motorola M68k based QUICC32 microprocessors and several unknown ASICs.

So with V5 hardware at my disposal, I've slowly re-started my efforts to implement the LE (local exchange) side of the V5 protocol stack, with the goal of eventually being able to interface those V5 AN with the Osmocom Community TDM over IP network. Once that is in place, we should also be able to offer real ISDN Uk0 (BRI) and POTS lines at retrocomputing events or hacker camps in the coming years.

08 Sep 2022 10:00pm GMT

Harald "LaF0rge" Welte: Clock sync trouble with Digium cards and timing cables

If you have ever worked with Digium (now part of Sangoma) digital telephony interface cards such as the TE110/410/420/820 (single to octal E1/T1/J1 PRI cards), you will probably have seen that they always have a timing connector, where the timing information can be passed from one card to another.

In PDH/ISDN (or even SDH) networks, it is very important to have a synchronized clock across the network. If the clocks are drifting, there will be underruns or overruns, with associated phase jumps that are particularly dangerous when analog modem calls are transported.

In traditional ISDN use cases, the clock is always provided by the network operator, and any customer/user side equipment is expected to synchronize to that clock.

So this Digium timing cable is needed in applications where you have more PRI lines than possible with one card, but only a subset of your lines (spans) are connected to the public operator. The timing cable should make sure that the clock received on one port from the public operator should be used as transmit bit-clock on all of the other ports, no matter on which card.

Unfortunately this decades-old Digium timing cable approach seems to suffer from some problems.

bursty bit clock changes until link is up

The first problem is that the downstream ports' transmit bit clock was jumping around in bursts every two or so seconds. You can see an oscillogram of the E1 master signal (yellow) received by one TE820 card and the transmit of the slave ports on the other card at https://people.osmocom.org/laforge/photos/te820_timingcable_problem.mp4

As you can see, for some seconds the two clocks seem to be in perfect lock/sync, but in between there are periods of immense clock drift.

What I'd have expected is the behavior that can be seen at https://people.osmocom.org/laforge/photos/te820_notimingcable_loopback.mp4 - which shows a similar setup but without the use of a timing cable: Both the master clock input and the clock output were connected on the same TE820 card.

As I found out much later, this problem only occurs until any of the downstream/slave ports is fully OK/GREEN.

This is surprising, as any other E1 equipment I've seen always transmits at a constant bit clock irrespective whether there's any signal in the opposite direction, and irrespective of whether any other ports are up/aligned or not.

But ok, once you adjust your expectations to this Digium peculiarity, you can actually proceed.

clock drift between master and slave cards

Once any of the spans of a slave card on the timing bus are fully aligned, the transmit bit clocks of all of its ports appear to be in sync/lock (yay!), but unfortunately only at first glance.

When looking at it for more than a few seconds, one can see a slow, continuous drift of the slave bit clocks compared to the master :(

Some initial measurements show that the clock of the slave card of the timing cable is drifting at about 12.5 ppb (parts per billion) when compared against the master clock reference.
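To put 12.5 ppb in perspective, here's a quick back-of-the-envelope computation (my own arithmetic, assuming the standard 2.048 Mbit/s E1 line rate and 256-bit frames):

#include <cstdio>

int main()
{
    const double driftRatio = 12.5e-9;   // 12.5 parts per billion
    const double secondsPerDay = 86400.0;
    const double e1BitRate = 2048000.0;  // E1 line rate, bits/s
    const double e1FrameBits = 256.0;    // 32 timeslots x 8 bits

    double slipSeconds = driftRatio * secondsPerDay;  // ~1.08 ms/day
    double slipBits = slipSeconds * e1BitRate;        // ~2212 bits/day
    double slipFrames = slipBits / e1FrameBits;       // ~8.6 frames/day

    std::printf("%.2f ms, %.0f bits, %.1f frames of slip per day\n",
                slipSeconds * 1e3, slipBits, slipFrames);
    return 0;
}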

This is rather disappointing, given that the whole point of a timing cable is to ensure you have one reference clock with all signals locked to it.

The work-around

If you are willing to sacrifice one port (span) of each card, you can work around that slow-clock-drift issue by connecting an external loopback cable. So the master card is configured to use the clock provided by the upstream provider. Its other ports (spans) will transmit at the exact recovered clock rate with no drift. You can use any of those ports to provide the clock reference to a port on the slave card using an external loopback cable.

In this setup, your slave card[s] will have perfect bit clock sync/lock.

It's just rather sad that you need to sacrifice ports to achieve proper clock sync, something that the timing connectors and cables claim to do, but in reality don't achieve, at least not in my setup with the most modern and high-end octal-port PCIe cards (TE820).

08 Sep 2022 10:00pm GMT