24 Jul 2025
Android Developers Blog
#WeArePlay: 10 million downloads and counting, meet app and game founders from across the U.S.
Posted by Robbie McLachlan, Developer Marketing
They saw a problem and built the answer. Meet 20 #WeArePlay founders from across the U.S. who started their entrepreneurial journey with a question like: what if reading was no longer a barrier for anyone? What if an app could connect neighbors to fight local hunger? What if fitness or self-care could feel as engaging as playing a game?
These new stories showcase how innovation often starts with finding the answer to a personal problem. Here are just a few of our favorites:
Cliff's app Speechify makes the written word accessible to all
Growing up with dyslexia, Cliff always wished he could enjoy books but found reading them challenging. After moving to the U.S., the then-college student turned that personal challenge into a solution for millions. His app, Speechify, empowers people by turning any text, from PDFs to web pages, into audio. By making the written word accessible to all, Cliff's innovation gives students, professionals, and auditory learners a new kind of independence.
Jenny's game Run Legends turns everyday fitness into a social adventure
As a teen, Jenny funded her computer science studies by teaching herself to code and publishing over 100 games. A passionate cross-country runner, she wanted to combine her love for gaming and fitness to make exercise feel more like an adventure. The result is Run Legends, a multiplayer RPG where players battle monsters by moving in real life. Jenny's on a mission to blend all types of exercise with playful storytelling, turning everyday fitness into a fun, social, and heroic quest.
Nino and Stephanie's app Finch makes self-care a rewarding daily habit
As engineers, Nino and Stephanie knew the power of technology but found the world of self-care apps overwhelming. Inspired by their own mental health journeys and a gamified app Stephanie built in college, they created Finch. The app introduces a fresh take on the virtual pet: by completing small, positive actions for yourself, like journaling or practicing breathing exercises, you care for your digital companion. With over 10 million downloads, Finch has helped people around the world build healthier habits. With seasonal events every month and growing personalization, the app continues to evolve to make self-care more fun and rewarding.
John's app The HungreeApp connects communities to fight hunger
John began coding as a nine-year-old in Nigeria, sometimes with just a pen and paper. After moving to the U.S., he was struck by how much food from events was wasted while people nearby went hungry. That spark led him to create The HungreeApp, a platform that connects communities with free, surplus food from businesses and restaurants. John's ingenuity turns waste into opportunity, creating a more connected and resourceful nation, one meal at a time.
Anthony's game studio Tech Tree Games turns a passion for idle games into cosmic adventures for aspiring tycoons
While working as a chemical engineer, Anthony dreamed of creating an idle game like the ones he loved to play, leading him to teach himself how to code from scratch. This passion project turned into his studio Tech Tree Games and the hit title Idle Planet Miner, where players grow a space mining empire filled with mystical planets and alluring gems. After releasing a 2.0 update with enhanced visuals for the game, Anthony is back in prototyping mode with new titles in the pipeline.
Discover more #WeArePlay stories from the U.S. and from across the globe.

24 Jul 2025 4:00pm GMT
17 Jul 2025
#WeArePlay: With over 3 billion downloads, meet the people behind Amanotes
Posted by Robbie McLachlan - Developer Marketing
In our latest #WeArePlay film, which celebrates the people behind apps and games on Google Play, we meet Bill and Silver - the duo behind Amanotes. Their game company has reached over 3 billion downloads with their mission 'everyone can music'. Their titles, including the global hit Magic Tiles 3, turn playing musical instruments into a fun, easy, and interactive experience, with no musical background needed. Discover how Amanotes blends creativity and technology to bring joy and connection to billions of players around the world.
What inspired you to create Amanotes?
Bill: It all began with a question I'd pursued for over 20 years - how can technology make music even more beautiful? I grew up in a musical family, surrounded by instruments, but I also loved building things with tech. Amanotes became the space where I could bring those two passions together.
Silver: Honestly, I wasn't planning to start a company. I had just finished studying entrepreneurship and was looking to join a startup, not launch one. I dropped a message in an online group saying I wanted to find a team to work with, and Bill reached out. We met for coffee, talked for about an hour, and by the end, we just said, why not give it a shot? That one meeting turned into ten years of building Amanotes.
Do you remember the first time you realized your game was more than just a game and that it could change someone's life?
Silver: There's one moment I'll never forget. A woman in the U.S. left a review saying she used to be a pianist, but after an accident, she lost use of some of her fingers and couldn't play anymore. Then she found Magic Tiles. She said the game gave her that feeling of playing again, even without full movement. That's when it hit me. We weren't just building a game. We were helping people reconnect with something they thought they'd lost.

How has Google Play helped your journey?
Silver: Google Play has been a huge part of our story. It was actually the first platform we ever published on. The audience was global from day one, which gave us the reach we needed to grow fast. We made great use of tools such as Firebase for A/B testing. We also relied on the Play Console for analytics and set custom pricing by country. Without Google Play, Amanotes wouldn't be where it is today.

What's next for Amanotes?
Silver: Music will always be the soul of what we do, but now we're building games with more depth. We want to go beyond just tapping to songs. We're adding stories, challenges, and richer gameplay on top of the music. We've got a whole lineup of new games in the works. Each one is a chance to push the boundaries of what music games can be.
Discover other inspiring app and game founders featured in #WeArePlay.

17 Jul 2025 4:00pm GMT
15 Jul 2025
New tools to help drive success for one-time products
Posted by Laura Nechita - Product Manager, Google Play and Rejane França - Group Product Manager, Google Play
Starting today, Google Play is revamping the way developers can manage one-time products, providing greater flexibility and new ways to sell. Play has continually enhanced the ways developers can reach buyers by helping you diversify how you sell products.
In 2022, we brought more flexibility to subscriptions along with a new Console interface. Now we are bringing the same flexibility to one-time products and aligning their taxonomy with subscriptions. Previously known as in-app products, one-time product purchases are a vital way for developers to monetize on Google Play. As this business model continues to evolve, we've heard from many of you that you need more flexibility and less complexity in how you offer these digital products.
To address these needs, we're launching new capabilities and a new way of thinking about your products that can help you grow your business. At its core, we've separated what the product is from how you sell it. For each one-time product, you can now configure multiple purchase options and offers. This allows you to sell the same product in multiple ways, reducing operational costs by removing the need to create and manage an ever-increasing number of catalog items.
You might have already noticed some changes as we introduce this new model, which provides a more structured way to define and manage your one-time product offerings.
Introducing the new model

We're introducing a new three-level hierarchy for defining and managing one-time products. This new structure builds upon concepts already familiar from our subscription model and aligns the taxonomy for all of your in-app product offerings on Play.
- One-time product: This object defines what the user is buying. Think of it as the core item in your catalog, such as a "Diamond sword", "Coins" or "No ads".
- Purchase option: This defines how the entitlement is granted to the user, its price, and where the product will be available. A single one-time product can have multiple purchase options representing different ways to acquire it, such as buying it or renting it for a set period of time. Purchase options now have two distinct types: buy and rent.
- Offer: Offers further modify a purchase option and can be used to model discounts or pre-orders. A single purchase option can have multiple offers associated with it.
This allows for a more organized and efficient way to manage your catalog. For instance, you can have one "Diamond sword" product and offer it with a "Buy" purchase option in the US for $10 and a "Rent" purchase option in the UK for £5. This new taxonomy also lets Play better understand your catalog, helping you further amplify your impact across Play surfaces.
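To make the hierarchy concrete, here is a small sketch of how a catalog could be modeled under this structure. The type and field names are invented for illustration; they are not the actual Play Developer API resources:

```cpp
#include <optional>
#include <string>
#include <vector>

// Illustrative model of the three-level hierarchy (names invented for this
// sketch, not actual Play Developer API types).
enum class PurchaseType { Buy, Rent };

struct Offer {
    std::string offerId;
    int discountPercent = 0;  // e.g. a promotional discount on the base price
};

struct PurchaseOption {
    std::string optionId;
    PurchaseType type;        // how the entitlement is granted: buy or rent
    std::string regionCode;   // where this option is available
    long priceMicros;         // regional price, in micro-units
    std::vector<Offer> offers;
};

struct OneTimeProduct {
    std::string productId;                // the "what", e.g. "diamond_sword"
    std::vector<PurchaseOption> options;  // the "how": buy, rent, per region
};

// Find the purchase option of a given type available in a region, if any.
std::optional<PurchaseOption> findOption(const OneTimeProduct& p,
                                         PurchaseType type,
                                         const std::string& region) {
    for (const auto& opt : p.options)
        if (opt.type == type && opt.regionCode == region)
            return opt;
    return std::nullopt;
}
```

With this shape, the "Diamond sword" example above becomes a single product carrying two purchase options, instead of two separate catalog items.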
More flexibility to reach more users
The new model unlocks significant flexibility to help you reach a wider audience and cater to different user preferences.
- Sell in multiple ways: Once you've migrated to Play Billing Library (PBL) 8, you can set up different ways of selling the same product. This reduces the complexity of managing numerous individual products for slightly different scenarios.
- Introducing rentals: We're introducing the ability to configure items that are sold as rentals. Users have access to the item for a set duration of time. You can define the rental period, which is the amount of time a user has the entitlement after completing the purchase, and an optional expiration period, which is the time after starting consumption before the entitlement is revoked.
- Pre-order capabilities: You can now set up one-time products to be bought before their release through pre-order offers. You can configure the start date, end date, and the release date for these offers, and even include a discount. Users who pre-order agree to pay on the release date unless they cancel beforehand.
- No default price: We are removing the concept of a default price for a product. You can now set and manage prices in bulk or individually for each region.
- Regional pricing and availability: Price changes can now be applied to purchase options and offers, allowing you to set different prices in different regions. Furthermore, you can also configure the regional availability for both purchase options and offers. This functionality is available for paid apps in addition to one-time products.
- Offers for promotions: Leverage offers to create various promotions, such as discounts on your base purchase price or special conditions for early access through pre-orders.
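As an illustration of the rental timing rules above, here is a sketch of how the two periods could interact. The names are invented, and treating the entitlement as ending at whichever deadline comes first is an assumption for illustration, not documented Play behavior:

```cpp
#include <chrono>
#include <optional>

using namespace std::chrono;

// Sketch of the rental timing rules (field names invented):
// - rentalPeriod: how long the entitlement lasts after completing the purchase
// - expirationPeriod: optional; once the user starts consuming the item, the
//   entitlement is revoked this long after consumption started
struct RentalTerms {
    seconds rentalPeriod;
    std::optional<seconds> expirationPeriod;
};

// Assumed interpretation: the entitlement ends at purchase + rentalPeriod, or
// earlier if consumption started and the expiration period elapses first.
system_clock::time_point entitlementEnd(
        const RentalTerms& terms,
        system_clock::time_point purchasedAt,
        std::optional<system_clock::time_point> consumptionStart) {
    auto end = purchasedAt + terms.rentalPeriod;
    if (consumptionStart && terms.expirationPeriod) {
        auto consumptionEnd = *consumptionStart + *terms.expirationPeriod;
        if (consumptionEnd < end)
            end = consumptionEnd;
    }
    return end;
}
```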
To use these new features, you first need to upgrade to PBL 8.0. Then you'll need to use the new monetization.onetimeproducts service of the Play Developer API or the Play Console, and integrate with the queryProductDetailsAsync API to take advantage of these new capabilities. While querySkuDetailsAsync and the inappproducts service are not supported with the new model, they will continue to be supported for as long as PBL 7 is supported.
Important considerations
- With this change, we offer a backwards-compatible way to port your existing SKUs to the new model. The migration happens differently depending on how you choose to interact with your catalog the first time you change the metadata for one or more products.
- New products created through the Play Console UI are normalized. Products created or managed with the existing inappproducts service won't support these new features; to access them, you'll need to convert existing products in the Play Console UI. Once converted, a product can only be managed through the new Play Developer API or the Play Console. Products created through the new monetization.onetimeproducts service or through the Play Console are already converted.
- Buy purchase options marked as 'Backwards compatible' will be returned in responses to querySkuDetailsAsync API calls. At launch, all existing products have a backwards-compatible purchase option.
- At the time of this post, the pre-orders capability is available through the Early Access Program (EAP) only. If you are interested, please sign up.
- One-time products will be reflected in the earnings reports at launch (the Base plan ID and Offer ID columns will be populated for newly configured one-time products). To minimize the potential for breaking changes, we will be updating these column names in the earnings reports later this year.
We encourage you to explore the new Play Developer API and the updated Play Console interface to see how this enhanced flexibility can help you better manage your catalog and grow your business.
We're excited to see how you leverage these new tools to connect with your users in innovative ways.

15 Jul 2025 4:00pm GMT
05 Jun 2025
Planet Maemo
Mobile blogging, the past and the future
This blog has been running more or less continuously since the mid-nineties. The site has existed in multiple forms and with different ways to publish, but what's common is that at almost all points there was a mechanism to publish while on the move.
Psion, documents over FTP
In the early 2000s we were into adventure motorcycling. To be able to share our adventures, we implemented a way to publish blogs while on the go. The device that enabled this was the Psion Series 5, a handheld computer that was very much a device ahead of its time.
The Psion had a reasonably sized keyboard, a good native word processing app, and battery life good for weeks of use. Writing while underway was easy. The Psion could use a mobile phone as a modem over an infrared connection, and with that we could upload the documents to a server over FTP.
Server-side, a cron job would grab the new documents, converting them to HTML and adding them to our CMS.
In the early days of GPRS, getting this to work while roaming was quite tricky. But the system served us well for years.
If we wanted to include photos to the stories, we'd have to find an Internet cafe.
- To the Alps is a post from these times. Lots more in the motorcycling category
SMS and MMS
For an even more mobile setup, I implemented an SMS-based blogging system. We had an old phone connected to a computer back in the office, and I could write to my blog by simply sending a text. These would automatically end up as a new paragraph in the latest post. If I started the text with NEWPOST, an empty blog post would be created with the rest of that message's text as the title.
- In the Caucasus is a good example of a post from this era
As I got into neogeography, I could also send a NEWPOSITION message. This would update my position on the map, connecting weather metadata to the posts.
As camera phones became available, we wanted to do pictures too. For the Death Monkey rally where we rode minimotorcycles from Helsinki to Gibraltar, we implemented an MMS-based system. With that the entries could include both text and pictures. But for that you needed a gateway, which was really only realistic for an event with sponsors.
- Mystery of the Missing Monkey is typical. Some more in Internet Archive
Photos over email
A much easier setup than MMS was to return to something like the old Psion setup, but sending email with picture attachments instead of word processor documents. This was something the new breed of (pre-iPhone) smartphones was capable of, and by then the roaming question was mostly sorted.
And so my blog included a new "moblog" section. This is where I could share my daily activities as poor-quality pictures. Sort of how people would use Instagram a few years later.
- Internet Archive has some of my old moblogs but nowadays, I post similar stuff on Pixelfed
Pause
Then there was sort of a long pause in mobile blogging advancements. Modern smartphones, data roaming, and WiFi hotspots had become ubiquitous.
In the meantime, the blog was also migrated to a Jekyll-based system hosted on AWS, which meant the old Midgard-based integrations were off the table.
And I traveled off-the-grid rarely enough that it didn't make sense to develop a system.
But now that we're sailing offshore, that has changed. Time for new systems and new ideas. Or maybe just a rehash of the old ones?
Starlink, Internet from Outer Space
Most cruising boats - ours included - now run the Starlink satellite broadband system. This enables full Internet, even in the middle of an ocean, even video calls! With this, we can use normal blogging tools. The usual one for us is GitJournal, which makes it easy to write Jekyll-style Markdown posts and push them to GitHub.
However, Starlink is a complicated, energy-hungry, and fragile system on an offshore boat. The policies might change at any time, preventing our way of using it, and the dishy itself, or the way we power it, may fail.
But despite what you'd think, even on a nerdy boat like ours, loss of Internet connectivity is not an emergency. And this is where the old-style mobile blogging mechanisms come in handy.
- Any of the 2025 Atlantic crossing posts is a good example of this setup in action
Inreach, texting with the cloud
Our backup system to Starlink is the Garmin Inreach. This is a tiny battery-powered device that connects to the Iridium satellite constellation. It allows tracking as well as basic text messaging.
When we head offshore we always enable tracking on the Inreach. This allows both our blog and our friends ashore to follow our progress.
I also made a simple integration where text updates sent to Garmin MapShare get fetched and published on our blog. Right now this is just plain text-based entries, but one could easily implement a command system similar to what I had over SMS back in the day.
One benefit of the Inreach is that we can also take it with us when we go on land adventures. And it'd even enable rudimentary communications if we found ourselves in a liferaft.
- There are various InReach integration hacks that could be used for more sophisticated data transfer
Sailmail and email over HF radio
The other potential backup for Starlink failures would be to go seriously old-school. It is possible to get email access via an SSB radio and a Pactor (or Vara) modem.
Our boat is already equipped with an isolated aft stay that can be used as an antenna. And with the popularity of Starlink, many cruisers are offloading their old HF radios.
Licensing-wise this system could be used either as a marine HF radio (requiring a Long Range Certificate), or amateur radio. So that part is something I need to work on. Thankfully post-COVID, radio amateur license exams can be done online.
With this setup we could send and receive text-based email. The Airmail application used for this can even do some automatic templating for position reports. We'd then need a mailbox that can receive these mails, and some automation to fetch and publish.
- Sailmail and No Foreign Land support structured data via email to update position. Their formats could be useful inspiration
05 Jun 2025 12:00am GMT
16 Oct 2024
Adding buffering hysteresis to the WebKit GStreamer video player
The <video> element implementation in WebKit does its job by using a multiplatform player that relies on a platform-specific implementation. In the specific case of glib platforms, which base their multimedia on GStreamer, that's MediaPlayerPrivateGStreamer.
The player private can have three buffering modes:
- On-disk buffering: This is the typical mode on desktop systems, but it is frequently disabled on purpose on embedded devices to avoid wearing out their flash storage. All the video content is downloaded to disk, and the buffering percentage refers to the total size of the video. A GstDownloadBuffer element is present in the pipeline in this case. Buffering level monitoring is done by polling the pipeline every second, using the fillTimerFired() method.
- In-memory buffering: This is the typical mode on embedded systems and on desktop systems in the case of streamed (live) content. The video is downloaded progressively and only the part of it ahead of the current playback time is buffered. A GstQueue2 element is present in the pipeline in this case. Buffering level monitoring is done by listening to GST_MESSAGE_BUFFERING bus messages and using the buffering level stored in them. This is the case that motivated the refactoring described in this blog post: it's what we actually wanted to correct on Broadcom platforms, and what motivated the addition of hysteresis working on all the platforms.
- Local files: Files, MediaStream sources and other special origins of video don't do buffering at all (no GstDownloadBuffer nor GstQueue2 element is present in the pipeline). They work like the on-disk buffering mode in the sense that fillTimerFired() is used, but the reported level is relative, much like in the streaming case. In the initial version of the refactoring I was unaware of this third case, and only realized it existed when tests triggered the assert I had added to ensure that the on-disk buffering method was working in GST_BUFFERING_DOWNLOAD mode.
The current implementation (actually, its wpe-2.38 version) was showing some buffering problems on some Broadcom platforms when doing in-memory buffering. The buffering levels monitored by MediaPlayerPrivateGStreamer weren't accurate because the Nexus multimedia subsystem used on Broadcom platforms was doing its own internal buffering. Data wasn't being accumulated in the GstQueue2 element of playbin, because BrcmAudFilter/BrcmVidFilter was accepting all the buffers that the queue could provide. Because of that, the player private buffering logic was erratic, leading to many transitions between "buffer completely empty" and "buffer completely full". This, in turn, caused many transitions between the HaveEnoughData, HaveFutureData and HaveCurrentData readyStates in the player, leading to frequent pauses and unpauses on Broadcom platforms.

So, one of the first things I tried in order to solve this issue was to ask the Nexus PlayPump (the subsystem in charge of internal buffering in Nexus) about its internal levels, and add that to the levels reported by GstQueue2. There's also a GstMultiQueue in the pipeline that can hold a significant amount of buffers, so I asked it for its level too. Still, the buffering level instability was too high, so I added a moving average implementation to try to smooth it.
All these tweaks only make sense on Broadcom platforms, so they were guarded by ifdefs in a first version of the patch. Later, I migrated those dirty ifdefs to the new quirks abstraction added by Phil. A challenge of this migration was that I needed to store some attributes that were considered part of MediaPlayerPrivateGStreamer before. They still had to be somehow linked to the player private but only accessible by the platform specific code of the quirks. A special HashMap attribute stores those quirks attributes in an opaque way, so that only the specific quirk they belong to knows how to interpret them (using downcasting). I tried to use move semantics when storing the data, but was bitten by object slicing when trying to move instances of the superclass. In the end, moving the responsibility of creating the unique_ptr that stored the concrete subclass to the caller did the trick.
Even with all those changes, undesirable swings in the buffering level kept happening, and when doing a careful analysis of the causes I noticed that the monitoring of the buffering level was being done from different places (in different moments) and sometimes the level was regarded as "enough" and the moment right after, as "insufficient". This was because the buffering level threshold was one single value. That's something that a hysteresis mechanism (with low and high watermarks) can solve. So, a logical level change to "full" would only happen when the level goes above the high watermark, and a logical level change to "low" when it goes under the low watermark level.
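The watermark logic can be sketched like this (a minimal illustration with invented names, not the actual WebKit code):

```cpp
// Minimal hysteresis sketch: the logical state only flips to "enough data"
// when the level rises to or above the high watermark, and back to "low" when
// it falls to or below the low watermark. Levels in between keep the previous
// decision, suppressing rapid toggling around a single threshold.
class BufferingHysteresis {
public:
    BufferingHysteresis(int lowWatermark, int highWatermark)
        : m_low(lowWatermark), m_high(highWatermark) {}

    // Feed the current buffering level (0-100); returns the logical state.
    bool update(int level) {
        if (level >= m_high)
            m_haveEnoughData = true;
        else if (level <= m_low)
            m_haveEnoughData = false;
        // Between the watermarks: keep the previous decision.
        return m_haveEnoughData;
    }

private:
    int m_low;
    int m_high;
    bool m_haveEnoughData = false;
};
```

A level oscillating between, say, 40 and 60 would flip a single 50% threshold on every sample, but with watermarks at 20/80 it never changes the logical state at all.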
For the threshold change detection to work, we need to know the previous buffering level. There's a problem, though: the current code checked the levels from several scattered places, so only one of those places (the first one that detected the threshold crossing at a given moment) would properly react. The other places would miss the detection and operate improperly, because the "previous buffering level value" had been overwritten with the new one when the evaluation had been done before. To solve this, I centralized the detection in a single place "per cycle" (in updateBufferingStatus()), and then used the detection conclusions from updateStates().
So, with all this in mind, I refactored the buffering logic as https://commits.webkit.org/284072@main, and now WebKit GStreamer has buffering code that is much more robust than before. The instabilities observed on Broadcom devices were gone and I could, at last, close Issue 1309.
16 Oct 2024 6:12am GMT
10 Sep 2024
Don’t shoot yourself in the foot with the C++ move constructor
Move semantics can be very useful to transfer ownership of resources, but like many other C++ features, they're one more double-edged sword that can harm you in new and interesting ways if you don't read the small print.
For instance, if object moving involves super and subclasses, you have to keep an extra eye on what's actually happening. Consider the following classes A and B, where the latter inherits from the former:
#include <stdio.h>
#include <utility>

#define PF printf("%s %p\n", __PRETTY_FUNCTION__, this)

class A {
public:
    A() { PF; }
    virtual ~A() { PF; }
    A(A&& other) { PF; std::swap(i, other.i); }

    int i = 0;
};

class B : public A {
public:
    B() { PF; }
    virtual ~B() { PF; }
    B(B&& other) { PF; std::swap(i, other.i); std::swap(j, other.j); }

    int j = 0;
};
If your project is complex, it would be natural that your code involves abstractions, with part of the responsibility held by the superclass, and some other part by the subclass. Consider also that some of that code in the superclass involves move semantics, so a subclass object must be moved to become a superclass object, then perform some action, and then moved back to become the subclass again. That's a really bad idea!
Consider this usage of the classes defined before:
int main(int, char* argv[])
{
    printf("Creating B b1\n");
    B b1;
    b1.i = 1;
    b1.j = 2;
    printf("b1.i = %d\n", b1.i);
    printf("b1.j = %d\n", b1.j);

    printf("Moving (B)b1 to (A)a. Which move constructor will be used?\n");
    A a(std::move(b1));
    printf("a.i = %d\n", a.i);
    // This may be reading memory beyond the object boundaries, which may not be
    // obvious if you think that (A)a is sort of a (B)b1 in disguise, but it's not!
    printf("(B)a.j = %d\n", reinterpret_cast<B&>(a).j);

    printf("Moving (A)a to (B)b2. Which move constructor will be used?\n");
    B b2(reinterpret_cast<B&&>(std::move(a)));
    printf("b2.i = %d\n", b2.i);
    printf("b2.j = %d\n", b2.j);
    printf("^^^ Oops!! Somebody forgot to copy the j field when creating (A)a. Oh, wait... (A)a never had a j field in the first place\n");

    printf("Destroying b2, a, b1\n");
    return 0;
}
If you've read the code, those printfs will have already given you some hints about the harsh truth: if you move a subclass object to become a superclass object, you're losing all the subclass specific data, because no matter if the original instance was one from a subclass, only the superclass move constructor will be used. And that's bad, very bad. This problem is called object slicing. It's specific to C++ and can also happen with copy constructors. See it with your own eyes:
Creating B b1
A::A() 0x7ffd544ca690
B::B() 0x7ffd544ca690
b1.i = 1
b1.j = 2
Moving (B)b1 to (A)a. Which move constructor will be used?
A::A(A&&) 0x7ffd544ca6a0
a.i = 1
(B)a.j = 0
Moving (A)a to (B)b2. Which move constructor will be used?
A::A() 0x7ffd544ca6b0
B::B(B&&) 0x7ffd544ca6b0
b2.i = 1
b2.j = 0
^^^ Oops!! Somebody forgot to copy the j field when creating (A)a. Oh, wait... (A)a never had a j field in the first place
Destroying b2, a, b1
virtual B::~B() 0x7ffd544ca6b0
virtual A::~A() 0x7ffd544ca6b0
virtual A::~A() 0x7ffd544ca6a0
virtual B::~B() 0x7ffd544ca690
virtual A::~A() 0x7ffd544ca690
Why can something that seems so obvious become such a problem, you may ask? Well, it depends on the context. It's not unusual for the codebase of a long-lived project to have started using raw pointers for everything, then switched to references as a way to get rid of null pointer issues when possible, and finally switched to whole objects and copy/move semantics to get rid of pointer issues (references are just pointers in disguise after all, and there are ways to produce null and dangling references by mistake). But this last step of moving from references to copy/move semantics on whole objects comes with the object slicing nuance explained in this post, and when the size of the project and all the different things to take into account steal your focus, it's easy to forget about it.
So, please remember: never use move semantics that convert your precious subclass instance to a superclass instance thinking that the subclass data will survive. You may regret it and inadvertently create difficult-to-debug problems.
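One way to have the compiler catch this kind of mistake is to forbid moving and copying polymorphic objects by value altogether and go through an explicit, virtual clone() instead. Here is a sketch of that common idiom applied to the A/B classes above; it's one possible defense, not a universal fix:

```cpp
#include <memory>

// Sketch: suppress the operations that can slice, and provide an explicit,
// type-preserving way to duplicate objects instead.
class A {
public:
    A() = default;
    virtual ~A() = default;
    A(A&&) = delete;             // a B moved through an A&& would be sliced
    A& operator=(A&&) = delete;

    virtual std::unique_ptr<A> clone() const { return std::unique_ptr<A>(new A(*this)); }

    int i = 0;

protected:
    A(const A&) = default;       // only reachable from clone() overrides
};

class B : public A {
public:
    B() = default;

    std::unique_ptr<A> clone() const override { return std::unique_ptr<A>(new B(*this)); }

    int j = 0;

protected:
    B(const B&) = default;
};
```

With this design, the problematic `A a(std::move(b1));` from the example no longer compiles, and duplicating through clone() preserves the dynamic type, including the j field.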
Happy coding!
10 Sep 2024 7:58am GMT
18 Sep 2022
Planet Openmoko
Harald "LaF0rge" Welte: Deployment of future community TDMoIP hub
I've mentioned some of my various retronetworking projects in past blog posts. One of those projects is Osmocom Community TDM over IP (OCTOI). During the past 5 or so months, we have been using a number of GPS-synchronized open source icE1usb devices interconnected by a new, efficient but still transparent TDMoIP protocol in order to run a distributed TDM/PDH network. This network is currently only used to provide ISDN services to retronetworking enthusiasts, but other uses like frame relay have also been validated.
So far, the central hub of this OCTOI network has been operating in the basement of my home, behind a consumer-grade DOCSIS cable modem connection. Given that TDMoIP is relatively sensitive to packet loss, this has been sub-optimal.
Luckily some of my old friends at noris.net have agreed to host a new OCTOI hub free of charge in one of their ultra-reliable co-location data centres. I've already been hosting some other machines there for 20+ years, and noris.net is a good fit given that they were - in their early days as an ISP - the driving force in the early 90s behind one of the Linux kernel ISDN stacks, called u-isdn. So after many decades, ISDN returns to them in a very different way.
Side note: In case you're curious, a reconstructed partial release history of the u-isdn code can be found on gitea.osmocom.org
But I digress. So today was the installation of this new OCTOI hub setup. It had been prepared for several weeks in advance, and the hub contains two circuit boards designed specifically for this use case. The most difficult challenge was the fact that this data centre has no existing GPS RF distribution, and the rack is ~100m of CAT5 cable (no fiber!) away from the roof. So we faced the challenge of passing the 1PPS (1 pulse per second) signal reliably through several steps of lightning/over-voltage protection into the icE1usb, whose internal GPS-DO serves as a grandmaster clock for the TDM network.
The equipment deployed in this installation currently comprises:
- a rather beefy Supermicro 2U server with an EPYC 7113P CPU and 4x PCIe slots, two of which are populated with Digium TE820 cards, for a total of 16 E1 ports
- an icE1usb with RS422 interface board, connected via 100m of RS422 to an Ericsson GPS03 receiver. There are two layers of over-voltage protection on the RS422 link (each with gas discharge tubes and TVS diodes) and two stages of over-voltage protection in the coaxial cable between antenna and GPS receiver
- a Livingston Portmaster3 RAS server
- a Cisco AS5400 RAS server
For more details, see this wiki page and this ticket.
Now that the physical deployment is complete, the next step will be to migrate all the TDMoIP links of the existing user base over to the new hub. We hope reliability and performance will be much better than behind DOCSIS.
In any case, this new setup certainly has plenty of capacity to connect many more users to this network. At this point we can still only offer E1 PRI interfaces, but I expect that remote TDMoIP BRI (S/T, S0-bus) connectivity will become available at some point during the coming winter.
Acknowledgements
I'd like to thank everyone helping this effort, specifically:
- Sylvain "tnt" Munaut for his work on the RS422 interface board (+ gateware/firmware)
- noris.net for sponsoring the co-location
- sysmocom for sponsoring the EPYC server hardware
18 Sep 2022 10:00pm GMT
08 Sep 2022
Planet Openmoko
Harald "LaF0rge" Welte: Progress on the ITU-T V5 access network front
Almost one year after my post regarding first steps towards a V5 implementation, some friends and I were finally able to visit Wobcom, a small German city carrier, and pick up a lot of decommissioned POTS/ISDN/PDH/SDH equipment, primarily V5 access networks.
This means that a number of retronetworking enthusiasts now have a chance to play with Siemens Fastlink, Nokia EKSOS and DeTeWe ALIAN access networks/multiplexers.
My primary interest is in the Nokia EKSOS, which looks like a rather easy, low-complexity target. As one of the first steps, I took PCB photographs of the various modules/cards in the shelf, took note of the main chip designations and started searching for the related data sheets.
The results can be found in the Osmocom retronetworking wiki, with https://osmocom.org/projects/retronetworking/wiki/Nokia_EKSOS being the main entry page, and sub-pages about
In short: unsurprisingly, a lot of Infineon analog and digital ICs for the POTS and ISDN ports, as well as a number of Motorola M68k-based QUICC32 microprocessors and several unknown ASICs.
So with V5 hardware at my disposal, I've slowly re-started my efforts to implement the LE (local exchange) side of the V5 protocol stack, with the goal of eventually being able to interface those V5 AN with the Osmocom Community TDM over IP network. Once that is in place, we should also be able to offer real ISDN Uk0 (BRI) and POTS lines at retrocomputing events or hacker camps in the coming years.
08 Sep 2022 10:00pm GMT
Harald "LaF0rge" Welte: Clock sync trouble with Digium cards and timing cables
If you have ever worked with Digium (now part of Sangoma) digital telephony interface cards such as the TE110/410/420/820 (single to octal E1/T1/J1 PRI cards), you will probably have seen that they always have a timing connector, where the timing information can be passed from one card to another.
In PDH/ISDN (or even SDH) networks, it is very important to have a synchronized clock across the network. If the clocks are drifting, there will be underruns or overruns, with associated phase jumps that are particularly dangerous when analog modem calls are transported.
In traditional ISDN use cases, the clock is always provided by the network operator, and any customer/user side equipment is expected to synchronize to that clock.
So this Digium timing cable is needed in applications where you have more PRI lines than fit on a single card, but only a subset of your lines (spans) is connected to the public operator. The timing cable should ensure that the clock recovered on one port from the public operator is used as the transmit bit-clock on all of the other ports, no matter on which card.
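In DAHDI terms (the driver framework used with these Digium cards), clock selection is expressed via the timing priority field of each span line in /etc/dahdi/system.conf. A minimal sketch for a card whose first span faces the public operator might look like the following; the span numbers, framing and coding are my own assumptions for a typical E1 setup, not taken from the actual hub configuration:

```
# span=<spannum>,<timing>,<LBO>,<framing>,<coding>
# timing=0:        DAHDI itself provides the clock on this span
# timing=N (N>0):  recover clock from this span, lower N = higher priority
span=1,1,0,ccs,hdb3   # upstream PRI from the operator: primary clock source
span=2,0,0,ccs,hdb3   # downstream span: transmits using the system clock
span=3,0,0,ccs,hdb3
```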
Unfortunately this decades-old Digium timing cable approach seems to suffer from some problems.
bursty bit clock changes until link is up
The first problem is that the downstream port's transmit bit clock was jumping around in bursts every two or so seconds. You can see an oscillogram of the E1 master signal (yellow) received by one TE820 card and the transmit of the slave ports on the other card at https://people.osmocom.org/laforge/photos/te820_timingcable_problem.mp4
As you can see, for some seconds the two clocks seem to be in perfect lock/sync, but in between there are periods of immense clock drift.
What I'd have expected is the behavior that can be seen at https://people.osmocom.org/laforge/photos/te820_notimingcable_loopback.mp4 - which shows a similar setup but without the use of a timing cable: Both the master clock input and the clock output were connected on the same TE820 card.
As I found out much later, this problem only occurs until any of the downstream/slave ports is fully OK/GREEN.
This is surprising, as any other E1 equipment I've seen always transmits at a constant bit clock, irrespective of whether there's any signal in the opposite direction, and irrespective of whether any other ports are up/aligned or not.
But ok, once you adjust your expectations to this Digium peculiarity, you can actually proceed.
clock drift between master and slave cards
Once any of the spans of a slave card on the timing bus is fully aligned, the transmit bit clocks of all of its ports appear to be in sync/lock - yay - but unfortunately only at first glance.
When looking at it for more than a few seconds, one can see a slow, continuous drift of the slave bit clocks compared to the master :(
Some initial measurements show that the clock of the slave card of the timing cable is drifting at about 12.5 ppb (parts per billion) when compared against the master clock reference.
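To put that 12.5 ppb figure into perspective, here is a quick back-of-the-envelope calculation of how often such a drift causes a full E1 frame slip. The E1 rates are the standard G.703/G.704 figures; the drift value is the measurement quoted above:

```python
# What does a 12.5 ppb clock offset mean on an E1 line?
E1_BIT_RATE = 2_048_000   # bits per second (ITU-T G.703)
E1_FRAME_BITS = 256       # 32 timeslots x 8 bits per frame (ITU-T G.704)
DRIFT = 12.5e-9           # measured offset: 12.5 parts per billion

# Excess (or missing) bits accumulated per second due to the drift
excess_bits_per_sec = E1_BIT_RATE * DRIFT            # ~0.0256 bit/s

# Time until a whole 256-bit frame has slipped
seconds_per_frame_slip = E1_FRAME_BITS / excess_bits_per_sec   # ~10000 s

# Frame slips accumulated over a day
slips_per_day = 86_400 / seconds_per_frame_slip      # ~8.6 slips/day

print(f"excess bits/s:        {excess_bits_per_sec:.4f}")
print(f"one frame slip every  {seconds_per_frame_slip / 3600:.2f} h")
print(f"frame slips per day:  {slips_per_day:.1f}")
```

So even a drift as small as 12.5 ppb produces a frame slip roughly every 2.8 hours, which is exactly the kind of event that kills analog modem calls.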
This is rather disappointing, given that the whole point of a timing cable is to ensure you have one reference clock with all signals locked to it.
The work-around
If you are willing to sacrifice one port (span) of each card, you can work around that slow-clock-drift issue by connecting an external loopback cable. So the master card is configured to use the clock provided by the upstream provider. Its other ports (spans) will transmit at the exact recovered clock rate with no drift. You can use any of those ports to provide the clock reference to a port on the slave card using an external loopback cable.
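Expressed in DAHDI system.conf terms, the work-around for a two-card setup might look like the sketch below, with span 8 on the master card looped externally into span 9 on the slave card. All span numbers, framing and coding here are illustrative assumptions, not the actual hub configuration:

```
# master card (spans 1-8)
span=1,1,0,ccs,hdb3   # upstream PRI: recover clock from the operator
span=8,0,0,ccs,hdb3   # sacrificed port: transmits at the system clock;
                      # external loopback cable runs from here to span 9
# slave card (spans 9-16)
span=9,2,0,ccs,hdb3   # sacrificed port: recover clock from the loopback
span=10,0,0,ccs,hdb3  # remaining downstream spans now run in lock
```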
In this setup, your slave card[s] will have perfect bit clock sync/lock.
It's just rather sad that you need to sacrifice ports just to achieve proper clock sync - something the timing connectors and cables claim to do, but in reality don't, at least not in my setup with the most modern and high-end octal-port PCIe cards (TE820).
08 Sep 2022 10:00pm GMT