01 Apr 2025

feedTalkAndroid

Anker Has a Whole Bunch of Discounts to Start the New Month

While other brands are using today to try and fool you, Anker is saving you money.

01 Apr 2025 4:30pm GMT

Idle Office Tycoon Codes – April 2025

Find all the latest Idle Office Tycoon codes here! Keep reading for more!

01 Apr 2025 4:24pm GMT

Alien Invasion – Codes For April 2025

Find the latest Alien Invasion codes here! Keep reading for more!

01 Apr 2025 4:19pm GMT

27 Mar 2025

feedAndroid Developers Blog

Media3 1.6.0 — what’s new?

Posted by Andrew Lewis - Software Engineer

This article is cross-published on Medium

Media3 1.6.0 is now available!

This release includes a host of bug fixes, performance improvements and new features. Read on to find out more, and as always please check out the full release notes for a comprehensive overview of changes in this release.


Playback, MediaSession and UI

ExoPlayer now supports HLS interstitials for ad insertion in HLS streams. To play these ads using ExoPlayer's built-in playlist support, pass an HlsInterstitialsAdsLoader.AdsMediaSourceFactory as the media source factory when creating the player. For more information see the official documentation.
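
As a minimal sketch of that wiring (the AdsMediaSourceFactory constructor parameters and the adsLoader.setPlayer call below are assumptions - check the official documentation for the exact API), creating a player that resolves HLS interstitials could look roughly like this:

import android.content.Context
import androidx.media3.exoplayer.ExoPlayer
import androidx.media3.exoplayer.hls.HlsInterstitialsAdsLoader
import androidx.media3.ui.PlayerView

// Sketch only: constructor and wiring details are assumptions, not confirmed API.
fun buildPlayerWithHlsInterstitials(context: Context, playerView: PlayerView): ExoPlayer {
    val adsLoader = HlsInterstitialsAdsLoader()
    // Assumed parameters: the ads loader, an AdViewProvider (PlayerView) and a Context.
    val mediaSourceFactory =
        HlsInterstitialsAdsLoader.AdsMediaSourceFactory(adsLoader, playerView, context)
    val player = ExoPlayer.Builder(context)
        .setMediaSourceFactory(mediaSourceFactory) // interstitials come from the HLS playlist itself
        .build()
    adsLoader.setPlayer(player) // assumed wiring: the loader follows playback on this player
    return player
}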

This release also includes experimental support for 'pre-warming' decoders. Without pre-warming, transitions from one playlist item to the next may not be seamless in some cases, for example when we need to switch codecs or decode some video frames to reach the start position of the new media item. With pre-warming enabled, a secondary video renderer can start decoding the new media item earlier, giving near-seamless transitions. You can try this feature out by enabling it on the DefaultRenderersFactory. We're actively working on further improvements to the way we interact with decoders, including adding a 'fast seeking mode', so stay tuned for updates in this area.
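
For reference, a minimal sketch of opting in on the renderers factory might look like the following; the experimental method name is an assumption based on the release notes and may change between releases:

import android.content.Context
import androidx.media3.exoplayer.DefaultRenderersFactory
import androidx.media3.exoplayer.ExoPlayer

fun buildPlayerWithPrewarming(context: Context): ExoPlayer {
    val renderersFactory = DefaultRenderersFactory(context).apply {
        // Assumed opt-in method; enables a secondary video renderer that decodes the next item early.
        experimentalSetEnableMediaCodecVideoRendererPrewarming(true)
    }
    return ExoPlayer.Builder(context, renderersFactory).build()
}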

Media3 1.6.0 introduces a new media3-ui-compose module that contains functionality for building Compose UIs for playback. You can find a reference implementation in the Media3 Compose demo and learn more in Getting started with Compose-based UI. At this point we're providing a first set of foundational state classes that link to the Player, in addition to some basic composable building blocks. You can use these to build your own customized UI widgets. We plan to publish default Material-themed composables in a later release.
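
As an illustrative sketch (the state class and property names below are taken from the module's initial API surface and may evolve), a custom play/pause widget built on one of those foundational state classes could look like this:

import androidx.compose.material3.IconButton
import androidx.compose.material3.Text
import androidx.compose.runtime.Composable
import androidx.media3.common.Player
import androidx.media3.ui.compose.state.rememberPlayPauseButtonState

@Composable
fun MinimalPlayPauseButton(player: Player) {
    // Foundational state class linked to the Player (assumed name from media3-ui-compose).
    val state = rememberPlayPauseButtonState(player)
    IconButton(onClick = state::onClick, enabled = state.isEnabled) {
        Text(if (state.showPlay) "Play" else "Pause") // swap in your own icons and styling
    }
}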

Some other improvements in this release include: moving system calls off the application's main thread to the background (which should reduce ANRs), a new decoder module wrapping libmpegh (for bundling object-based audio decoding in your app), and a fix for the Cast extension for apps targeting API 34+. There are also fixes across MPEG-TS and WebVTT extraction, DRM, downloading/caching, MediaSession and more.

Media extraction and frame retrieval

The new MediaExtractorCompat is a drop-in replacement for the framework MediaExtractor but implemented using Media3's extractors. If you're using the Android framework MediaExtractor, consider migrating to get consistent behavior across devices and reduce crashes.
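
A minimal sketch of the drop-in usage, assuming the compat class mirrors the framework MediaExtractor surface (setDataSource, selectTrack, readSampleData, advance - verify the exact methods against the Media3 reference docs), could look like this:

import android.content.Context
import androidx.media3.exoplayer.MediaExtractorCompat
import java.nio.ByteBuffer

// Sketch only: method names are assumed to mirror android.media.MediaExtractor.
fun countSamples(context: Context, uri: String): Int {
    val extractor = MediaExtractorCompat(context)
    extractor.setDataSource(uri)
    for (track in 0 until extractor.trackCount) extractor.selectTrack(track)
    val buffer = ByteBuffer.allocate(1 shl 20)
    var samples = 0
    while (extractor.readSampleData(buffer, /* offset= */ 0) >= 0) { // negative return marks end of stream
        samples++
        extractor.advance()
    }
    extractor.release()
    return samples
}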

We've also added experimental support for retrieving video frames in a new class ExperimentalFrameExtractor, which can act as a replacement for the MediaMetadataRetriever getFrameAtTime methods. There are a few benefits over the framework implementation: HDR input is supported (by default tonemapping down to SDR, but with the option to produce HLG bitmaps from Android 14 onwards), Media3 effects can be applied (including Presentation to scale the output to a desired size) and it runs faster on some devices due to moving color space conversion to the GPU. Here's an example of using the new API:

val bitmap =
    withContext(Dispatchers.IO) {
        // Opt in to HDR extraction; by default frames are tone-mapped down to SDR.
        val configuration =
            ExperimentalFrameExtractor.Configuration
                .Builder()
                .setExtractHdrFrames(true)
                .build()
        val frameExtractor =
            ExperimentalFrameExtractor(
                context,
                configuration,
            )

        // No effects here, but Media3 effects (e.g. Presentation) could be passed in.
        frameExtractor.setMediaItem(mediaItem, /* effects= */ listOf())

        // getFrame returns a ListenableFuture; await() suspends until the frame is ready.
        val frame = frameExtractor.getFrame(timestamps).await()
        frameExtractor.release()
        frame.bitmap
    }

Editing, transcoding and export

Media3 1.6.0 includes performance, stability and functional improvements in Transformer. Highlights include: support for transcoding/transmuxing Dolby Vision streams on devices that support this format and a new MediaProjectionAssetLoader for recording from the screen, which you can try out in the Transformer demo app.

Check out Common media processing operations with Jetpack Media3 Transformer for some code snippets showing how to process media with Transformer, and tips to reduce latency.

This release also includes a new Kotlin-based demo app showcasing Media3's video effects framework. You can select from a variety of video effects and preview them via ExoPlayer.setVideoEffects.
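
For context, applying an effect for preview boils down to a single call; the snippet below is a minimal sketch using Contrast, one of the built-in effects in the media3-effect module (its exact parameter range is assumed here):

import androidx.media3.common.Effect
import androidx.media3.effect.Contrast
import androidx.media3.exoplayer.ExoPlayer

// Preview a contrast adjustment on an existing player instance.
fun previewContrast(player: ExoPlayer) {
    // Contrast takes a value in an assumed -1..1 range; 0.5f raises contrast noticeably.
    val effects: List<Effect> = listOf(Contrast(0.5f))
    player.setVideoEffects(effects)
}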

Animation showing contrast adjustment and a confetti effect in the new demo app


Get started with Media3 1.6.0

Please get in touch via the Media3 issue tracker if you run into any bugs, or if you have questions or feature requests. We look forward to hearing from you!

27 Mar 2025 4:30pm GMT

25 Mar 2025

feedAndroid Developers Blog

Strengthening Our App Ecosystem: Enhanced Tools for Secure & Efficient Development

Posted by Suzanne Frey - VP, Product, Trust & Growth for Android & Play

Knowing that you're building on a safe, secure ecosystem is essential for any app developer. We continuously invest in protecting Android and Google Play, so millions of users around the world can trust the apps they download and you can build thriving businesses. And we're dedicated to continually improving our developer tools to make world-class security even easier to implement.

Together, we've made Google Play one of the safest and most secure platforms for developers and users. Our partnership over the past few years includes helping you:

Today, we're excited to share more about how we're making it easier than ever for developers to build safe apps, while also continuing to strengthen our ecosystem's protection in 2025 and beyond.

Making it easier for you to build safer apps from the start

Google Play's policies are a critical component of ensuring a safe experience for our shared users. Play Console pre-review checks are a great way to resolve certain policy and compatibility issues before you submit your app for review. We recently added the ability to check privacy policy links and login credential requirements, and we're launching even more pre-review checks this year to help you avoid common policy pitfalls.

To help you avoid policy complications before you submit apps for review, we've been notifying you earlier about certain policies relevant to your apps - starting right as you code in Android Studio. We currently notify developers through Android Studio about a few key policy areas, but this year we'll expand to a much wider range of policies.

Providing more policy support

Acting on your feedback, we've improved our policy experience to give you clearer updates, more time for substantial changes, more flexible requirements while still maintaining safety standards, and more helpful information with live Q&As. Soon, we'll be trying a new way of communicating with you in Play Console so you get information when you need it most. This year, we're investing in even more ways to get your feedback, help you understand our policies, navigate our Policy Center, and fix issues before app submission through new features in Console and Android Studio.

We're also expanding our popular Google Play Developer Help Community, which saw 2.7 million visits last year from developers looking to find answers to policy questions, share knowledge, and connect with fellow developers. This year, we're planning to expand the community to include more languages, such as Indonesian, Japanese, Korean, and Portuguese.

Protecting your business and users from scams and attacks

The Play Integrity API is an essential tool to help protect your business from abuse such as fraud, bots, cheating, and data theft. Developers are already using our new app access risk feature in Play Integrity API to make over 500M daily checks for potentially fraudulent or risky behavior. In fact, apps that use Play Integrity features to detect suspicious activity are seeing an 80% drop in unauthorized usage on average compared to other apps.

This year, we'll continue to enhance the Play Integrity API with stronger protection for even more users. We recently improved the technology that powers the API on all devices running Android 13 (API level 33) and above, making it faster, more reliable, and more private for users. We also launched enhanced security signals to help you decide how much you trust the environment your app is running in, which we'll automatically roll out to all developers who use the API in May. You can opt in now to start using the improved verdicts today.

We'll be adding new features later this year to help you deal with emerging threats, such as the ability to re-identify abusive and risky devices in a way that also preserves user privacy. We're also building more tools to help you guide users to fix issues, like if they need a security update or they're using a tampered version of your app.

Providing additional validation for your app

For apps in select categories, we offer badges that provide an extra layer of validation and connect users with safe, high-quality, and useful experiences. Building on the work of last year's "Government" badge, which helps users identify official government apps, this year we introduced a "Verified" badge to help users discover VPN apps that take extra steps to demonstrate their commitment to security. We'll continue to expand on this and add badges to more app categories in the future.

Partnering to keep kids safe

Whether your app is specifically designed for kids or simply attracts their attention, there is an added responsibility to ensure a safe and trusted experience. We want to partner with you to keep kids and teens safe online, protect their privacy, and empower families. In addition to Google Play's Teacher Approved program, Families policies, and tools like the Restrict Declared Minors setting within the Google Play Console, we're building tools like the Credential Manager API, now in Beta for Digital IDs.

Strengthening the Android ecosystem

In addition to helping developers build stronger, safer apps on Google Play, we remain committed to protecting the broader Android ecosystem. Last year, our investments in stronger privacy policies, AI-powered threat detection and other security measures prevented 2.36 million policy-violating apps from being published on Google Play. By contrast, our most recent analysis found over 50 times more Android malware from Internet-sideloaded sources (like browsers and messaging apps) than on Google Play. This year we're working on ways to make it even harder for malicious actors to hide or trick users into harmful installs, which will not only protect your business from fraud but also help users download your apps with confidence.

Meanwhile, Google Play Protect is always evolving to combat new threats and protect users from harmful apps that can lead to scams and fraud. As this is a core part of user safety, we're doing more to keep users from being socially engineered by scammers into turning it off. First, Google Play Protect live threat detection is expanding its protection to target malicious applications that try to impersonate financial apps. And our enhanced financial fraud protection pilot has continued to expand after a successful launch in select countries where we saw malware-based financial fraud coming from Internet-sideloaded sources. We are planning to expand the pilot throughout this year to additional countries where we have seen higher levels of malware-based financial fraud.

We're even working with other leaders across the industry to protect all users, no matter what device they use or where they download their apps. As a founding member of the App Defense Alliance, we're working to establish and promote industry-wide security standards for mobile and web applications, as well as cloud configurations. Recently, the ADA launched Application Security Assessments (ASA) v1.0, which provides clear guidance to developers on protecting sensitive data and defending against cyber attacks to strengthen user trust.

What's next

Please keep the feedback coming! We appreciate knowing what can make our developers' experiences more efficient while ensuring we maintain the highest standards in app safety. Thank you for your continued partnership in making Android and Google Play a safe, thriving platform for everyone.

25 Mar 2025 5:00pm GMT

24 Mar 2025

feedAndroid Developers Blog

#WeArePlay | How Memory Lane Games helps people with dementia

Posted by Robbie McLachlan - Developer Marketing


In our latest #WeArePlay film, which celebrates the people behind apps and games, we meet Bruce - a co-founder of Memory Lane Games. His company turns cherished memories into simple, engaging quizzes for people with different types of dementia. Discover how Memory Lane Games blends nostalgia and technology to spark conversations and emotional connections.


What inspired the idea behind Memory Lane Games?

The idea for Memory Lane Games came about one day at the pub when Peter was telling me how his mum, even with vascular dementia, lights up when she looks at old family photos. It got me thinking about my own mum, who treasures old photos just as much. The idea hit us - why not turn those memories into games? We wanted to help people reconnect with their past and create moments where conversations could flow naturally.

Memory Lane Games co-founders, Peter and Bruce from Isle of Man


Can you tell us about a memorable moment in the journey when you realized how powerful the game was?

We knew we were onto something meaningful when a caregiver in a memory cafe told us about a man who was pretty much non-verbal but would enjoy playing. He started humming along to one of our music trivia games, then suddenly said, "Roy Orbison is a way better singer than Elvis, but Elvis had a better manager." The caregiver was in tears; it was the first complete sentence he'd spoken in months. Moments like these remind us why we're doing this: it's not just about games; it's about unlocking moments of connection and joy that dementia often takes away.

A user plays Memory Lane Games from their phone


One of the key features is having errorless fun with the games - why was that so important?

We strive for frustration-free design. With our games, there are no wrong answers-just gentle prompts to trigger memories and spark conversations about topics they are interested in. It's not about winning or losing; it's about rekindling connections and creating moments of happiness without any pressure or frustration. Dementia can make day-to-day tasks challenging, and the last thing anyone needs is a game that highlights what they might not remember or get right. Caregivers also like being able to redirect attention back to something familiar and fun when behaviour gets more challenging.

How has Google Play helped your journey?

What's been amazing is how Google Play has connected us with an incredibly active and engaged global community without any major marketing efforts on our part.

For instance, we got our first big traction in places like the Philippines and India-places we hadn't specifically targeted. Yet here we are, with thousands of downloads in more than 100 countries. That reach wouldn't have been possible without Google Play.

A group of senior citizens gathers around a table to play a round of Memory Lane Games from a shared mobile device


What is next for Memory Lane Games?

We're really excited about how we can use AI to take Memory Lane Games to the next level. Our goal is to use generative AI, like Google's Gemini, to create more personalized and localized game content. For example, instead of just focusing on general memories, we want to tailor the game to a specific village the player came from, or a TV show they used to watch, or even local landmarks from their family's hometown. AI will help us offer games that are deeply personal. Plus, with the power of AI, we can create games in multiple languages, tapping into new regions like Japan, Nigeria or Mexico.

Discover other inspiring app and game founders featured in #WeArePlay.





24 Mar 2025 7:00pm GMT

16 Oct 2024

feedPlanet Maemo

Adding buffering hysteresis to the WebKit GStreamer video player

The <video> element implementation in WebKit does its job by using a multiplatform player that relies on a platform-specific implementation. In the specific case of glib platforms, which base their multimedia on GStreamer, that's MediaPlayerPrivateGStreamer.

WebKit GStreamer regular playback class diagram

The player private can have 3 buffering modes:

The current implementation (actually, its wpe-2.38 version) was showing some buffering problems on some Broadcom platforms when doing in-memory buffering. The buffering levels monitored by MediaPlayerPrivateGStreamer weren't accurate because the Nexus multimedia subsystem used on Broadcom platforms was doing its own internal buffering. Data wasn't being accumulated in the GstQueue2 element of playbin, because BrcmAudFilter/BrcmVidFilter was accepting all the buffers that the queue could provide. Because of that, the player private buffering logic was erratic, leading to many transitions between "buffer completely empty" and "buffer completely full". This, in turn, caused many transitions between the HaveEnoughData, HaveFutureData and HaveCurrentData readyStates in the player, leading to frequent pauses and unpauses on Broadcom platforms.

So, one of the first things I tried to solve this issue was to ask the Nexus PlayPump (the subsystem in charge of internal buffering in Nexus) about its internal levels, and add that to the levels reported by GstQueue2. There's also a GstMultiqueue in the pipeline that can hold a significant number of buffers, so I also asked it for its level. Still, the buffering level instability was too high, so I added a moving average implementation to try to smooth it.

All these tweaks only make sense on Broadcom platforms, so they were guarded by ifdefs in a first version of the patch. Later, I migrated those dirty ifdefs to the new quirks abstraction added by Phil. A challenge of this migration was that I needed to store some attributes that were considered part of MediaPlayerPrivateGStreamer before. They still had to be somehow linked to the player private but only accessible by the platform specific code of the quirks. A special HashMap attribute stores those quirks attributes in an opaque way, so that only the specific quirk they belong to knows how to interpret them (using downcasting). I tried to use move semantics when storing the data, but was bitten by object slicing when trying to move instances of the superclass. In the end, moving the responsibility of creating the unique_ptr that stored the concrete subclass to the caller did the trick.

Even with all those changes, undesirable swings in the buffering level kept happening, and when doing a careful analysis of the causes I noticed that the monitoring of the buffering level was being done from different places (in different moments) and sometimes the level was regarded as "enough" and the moment right after, as "insufficient". This was because the buffering level threshold was one single value. That's something that a hysteresis mechanism (with low and high watermarks) can solve. So, a logical level change to "full" would only happen when the level goes above the high watermark, and a logical level change to "low" when it goes under the low watermark level.
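
To illustrate the idea only (plain Kotlin here, not the actual WebKit C++ code), the watermark logic boils down to a tiny state machine:

// Hypothetical sketch of buffering hysteresis: the logical level only flips to "full"
// above the high watermark and back to "low" below the low watermark, so noise around
// a single threshold no longer causes spurious transitions.
class BufferingHysteresis(private val lowWatermark: Int, private val highWatermark: Int) {
    var consideredFull = false
        private set

    // Feed the raw (noisy) buffering level in percent; returns true when the logical
    // state changed, i.e. when readyState transitions should be propagated.
    fun update(levelPercent: Int): Boolean = when {
        !consideredFull && levelPercent >= highWatermark -> { consideredFull = true; true }
        consideredFull && levelPercent <= lowWatermark -> { consideredFull = false; true }
        else -> false
    }
}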

For the threshold change detection to work, we need to know the previous buffering level. There's a problem, though: the current code checked the levels from several scattered places, so only one of those places (the first one that detected the threshold crossing at a given moment) would properly react. The other places would miss the detection and operate improperly, because the "previous buffering level value" had been overwritten with the new one when the evaluation had been done before. To solve this, I centralized the detection in a single place "per cycle" (in updateBufferingStatus()), and then used the detection conclusions from updateStates().

So, with all this in mind, I refactored the buffering logic as https://commits.webkit.org/284072@main, so now WebKit GStreamer has buffering code that is much more robust than before. The instabilities observed on Broadcom devices were gone and I could, at last, close Issue 1309.


16 Oct 2024 6:12am GMT

10 Sep 2024

feedPlanet Maemo

Don’t shoot yourself in the foot with the C++ move constructor

Move semantics can be very useful to transfer ownership of resources, but like many other C++ features, it's one more double-edged sword that can harm you in new and interesting ways if you don't read the small print.

For instance, if object moving involves super and subclasses, you have to keep an extra eye on what's actually happening. Consider the following classes A and B, where the latter inherits from the former:

#include <stdio.h>
#include <utility>

#define PF printf("%s %p\n", __PRETTY_FUNCTION__, this)

class A {
 public:
 A() { PF; }
 virtual ~A() { PF; }
 A(A&& other)
 {
  PF;
  std::swap(i, other.i);
 }

 int i = 0;
};

class B : public A {
 public:
 B() { PF; }
 virtual ~B() { PF; }
 B(B&& other)
 {
  PF;
  std::swap(i, other.i);
  std::swap(j, other.j);
 }

 int j = 0;
};

If your project is complex, it would be natural that your code involves abstractions, with part of the responsibility held by the superclass, and some other part by the subclass. Consider also that some of that code in the superclass involves move semantics, so a subclass object must be moved to become a superclass object, then perform some action, and then moved back to become the subclass again. That's a really bad idea!

Consider this usage of the classes defined before:

int main(int, char* argv[]) {
 printf("Creating B b1\n");
 B b1;
 b1.i = 1;
 b1.j = 2;
 printf("b1.i = %d\n", b1.i);
 printf("b1.j = %d\n", b1.j);
 printf("Moving (B)b1 to (A)a. Which move constructor will be used?\n");
 A a(std::move(b1));
 printf("a.i = %d\n", a.i);
 // This may be reading memory beyond the object boundaries, which may not be
 // obvious if you think that (A)a is sort of a (B)b1 in disguise, but it's not!
 printf("(B)a.j = %d\n", reinterpret_cast<B&>(a).j);
 printf("Moving (A)a to (B)b2. Which move constructor will be used?\n");
 B b2(reinterpret_cast<B&&>(std::move(a)));
 printf("b2.i = %d\n", b2.i);
 printf("b2.j = %d\n", b2.j);
 printf("^^^ Oops!! Somebody forgot to copy the j field when creating (A)a. Oh, wait... (A)a never had a j field in the first place\n");
 printf("Destroying b2, a, b1\n");
 return 0;
}

If you've read the code, those printfs will have already given you some hints about the harsh truth: if you move a subclass object to become a superclass object, you're losing all the subclass specific data, because no matter if the original instance was one from a subclass, only the superclass move constructor will be used. And that's bad, very bad. This problem is called object slicing. It's specific to C++ and can also happen with copy constructors. See it with your own eyes:

Creating B b1
A::A() 0x7ffd544ca690
B::B() 0x7ffd544ca690
b1.i = 1
b1.j = 2
Moving (B)b1 to (A)a. Which move constructor will be used?
A::A(A&&) 0x7ffd544ca6a0
a.i = 1
(B)a.j = 0
Moving (A)a to (B)b2. Which move constructor will be used?
A::A() 0x7ffd544ca6b0
B::B(B&&) 0x7ffd544ca6b0
b2.i = 1
b2.j = 0
^^^ Oops!! Somebody forgot to copy the j field when creating (A)a. Oh, wait... (A)a never had a j field in the first place
Destroying b2, a, b1
virtual B::~B() 0x7ffd544ca6b0
virtual A::~A() 0x7ffd544ca6b0
virtual A::~A() 0x7ffd544ca6a0
virtual B::~B() 0x7ffd544ca690
virtual A::~A() 0x7ffd544ca690

Why can something that seems so obvious become such a problem, you may ask? Well, it depends on the context. It's not unusual for the codebase of a long-lived project to have started using raw pointers for everything, then switched to using references as a way to get rid of null pointer issues when possible, and finally switched to whole objects and copy/move semantics to get rid of pointer issues (references are just pointers in disguise after all, and there are ways to produce null and dangling references by mistake). But this last step of moving from references to copy/move semantics on whole objects comes with the small object slicing nuance explained in this post, and when the size of the project and all the different things to take into account steal your focus, it's easy to forget about this.

So, please remember: never use move semantics that convert your precious subclass instance to a superclass instance thinking that the subclass data will survive. You may regret it and inadvertently create difficult-to-debug problems.

Happy coding!


10 Sep 2024 7:58am GMT

17 Jun 2024

feedPlanet Maemo

Incorporating 3D Gaussian Splats into the graphics pipeline

3D Gaussian splatting is the emerging rendering technique that is overtaking NeRFs. Since it is centered around point primitives, it is more compatible with traditional graphics pipelines that already support point rendering.

Gaussian splats essentially enhance the concept of point rendering by converting the point primitive into a 3D ellipsoid, which is then projected into 2D during the rendering process. This concept was initially described in 2002 [3], but the technique of extending Structure from Motion scans in this way was only detailed more recently [1].

In this post, I explore how to integrate Gaussian splats into the traditional graphics pipeline. This allows them to be used alongside triangle-based primitives and interact with them through the depth buffer for occlusion (see header image). This approach also simplifies deployment by eliminating the need for CUDA.

Storage

The original implementation uses .ply files as its checkpoint format, focusing on maintaining training-relevant data structures at the expense of storage efficiency, leading to increased file sizes.

For example, it stores the covariance as a scaling vector and a rotation quaternion, necessitating reconstruction during rendering. A more efficient approach would be to leverage the symmetry of the covariance matrix, storing only the diagonal and upper-triangular entries, thereby eliminating reconstruction and reducing storage requirements.

Further analysis of the storage usage for each attribute shows that the spherical harmonics of orders 1-3 are the main contributors to the file size. However, according to the ablation study in the original publication [1], these harmonics only lead to a modest PSNR improvement of 0.5.

Therefore, the most straightforward way to decrease storage is by discarding the higher-order spherical harmonics. Additionally, the level 0 spherical harmonics can be converted into a diffuse color and merged with opacity to form a single RGBA value. These simple yet effective methods were implemented in one of the early WebGL implementations, resulting in the .splat format. As an added benefit, this format can be easily interpreted by viewers unaware of Gaussian splats as a simple colored point cloud:

Results using a non-Gaussian-splat-aware renderer

By directly storing the covariance as previously mentioned we can reduce the precision from float32 to float16, thereby halving the storage needed for that data. Furthermore, since most splats have limited spatial extents, we can also utilize float16 for position data, yielding additional storage savings.

With these changes, we achieve a storage requirement of 22 bytes per splat, in contrast to the 44 bytes needed by the .splat format and 236 bytes in the original implementation. Thus, we have attained a 10x reduction in storage compared to the original implementation simply by using more suitable data types.
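
As a sketch of that arithmetic (the field order and exact encoding below are assumptions for illustration, not a published format), packing one splat into 22 bytes looks like this:

import android.util.Half
import java.nio.ByteBuffer
import java.nio.ByteOrder

// 3 x float16 position (6 B) + 6 x float16 covariance (12 B) + RGBA8 (4 B) = 22 bytes.
fun packSplat(
    position: FloatArray,   // x, y, z
    covariance: FloatArray, // 6 unique entries of the symmetric 3x3 covariance
    rgba: ByteArray,        // level-0 SH converted to diffuse color, merged with opacity
): ByteBuffer {
    require(position.size == 3 && covariance.size == 6 && rgba.size == 4)
    val buf = ByteBuffer.allocate(22).order(ByteOrder.LITTLE_ENDIAN)
    position.forEach { buf.putShort(Half.toHalf(it)) }   // float32 -> float16
    covariance.forEach { buf.putShort(Half.toHalf(it)) } // float32 -> float16
    buf.put(rgba)
    buf.flip()
    return buf
}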

Blending

The image formation model presented in the original paper [1] is similar to the NeRF rendering, as it is compared to it. This involves casting a ray and observing its intersection with the splats, which leads to front-to-back blending. This is precisely the approach taken by the provided CUDA implementation.

Blending remains a component of the fixed-function unit within the graphics pipeline, which can be set up for front-to-back blending [2] by using the factors (one_minus_dest_alpha, one) and by multiplying color and alpha in the shader as color.rgb * color.a. This results in the following equation:

\begin{aligned}C_{dst} &= (1 - \alpha_{dst}) \cdot \alpha_{src} C_{src} &+ C_{dst}\\ \alpha_{dst} &= (1 - \alpha_{dst})\cdot\alpha_{src} &+ \alpha_{dst}\end{aligned}

However, this method requires the framebuffer alpha value to be zero before rendering the splats, which is not typically the case as any previous render pass could have written an arbitrary alpha value.

A simple solution is to switch to back-to-front sorting and use the standard alpha blending factors (src_alpha, one_minus_src_alpha) for the following blending equation:

C_{dst} = \alpha_{src} \cdot C_{src} + (1 - \alpha_{src}) \cdot C_{dst}

This allows us to regard Gaussian splats as a special type of particle that can be rendered together with other transparent elements within a scene.
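
In a GL-based renderer this translates to standard blend state; the snippet below (Android GLES bindings, used here purely for illustration) shows the back-to-front configuration, with the front-to-back alternative noted in a comment:

import android.opengl.GLES20

// Back-to-front blending: C_dst = a_src * C_src + (1 - a_src) * C_dst.
fun configureBackToFrontBlending() {
    GLES20.glEnable(GLES20.GL_BLEND)
    GLES20.glBlendFunc(GLES20.GL_SRC_ALPHA, GLES20.GL_ONE_MINUS_SRC_ALPHA)
    // Front-to-back would instead use (GL_ONE_MINUS_DST_ALPHA, GL_ONE) with the shader
    // outputting premultiplied color, and requires destination alpha to start at zero.
}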

References

  1. Kerbl, Bernhard, et al. "3d gaussian splatting for real-time radiance field rendering." ACM Transactions on Graphics 42.4 (2023): 1-14.
  2. Green, Simon. "Volumetric particle shadows." NVIDIA Developer Zone (2008).
  3. Zwicker, Matthias, et al. "EWA splatting." IEEE Transactions on Visualization and Computer Graphics 8.3 (2002): 223-238.


17 Jun 2024 1:28pm GMT

18 Sep 2022

feedPlanet Openmoko

Harald "LaF0rge" Welte: Deployment of future community TDMoIP hub

I've mentioned some of my various retronetworking projects in some past blog posts. One of those projects is Osmocom Community TDM over IP (OCTOI). During the past 5 or so months, we have been using a number of GPS-synchronized open source icE1usb devices interconnected by a new, efficient but still transparent TDMoIP protocol in order to run a distributed TDM/PDH network. This network is currently only used to provide ISDN services to retronetworking enthusiasts, but other uses like frame relay have also been validated.

So far, the central hub of this OCTOI network has been operating in the basement of my home, behind a consumer-grade DOCSIS cable modem connection. Given that TDMoIP is relatively sensitive to packet loss, this has been sub-optimal.

Luckily some of my old friends at noris.net have agreed to host a new OCTOI hub free of charge in one of their ultra-reliable co-location data centres. I've already been hosting some other machines there for 20+ years, and noris.net is a good fit given that they were - in their early days as an ISP - the driving force in the early 90s behind one of the Linux kernel ISDN stacks, called u-isdn. So after many decades, ISDN returns to them in a very different way.

Side note: In case you're curious, a reconstructed partial release history of the u-isdn code can be found on gitea.osmocom.org

But I digress. So today was the installation of this new OCTOI hub setup. It had been prepared for several weeks in advance, and the hub contains two circuit boards designed entirely for this use case. The most difficult challenge was the fact that this data centre has no existing GPS RF distribution, and the rack is ~100m of CAT5 cable (no fiber!) away from the roof. So we faced the challenge of passing the 1PPS (1 pulse per second) signal reliably through several steps of lightning/over-voltage protection into the icE1usb, whose internal GPS-DO serves as a grandmaster clock for the TDM network.

The equipment deployed in this installation currently contains:

For more details, see this wiki page and this ticket

Now that the physical deployment has been made, the next steps will be to migrate all the TDMoIP links from the existing user base over to the new hub. We hope the reliability and performance will be much better than behind DOCSIS.

In any case, this new setup for sure has a lot of capacity to connect many more users to this network. At this point we can still only offer E1 PRI interfaces. I expect that at some point during the coming winter, the project for remote TDMoIP BRI (S/T, S0-Bus) connectivity will become available.

Acknowledgements

I'd like to thank anyone helping this effort, specifically:

* Sylvain "tnt" Munaut for his work on the RS422 interface board (+ gateware/firmware)
* noris.net for sponsoring the co-location
* sysmocom for sponsoring the EPYC server hardware

18 Sep 2022 10:00pm GMT

08 Sep 2022

feedPlanet Openmoko

Harald "LaF0rge" Welte: Progress on the ITU-T V5 access network front

Almost one year after my post regarding first steps towards a V5 implementation, some friends and I were finally able to visit Wobcom, a small German city carrier and pick up a lot of decommissioned POTS/ISDN/PDH/SDH equipment, primarily V5 access networks.

This means that a number of retronetworking enthusiasts now have a chance to play with Siemens Fastlink, Nokia EKSOS and DeTeWe ALIAN access networks/multiplexers.

My primary interest is in the Nokia EKSOS, which looks like a rather easy, low-complexity target. As one of the first steps, I took PCB photographs of the various modules/cards in the shelf, took note of the main chip designations and started to search for the related data sheets.

The results can be found in the Osmocom retronetworking wiki, with https://osmocom.org/projects/retronetworking/wiki/Nokia_EKSOS being the main entry page, and sub-pages about

In short: Unsurprisingly, a lot of Infineon analog and digital ICs for the POTS and ISDN ports, as well as a number of Motorola M68k-based QUICC32 microprocessors and several unknown ASICs.

So with V5 hardware at my disposal, I've slowly re-started my efforts to implement the LE (local exchange) side of the V5 protocol stack, with the goal of eventually being able to interface those V5 AN with the Osmocom Community TDM over IP network. Once that is in place, we should also be able to offer real ISDN Uk0 (BRI) and POTS lines at retrocomputing events or hacker camps in the coming years.

08 Sep 2022 10:00pm GMT

Harald "LaF0rge" Welte: Clock sync trouble with Digium cards and timing cables

If you have ever worked with Digium (now part of Sangoma) digital telephony interface cards such as the TE110/410/420/820 (single to octal E1/T1/J1 PRI cards), you will probably have seen that they always have a timing connector, where the timing information can be passed from one card to another.

In PDH/ISDN (or even SDH) networks, it is very important to have a synchronized clock across the network. If the clocks are drifting, there will be underruns or overruns, with associated phase jumps that are particularly dangerous when analog modem calls are transported.

In traditional ISDN use cases, the clock is always provided by the network operator, and any customer/user side equipment is expected to synchronize to that clock.

So this Digium timing cable is needed in applications where you have more PRI lines than possible with one card, but only a subset of your lines (spans) are connected to the public operator. The timing cable should make sure that the clock received on one port from the public operator should be used as transmit bit-clock on all of the other ports, no matter on which card.

Unfortunately this decades-old Digium timing cable approach seems to suffer from some problems.

bursty bit clock changes until link is up

The first problem is that downstream port transmit bit clock was jumping around in bursts every two or so seconds. You can see an oscillogram of the E1 master signal (yellow) received by one TE820 card and the transmit of the slave ports on the other card at https://people.osmocom.org/laforge/photos/te820_timingcable_problem.mp4

As you can see, for some seconds the two clocks seem to be in perfect lock/sync, but in between there are periods of immense clock drift.

What I'd have expected is the behavior that can be seen at https://people.osmocom.org/laforge/photos/te820_notimingcable_loopback.mp4 - which shows a similar setup but without the use of a timing cable: Both the master clock input and the clock output were connected on the same TE820 card.

As I found out much later, this problem only occurs until any of the downstream/slave ports is fully OK/GREEN.

This is surprising, as any other E1 equipment I've seen always transmits at a constant bit clock irrespective whether there's any signal in the opposite direction, and irrespective of whether any other ports are up/aligned or not.

But ok, once you adjust your expectations to this Digium peculiarity, you can actually proceed.

clock drift between master and slave cards

Once any of the spans of a slave card on the timing bus are fully aligned, the transmit bit clocks of all of its ports appear to be in sync/lock - yay - but unfortunately only at the very first glance.

When looking at it for more than a few seconds, one can see a slow, continuous drift of the slave bit clocks compared to the master :(

Some initial measurements show that the clock of the slave card of the timing cable is drifting at about 12.5 ppb (parts per billion) when compared against the master clock reference.

This is rather disappointing, given that the whole point of a timing cable is to ensure you have one reference clock with all signals locked to it.

The work-around

If you are willing to sacrifice one port (span) of each card, you can work around that slow-clock-drift issue by connecting an external loopback cable. So the master card is configured to use the clock provided by the upstream provider. Its other ports (spans) will transmit at the exact recovered clock rate with no drift. You can use any of those ports to provide the clock reference to a port on the slave card using an external loopback cable.

In this setup, your slave card[s] will have perfect bit clock sync/lock.

It's just rather sad that you need to sacrifice ports just to achieve proper clock sync - something that the timing connectors and cables claim to do, but in reality don't achieve, at least not in my setup with the most modern and high-end octal-port PCIe cards (TE820).

08 Sep 2022 10:00pm GMT