18 Sep 2025
TalkAndroid
The Rookie: the truth is out about the show’s future after Season 7
After seven action-packed seasons filled with chase scenes, heart-pounding standoffs, and the kind of emotional twists that keep…
18 Sep 2025 6:30am GMT
17 Sep 2025
Android Developers Blog
Android 16 QPR2 Beta 2 is Here
Posted by Matthew McCullough, VP of Product Management, Android Developer
Android 16 QPR2 has reached Platform Stability today with Beta 2! That means the API surface is locked and the app-facing behaviors are final, so you can incorporate them into your apps and take advantage of our latest platform innovations.
New in the QPR2 Beta
Testing developer verification
To better protect Android users from repeat offenders, Android is introducing developer verification, a new requirement that makes app installation safer by preventing the spread of malware and scams. Starting in September 2026, and in specific regions, Android will require apps to be registered by verified developers in order to be installed on certified Android devices, with an exception for installs performed through the Android Debug Bridge (ADB).
As a developer, you are free to install apps without verification by using ADB, so you can continue to test apps that are not intended, or not yet ready, to be distributed to the wider consumer population.
For apps that enable user-initiated installation of app packages, Android 16 QPR2 Beta 2 contains new APIs that support developer verification during installation, along with a new adb command to let you force a verification outcome for testing purposes.
adb shell pm set-developer-verification-result
By using this command (see adb shell pm help for full details), you can now simulate verification failures. This lets you understand the end-to-end user experience for both successful and unsuccessful verification, so you can prepare accordingly before enforcement begins.
We encourage all developers who distribute apps on certified Android devices to sign up for early access to get ready and stay updated.
SMS OTP Protection
The delivery of messages containing an SMS retriever hash will be delayed by three hours for most apps to help prevent OTP hijacking: the RECEIVE_SMS broadcast will be withheld, and SMS provider database queries will be filtered. The SMS becomes available to these apps after the three-hour delay.
Certain apps, such as the default SMS, assistant, and dialer apps, along with connected device companion apps, system apps, and so on, will be exempt from this delay, and apps can continue to use the SMS Retriever API to access messages intended for them in a timely manner.
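For apps that consume one-time codes, the SMS Retriever flow from Google Play services remains the timely path. As a rough sketch (assuming the standard SMS Retriever API from Google Play services; this code is illustrative and not from this post):

import android.content.BroadcastReceiver
import android.content.Context
import android.content.Intent
import com.google.android.gms.auth.api.phone.SmsRetriever
import com.google.android.gms.common.api.CommonStatusCodes
import com.google.android.gms.common.api.Status

// Receiver for messages delivered via the SMS Retriever API. Register it for
// SmsRetriever.SMS_RETRIEVED_ACTION after calling startSmsRetriever().
class OtpReceiver : BroadcastReceiver() {
    override fun onReceive(context: Context, intent: Intent) {
        if (intent.action != SmsRetriever.SMS_RETRIEVED_ACTION) return
        val extras = intent.extras ?: return
        val status = extras.get(SmsRetriever.EXTRA_STATUS) as Status
        if (status.statusCode == CommonStatusCodes.SUCCESS) {
            // The full message text; parse your one-time code out of it.
            val message = extras.getString(SmsRetriever.EXTRA_SMS_MESSAGE)
            // ... extract and verify the OTP ...
        }
    }
}

// Starts listening (for up to five minutes) for a single SMS containing
// this app's retriever hash.
fun startOtpListener(context: Context) {
    SmsRetriever.getClient(context).startSmsRetriever()
}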
Custom app icon shapes
More efficient garbage collection
In Android 16 QPR2, the Android Runtime (ART) includes a Generational Concurrent Mark-Compact (CMC) Garbage Collector that focuses collection effort on newly allocated objects, which are the most likely to be garbage. You can expect reduced CPU usage from garbage collection, a smoother user experience with less jank, and improved battery efficiency.
Native step tracking and expanded exercise data in Health Connect
Health Connect now automatically tracks steps using the device's sensors. If your app holds the READ_STEPS permission, this data is available from the "android" package. Not only does this simplify the code needed for step tracking, it's also more power efficient.
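As a minimal sketch of reading that platform-tracked data with the Jetpack Health Connect client (the helper function and 24-hour window are assumptions for illustration; the "android" data origin is as described above):

import androidx.health.connect.client.HealthConnectClient
import androidx.health.connect.client.records.StepsRecord
import androidx.health.connect.client.records.metadata.DataOrigin
import androidx.health.connect.client.request.ReadRecordsRequest
import androidx.health.connect.client.time.TimeRangeFilter
import java.time.Instant
import java.time.temporal.ChronoUnit

// Reads the last 24 hours of steps recorded by the platform itself
// (data origin "android"), assuming READ_STEPS has been granted.
suspend fun readPlatformSteps(client: HealthConnectClient): Long {
    val now = Instant.now()
    val response = client.readRecords(
        ReadRecordsRequest(
            recordType = StepsRecord::class,
            timeRangeFilter = TimeRangeFilter.between(now.minus(24, ChronoUnit.HOURS), now),
            dataOriginFilter = setOf(DataOrigin("android"))
        )
    )
    return response.records.sumOf { it.count }
}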
Also, the ExerciseSegment and ExerciseSession data types have been updated: you can now record and read weight, set index, and Rate of Perceived Exertion (RPE) for exercise segments. Since Health Connect is updated independently of the platform, checking for feature availability before writing this data ensures compatibility with the locally installed version of Health Connect.
// Check if the expanded exercise features are available
val newFieldsAvailable = healthConnectClient.features.getFeatureStatus(
    HealthConnectFeatures.FEATURE_EXPANDED_EXERCISE_RECORD
) == HealthConnectFeatures.FEATURE_STATUS_AVAILABLE

val segment = ExerciseSegment(
    //...
    // Conditionally add the new data fields
    weight = if (newFieldsAvailable) Mass.fromKilograms(50.0) else null,
    setIndex = if (newFieldsAvailable) 1 else null,
    rateOfPerceivedExertion = if (newFieldsAvailable) 7.0f else null
)
A minor SDK version
QPR2 marks the first Android release with a minor SDK version, allowing us to innovate more rapidly with new platform APIs delivered outside of our usual once-yearly timeline. Unlike the major platform release (Android 16) in 2025-Q2, which included behavior changes that impact app compatibility, the changes in this release are largely additive and designed to minimize the need for additional app testing.

Your app can safely call the new APIs on devices where they are available by using SDK_INT_FULL and the respective value from the VERSION_CODES_FULL enumeration.
if (Build.VERSION.SDK_INT_FULL >= Build.VERSION_CODES_FULL.BAKLAVA_1) {
    // Call new APIs from the Android 16 QPR2 release
}
You can also use the Build.getMinorSdkVersion() method to get just the minor SDK version number.
val minorSdkVersion = Build.getMinorSdkVersion(VERSION_CODES_FULL.BAKLAVA)
The original VERSION_CODES enumeration can still be compared against SDK_INT for APIs declared in non-minor releases.
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.BAKLAVA) {
    // Call new APIs from the Android 16 release
}
Since minor releases aren't intended to have breaking behavior changes, they cannot be used in the uses-sdk manifest attributes.
Get started with the Android 16 QPR2 beta
You can enroll any supported Pixel device to get this and future Android Beta updates over-the-air. If you don't have a Pixel device, you can use the 64-bit system images with the Android Emulator in Android Studio. If you are already in the Android Beta program, you will be offered an over-the-air update to Beta 2. We'll update the system images and SDK regularly throughout the Android 16 QPR2 release cycle.
If you are in the Canary program and would like to enter the Beta program, you will need to wipe your device and manually flash it to the beta release.
For the best development experience with Android 16 QPR2, we recommend that you use the latest Canary version of Android Studio Narwhal Feature Drop.
We're looking for your feedback, so please report issues and submit feature requests on the feedback page. The earlier we get your feedback, the more of it we can incorporate into the final release. Thank you for helping to shape the future of the Android platform.
17 Sep 2025 8:04pm GMT
TalkAndroid
Redmi 15 Series Debuts with 7000mAh Battery and Expansive 6.9 Inch Display
Xiaomi has unleashed the Redmi 15 Series in London. The new lineup packs a massive 7000mAh battery and…
17 Sep 2025 6:03pm GMT
Google’s New Windows Search App Redefines Desktop Search Experience
Google has launched an experimental Windows search app that could revolutionize how millions of users find files on…
17 Sep 2025 4:15pm GMT
15 Sep 2025
Android Developers Blog
Simplifying advanced networking with DHCPv6 Prefix Delegation
Posted by Lorenzo Colitti - TL, Android Core Networking and Patrick Rohr - Software Engineer, Android Core Networking
IPv4 complicates app code and causes battery impact
Most of today's Internet traffic still uses IPv4, which cannot provide transparent end-to-end connectivity to apps. IPv4 only provides 2^32 (about 4.3 billion) addresses - far fewer than the number of devices on today's Internet - so it's not possible to assign a public IPv4 address to every Android device, let alone to individual apps or functions within a device. Most Internet users therefore have private IPv4 addresses and share a public IPv4 address with other users of the same network using Network Address Translation (NAT). NAT makes it difficult to build advanced networking apps such as video calling apps or VPNs, because these apps need to periodically send packets to keep NAT sessions alive (which hurts battery) and must implement complex protocols such as STUN so that devices can connect to each other through NAT.
Why IPv6 hasn't solved this problem yet
The new version of the Internet protocol, IPv6 - now used by about half of all Google users - provides virtually unlimited address space and the ability for devices to use multiple addresses. When every device can get global IPv6 addresses, there is no need to use NAT for address sharing! But although the address space itself is no longer limited, the current IPv6 address assignment methods used on Wi-Fi, such as SLAAC and DHCPv6 IA_NA, still have limitations.
For one thing, both SLAAC and DHCPv6 IA_NA require the network to maintain state for each individual address, so assigning more than a few IPv6 addresses to every Android device can cause scaling issues on the network. This means it's often not possible to assign IPv6 addresses to VMs or containers within the device, or to wearable devices and other tethered devices connected to it. For example, if your app is running on a wearable device connected to an Android phone, or on a tablet tethered to an Android phone that's connected to Wi-Fi, it likely won't have IPv6 connectivity and will need to deal with the complexities and battery impact of NAT.
Additionally, we've heard feedback from some users and network operators that they desire more control over the IPv6 addresses used by Android devices. Until now, Android only supported SLAAC, which does not allow networks to assign predictable IPv6 addresses, and makes it more difficult to track the mapping between IPv6 addresses and the devices using them. This has limited the availability of IPv6 on Android devices on some networks.
The solution: dedicated IPv6 address blocks with DHCPv6 PD
To overcome these drawbacks, we have added support for DHCPv6 Prefix Delegation (PD) as defined in RFC 8415 and RFC 9762. The Android network stack can now request a dedicated prefix from the network, and if it obtains a prefix, it will use it to obtain IPv6 connectivity. In future releases, the device will be able to share the prefix with wearable devices, tethered devices, virtual machines, and stub networks such as Thread, providing all these devices with global IPv6 connectivity. This truly realizes the potential of IPv6 to allow end-to-end, scalable connectivity to an unlimited number of devices and functions, without requiring NAT. And because the prefix is assigned by the network, network operators can use existing DHCPv6 logging infrastructure to track which device is using which prefix (see RFC 9663 for guidance to network operators on deploying DHCPv6 PD).
This allows networks to fully realize the potential of IPv6: devices maintain the flexibility of SLAAC, such as the ability to use a nearly unlimited number of addresses, and the network maintains the manageability and accountability of a traditional DHCPv6 setup. We hope that this will allow more networks to transition to IPv6, providing apps with end-to-end IPv6 connectivity and reducing the need for NAT traversal and keepalives.
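From an app's perspective, the practical signal is simply whether the current network provides global IPv6. A minimal hedged sketch using the standard ConnectivityManager APIs (the helper name and the address heuristics are illustrative assumptions, not from this post):

import android.content.Context
import android.net.ConnectivityManager
import java.net.Inet6Address

// Returns true if the active network currently has at least one IPv6 address
// that is neither link-local nor deprecated site-local, i.e. plausibly global.
// (A production check might also exclude ULA fc00::/7 and loopback addresses.)
fun hasGlobalIpv6(context: Context): Boolean {
    val cm = context.getSystemService(Context.CONNECTIVITY_SERVICE) as ConnectivityManager
    val linkProperties = cm.getLinkProperties(cm.activeNetwork) ?: return false
    return linkProperties.linkAddresses.any { linkAddress ->
        val addr = linkAddress.address
        addr is Inet6Address && !addr.isLinkLocalAddress && !addr.isSiteLocalAddress
    }
}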
What this means for app developers
15 Sep 2025 9:00pm GMT
10 Sep 2025
Android Developers Blog
HDR and User Interfaces
Posted by Alec Mouri - Software Engineer
As explained in What is HDR?, we can think of HDR as only referring to a luminance range brighter than SDR. When integrating HDR content into a user interface, you must be careful when your user interface is primarily SDR colors and assets. The human visual system adapts to perceived color based on the surrounding environment, which can lead to surprising results. We'll look at one pertinent example.
Simultaneous Contrast
Consider the following image:

This image shows two gray rectangles with different background colors. For most people viewing this image, the two gray rectangles appear to be different shades of gray: the topmost rectangle with a darker background appears to be a lighter shade than the bottommost rectangle with a lighter background.
But these are the same shades of gray! You can prove this to yourself by using your favorite color picking tool or by looking at the below image:

This illustrates a visual phenomenon called simultaneous contrast. Readers who are interested in the biological explanation may learn more here.
Nearby differences in color are therefore "emphasized": colors appear darker when immediately next to brighter colors. That same color would appear lighter when immediately next to darker colors.
Implications on Mixing HDR and SDR
Simultaneous contrast affects the appearance of user interfaces that present a mixture of HDR and SDR content. The peak luminance allowed by HDR creates a simultaneous-contrast effect: the eye adapts* to a higher peak luminance (and, in practice, often a higher average luminance), which perceptually causes SDR content to appear dimmer even though the SDR content's luminance has not technically changed at all. Users may describe this as their phone screen becoming "grey" or "washed out".
We can see this phenomenon in the below image. The device on the right simulates how photos may appear with an SDR UI, if those photos were rendered as HDR. Note that the August photos look identical when compared side-by-side, but the quality of the SDR UI is visually degraded.

When designing for HDR, applications need to consider how "much" SDR is shown on screen at any given time when controlling how bright HDR is "allowed" to be. A UI that is dominated by SDR, such as a gallery view where small amounts of HDR content are displayed, can suddenly appear darker than expected.
When building your UI, consider the impact of HDR on text legibility and on the appearance of nearby SDR assets, and use the appropriate APIs provided by your platform to constrain HDR brightness, or even disable HDR. For example, a 2x headroom for HDR brightness may be an acceptable balance between the quality of your HDR scene and your SDR elements. In contrast, a UI that is dominated by HDR, such as full-screen video with no other UI elements on top, does not need to be as conservative, since the focus of the UI is on the HDR content itself. In those situations, a 5x headroom (or higher, depending on content metadata such as UltraHDR's max_content_boost) may be more appropriate.
It might be tempting to "brighten" SDR content instead. Resist this temptation! This will cause your application to be too bright, especially if there are other applications or system UI elements on-screen.
How to control HDR headroom
Android 15 introduced a control for desired HDR headroom. You can have your application request that the system uses a particular HDR headroom based on the context around your desired UI:
- If you only want to show SDR content, simply request no headroom.
- If you only want to show HDR content, then request a high HDR headroom up to and according to the demands of the content.
- If you want to show a mixture of HDR and SDR content, then you can request an intermediate headroom value accordingly. Typical headroom amounts would be around 2x for a mixed scene and 5-8x for a fully-HDR scene.
Here is some example usage:
// Required for the window to respect the desired HDR headroom.
// Note that the equivalent API on SurfaceView does NOT require
// COLOR_MODE_HDR to constrain headroom, if there is HDR content displayed
// on the SurfaceView.
window.colorMode = ActivityInfo.COLOR_MODE_HDR

// Illustrative values: different headroom values may be used depending on
// the desired headroom of the content AND particularities of the app's UI
// design.
window.desiredHdrHeadroom = if (/* SDR only */) {
    0f
} else if (/* Mixed, mostly SDR */) {
    1.5f
} else if (/* Mixed, mostly HDR */) {
    3f
} else {
    /* HDR only */
    5f
}
Other platforms also have APIs that allow for developers to have some control over constraining HDR content in their application.
Web platforms have a coarser concept: the First Public Working Draft of the CSS Color HDR Module adds a constrained-high option to constrain the headroom for mixed HDR and SDR scenes. Within the Apple ecosystem, constrainedHigh is similarly coarse, reckoning with the challenges of displaying mixed HDR and SDR scenes on consumer displays.
If you are a developer who is considering supporting HDR, be thoughtful about how HDR interacts with your UI and use HDR headroom controls appropriately.
*There are other mechanisms the eye employs for light adaptation, like pupillary light reflex, which amplifies this visual phenomenon (brighter peak HDR light means the pupil constricts, which causes less light to hit the retina).
10 Sep 2025 2:00pm GMT
05 Jun 2025
Planet Maemo
Mobile blogging, the past and the future
This blog has been running more or less continuously since the mid-nineties. The site has existed in multiple forms, and with different ways to publish. But what's common is that at almost all points there was a mechanism to publish while on the move.
Psion, documents over FTP
In the early 2000s we were into adventure motorcycling. To be able to share our adventures, we implemented a way to publish blogs while on the go. The device that enabled this was the Psion Series 5, a handheld computer that was very much a device ahead of its time.
The Psion had a reasonably sized keyboard, a good native word processing app, and battery life good for weeks of use. Writing while underway was easy. The Psion could use a mobile phone as a modem over an infrared connection, and with that we could upload the documents to a server over FTP.
Server-side, a cron job would grab the new documents, converting them to HTML and adding them to our CMS.
In the early days of GPRS, getting this to work while roaming was quite tricky. But the system served us well for years.
If we wanted to include photos with the stories, we'd have to find an Internet cafe.
- To the Alps is a post from these times. Lots more in the motorcycling category
SMS and MMS
For an even more mobile setup, I implemented an SMS-based blogging system. We had an old phone connected to a computer back in the office, and I could write to my blog by simply sending a text. These would automatically end up as a new paragraph in the latest post. If I started the text with NEWPOST, an empty blog post would be created with the rest of that message's text as the title.
- In the Caucasus is a good example of a post from this era
As I got into neogeography, I could also send a NEWPOSITION message. This would update my position on the map, connecting weather metadata to the posts.
As camera phones became available, we wanted to do pictures too. For the Death Monkey rally where we rode minimotorcycles from Helsinki to Gibraltar, we implemented an MMS-based system. With that the entries could include both text and pictures. But for that you needed a gateway, which was really only realistic for an event with sponsors.
- Mystery of the Missing Monkey is typical. Some more in Internet Archive
Photos over email
A much easier setup than MMS was to partially return to the old Psion approach, but instead of word processor documents, sending email with picture attachments. This was something that the new breed of (pre-iPhone) smartphones were capable of. And by now the roaming question was mostly sorted.
And so my blog included a new "moblog" section. This is where I could share my daily activities as poor-quality pictures. Sort of how people would use Instagram a few years later.
- Internet Archive has some of my old moblogs but nowadays, I post similar stuff on Pixelfed
Pause
Then there was sort of a long pause in mobile blogging advancements. Modern smartphones, data roaming, and WiFi hotspots had become ubiquitous.
In the meantime, the blog also got migrated to a Jekyll-based system hosted on AWS, which meant the old Midgard-based integrations were off the table.
And I traveled off-the-grid rarely enough that it didn't make sense to develop a system.
But now that we're sailing offshore, that has changed. Time for new systems and new ideas. Or maybe just a rehash of the old ones?
Starlink, Internet from Outer Space
Most cruising boats - ours included - now run the Starlink satellite broadband system. This enables full Internet, even in the middle of an ocean, even video calls! With this, we can use normal blogging tools. The usual one for us is GitJournal, which makes it easy to write Jekyll-style Markdown posts and push them to GitHub.
However, Starlink is a complicated, energy-hungry, and fragile system on an offshore boat. The policies might change at any time in ways that prevent our use of it, and the dishy itself, or the way we power it, may fail.
But despite what you'd think, even on a nerdy boat like ours, loss of Internet connectivity is not an emergency. And this is where the old-style mobile blogging mechanisms come in handy.
- Any of the 2025 Atlantic crossing posts is a good example of this setup in action
Inreach, texting with the cloud
Our backup system to Starlink is the Garmin Inreach. This is a tiny battery-powered device that connects to the Iridium satellite constellation. It allows tracking as well as basic text messaging.
When we head offshore we always enable tracking on the Inreach. This allows both our blog and our friends ashore to follow our progress.
I also made a simple integration where text updates sent to Garmin MapShare get fetched and published on our blog. Right now this is just plain text-based entries, but one could easily implement a command system similar to what I had over SMS back in the day.
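As a hedged sketch of such an integration (the feed URL pattern and the KML handling below are assumptions for illustration, not details from this post), fetching MapShare updates could look like:

import java.net.URL

// Fetches a Garmin MapShare feed and pulls out plain-text entries.
// The URL pattern and KML structure are assumed, not confirmed by the post.
fun fetchMapShareMessages(shareName: String): List<String> {
    val kml = URL("https://share.garmin.com/Feed/Share/$shareName").readText()
    // Naive extraction of <description> blocks; a real integration would use
    // an XML parser and filter message placemarks from track-point placemarks.
    return Regex("<description>(.*?)</description>", RegexOption.DOT_MATCHES_ALL)
        .findAll(kml)
        .map { it.groupValues[1].trim() }
        .toList()
}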
One benefit of the Inreach is that we can also take it with us when we go on land adventures. And it'd even enable rudimentary communications if we found ourselves in a liferaft.
- There are various InReach integration hacks that could be used for more sophisticated data transfer
Sailmail and email over HF radio
The other potential backup for Starlink failures would be to go seriously old-school. It is possible to get email access via an SSB radio and a Pactor (or Vara) modem.
Our boat is already equipped with an isolated aft stay that can be used as an antenna. And with the popularity of Starlink, many cruisers are offloading their old HF radios.
Licensing-wise this system could be used either as a marine HF radio (requiring a Long Range Certificate), or amateur radio. So that part is something I need to work on. Thankfully post-COVID, radio amateur license exams can be done online.
With this setup we could send and receive text-based email. The Airmail application used for this can even do some automatic templating for position reports. We'd then need a mailbox that can receive these mails, and some automation to fetch and publish.
- Sailmail and No Foreign Land support structured data via email to update position. Their formats could be useful inspiration
05 Jun 2025 12:00am GMT
16 Oct 2024
Planet Maemo
Adding buffering hysteresis to the WebKit GStreamer video player
The <video> element implementation in WebKit does its job by using a multiplatform player that relies on a platform-specific implementation. In the specific case of glib platforms, which base their multimedia on GStreamer, that's MediaPlayerPrivateGStreamer.
The player private can have 3 buffering modes:
- On-disk buffering: This is the typical mode on desktop systems, but it is frequently disabled on purpose on embedded devices to avoid wearing out their flash storage. All the video content is downloaded to disk, and the buffering percentage refers to the total size of the video. A GstDownloadBuffer element is present in the pipeline in this case. Buffering level monitoring is done by polling the pipeline every second, using the fillTimerFired() method.
- In-memory buffering: This is the typical mode on embedded systems, and on desktop systems for streamed (live) content. The video is downloaded progressively and only the part of it ahead of the current playback time is buffered. A GstQueue2 element is present in the pipeline in this case. Buffering level monitoring is done by listening to GST_MESSAGE_BUFFERING bus messages and using the buffering level stored on them. This is the case that motivates the refactoring described in this blog post, what we actually wanted to correct on Broadcom platforms, and what motivated the addition of hysteresis working on all the platforms.
- Local files: Files, MediaStream sources and other special origins of video don't do buffering at all (no GstDownloadBuffer nor GstQueue2 element is present in the pipeline). They work like the on-disk buffering mode in the sense that fillTimerFired() is used, but the reported level is relative, much like in the streaming case. In the initial version of the refactoring I was unaware of this third case, and only realized about it when tests triggered the assert that I added to ensure that the on-disk buffering method was working in GST_BUFFERING_DOWNLOAD mode.
The current implementation (actually, its wpe-2.38 version) was showing some buffering problems on some Broadcom platforms when doing in-memory buffering. The buffering levels monitored by MediaPlayerPrivateGStreamer weren't accurate because the Nexus multimedia subsystem used on Broadcom platforms was doing its own internal buffering. Data wasn't being accumulated in the GstQueue2 element of playbin, because BrcmAudFilter/BrcmVidFilter was accepting all the buffers that the queue could provide. Because of that, the player private buffering logic was erratic, leading to many transitions between "buffer completely empty" and "buffer completely full". This, in turn, caused many transitions between the HaveEnoughData, HaveFutureData and HaveCurrentData readyStates in the player, leading to frequent pauses and unpauses on Broadcom platforms.

So, one of the first things I tried to solve this issue was to ask the Nexus PlayPump (the subsystem in charge of internal buffering in Nexus) about its internal levels, and add that to the levels reported by GstQueue2. There's also a GstMultiQueue in the pipeline that can hold a significant amount of buffers, so I also asked it for its level. Still, the buffering level instability was too high, so I added a moving average implementation to try to smooth it.
All these tweaks only make sense on Broadcom platforms, so they were guarded by ifdefs in a first version of the patch. Later, I migrated those dirty ifdefs to the new quirks abstraction added by Phil. A challenge of this migration was that I needed to store some attributes that were considered part of MediaPlayerPrivateGStreamer before. They still had to be somehow linked to the player private but only accessible by the platform specific code of the quirks. A special HashMap attribute stores those quirks attributes in an opaque way, so that only the specific quirk they belong to knows how to interpret them (using downcasting). I tried to use move semantics when storing the data, but was bitten by object slicing when trying to move instances of the superclass. In the end, moving the responsibility of creating the unique_ptr that stored the concrete subclass to the caller did the trick.
Even with all those changes, undesirable swings in the buffering level kept happening, and when doing a careful analysis of the causes I noticed that the buffering level was being monitored from different places (at different moments), and sometimes the level was regarded as "enough" and, the moment right after, as "insufficient". This was because the buffering level threshold was a single value. That's something that a hysteresis mechanism (with low and high watermarks) can solve: a logical level change to "full" only happens when the level goes above the high watermark, and a logical level change to "low" only when it goes under the low watermark.
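As a generic sketch of that hysteresis logic (written in Kotlin here purely for illustration, with invented names; the actual WebKit code is C++ and differs):

// Generic buffering hysteresis with low/high watermarks.
class BufferingHysteresis(
    private val lowWatermark: Int = 20,  // percent, assumed value
    private val highWatermark: Int = 80  // percent, assumed value
) {
    var bufferFull = false
        private set

    // Called once per monitoring cycle with the raw buffering level (0..100).
    // The logical state only flips above the high watermark or below the low
    // watermark; levels in between keep the previous state, suppressing the
    // rapid full/empty oscillations described above.
    fun update(levelPercent: Int) {
        if (!bufferFull && levelPercent >= highWatermark) {
            bufferFull = true
        } else if (bufferFull && levelPercent <= lowWatermark) {
            bufferFull = false
        }
    }
}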
For the threshold change detection to work, we need to know the previous buffering level. There's a problem, though: the current code checked the levels from several scattered places, so only one of those places (the first one that detected the threshold crossing at a given moment) would properly react. The other places would miss the detection and operate improperly, because the "previous buffering level value" had been overwritten with the new one when the evaluation had been done before. To solve this, I centralized the detection in a single place "per cycle" (in updateBufferingStatus()), and then used the detection conclusions from updateStates().
So, with all this in mind, I refactored the buffering logic as https://commits.webkit.org/284072@main, and now WebKit GStreamer has much more robust buffering code than before. The instabilities observed on Broadcom devices were gone and I could, at last, close Issue 1309.
16 Oct 2024 6:12am GMT
10 Sep 2024
Planet Maemo
Don’t shoot yourself in the foot with the C++ move constructor
Move semantics can be very useful to transfer ownership of resources, but as many other C++ features, it's one more double edge sword that can harm yourself in new and interesting ways if you don't read the small print.
For instance, if object moving involves super and subclasses, you have to keep an extra eye on what's actually happening. Consider the following classes A and B, where the latter inherits from the former:
#include <stdio.h>
#include <utility>

#define PF printf("%s %p\n", __PRETTY_FUNCTION__, this)

class A {
public:
    A() { PF; }
    virtual ~A() { PF; }
    A(A&& other) { PF; std::swap(i, other.i); }

    int i = 0;
};

class B : public A {
public:
    B() { PF; }
    virtual ~B() { PF; }
    B(B&& other) { PF; std::swap(i, other.i); std::swap(j, other.j); }

    int j = 0;
};
If your project is complex, it would be natural that your code involves abstractions, with part of the responsibility held by the superclass, and some other part by the subclass. Consider also that some of that code in the superclass involves move semantics, so a subclass object must be moved to become a superclass object, then perform some action, and then moved back to become the subclass again. That's a really bad idea!
Consider this usage of the classes defined before:
int main(int, char* argv[]) {
    printf("Creating B b1\n");
    B b1;
    b1.i = 1;
    b1.j = 2;
    printf("b1.i = %d\n", b1.i);
    printf("b1.j = %d\n", b1.j);

    printf("Moving (B)b1 to (A)a. Which move constructor will be used?\n");
    A a(std::move(b1));
    printf("a.i = %d\n", a.i);
    // This may be reading memory beyond the object boundaries, which may not be
    // obvious if you think that (A)a is sort of a (B)b1 in disguise, but it's not!
    printf("(B)a.j = %d\n", reinterpret_cast<B&>(a).j);

    printf("Moving (A)a to (B)b2. Which move constructor will be used?\n");
    B b2(reinterpret_cast<B&&>(std::move(a)));
    printf("b2.i = %d\n", b2.i);
    printf("b2.j = %d\n", b2.j);
    printf("^^^ Oops!! Somebody forgot to copy the j field when creating (A)a. "
           "Oh, wait... (A)a never had a j field in the first place\n");

    printf("Destroying b2, a, b1\n");
    return 0;
}
If you've read the code, those printfs will have already given you some hints about the harsh truth: if you move a subclass object to become a superclass object, you lose all the subclass-specific data, because no matter whether the original instance was from a subclass, only the superclass move constructor will be used. And that's bad, very bad. This problem is called object slicing. It's specific to C++ and can also happen with copy constructors. See it with your own eyes:
Creating B b1
A::A() 0x7ffd544ca690
B::B() 0x7ffd544ca690
b1.i = 1
b1.j = 2
Moving (B)b1 to (A)a. Which move constructor will be used?
A::A(A&&) 0x7ffd544ca6a0
a.i = 1
(B)a.j = 0
Moving (A)a to (B)b2. Which move constructor will be used?
A::A() 0x7ffd544ca6b0
B::B(B&&) 0x7ffd544ca6b0
b2.i = 1
b2.j = 0
^^^ Oops!! Somebody forgot to copy the j field when creating (A)a. Oh, wait... (A)a never had a j field in the first place
Destroying b2, a, b1
virtual B::~B() 0x7ffd544ca6b0
virtual A::~A() 0x7ffd544ca6b0
virtual A::~A() 0x7ffd544ca6a0
virtual B::~B() 0x7ffd544ca690
virtual A::~A() 0x7ffd544ca690
Why can something that seems so obvious become such a problem, you may ask? Well, it depends on the context. It's not unusual for the codebase of a long-lived project to have started using raw pointers for everything, then switched to references as a way to get rid of null pointer issues where possible, and finally switched to whole objects and copy/move semantics to get rid of pointer issues (references are just pointers in disguise, after all, and there are ways to produce null and dangling references by mistake). But this last step of moving from references to copy/move semantics on whole objects comes with the object slicing nuance explained in this post, and when the size of the project and all the different things to take into account steal your focus, it's easy to forget about it.
So, please remember: never use move semantics to convert your precious subclass instance to a superclass instance thinking that the subclass data will survive. You may regret it and inadvertently create difficult-to-debug problems.
Happy coding!
10 Sep 2024 7:58am GMT
18 Sep 2022
Planet Openmoko
Harald "LaF0rge" Welte: Deployment of future community TDMoIP hub
I've mentioned some of my various retronetworking projects in some past blog posts. One of those projects is Osmocom Community TDM over IP (OCTOI). During the past five or so months, we have been using a number of GPS-synchronized open source icE1usb devices, interconnected by a new, efficient but still transparent TDMoIP protocol, in order to run a distributed TDM/PDH network. This network is currently only used to provide ISDN services to retronetworking enthusiasts, but other uses like frame relay have also been validated.
So far, the central hub of this OCTOI network has been operating in the basement of my home, behind a consumer-grade DOCSIS cable modem connection. Given that TDMoIP is relatively sensitive to packet loss, this has been sub-optimal.
Luckily some of my old friends at noris.net have agreed to host a new OCTOI hub free of charge in one of their ultra-reliable co-location data centres. I've already been hosting some other machines there for 20+ years, and noris.net is a good fit given that they were - in their early days as an ISP - the driving force in the early 90s behind one of the Linux kernel ISDN stacks, called u-isdn. So after many decades, ISDN returns to them in a very different way.
Side note: In case you're curious, a reconstructed partial release history of the u-isdn code can be found on gitea.osmocom.org
But I digress. So today, there was the installation of this new OCTOI hub setup. It has been prepared for several weeks in advance, and the hub contains two circuit boards designed entirely for this use case. The most difficult challenge was the fact that this data centre has no existing GPS RF distribution, and the rack is ~100m of CAT5 cable (no fiber!) away from the roof. So we faced the challenge of passing the 1PPS (1 pulse per second) signal reliably through several stages of lightning/over-voltage protection into the icE1usb, whose internal GPS-DO serves as a grandmaster clock for the TDM network.
The equipment deployed in this installation currently contains:
- a rather beefy Supermicro 2U server with an EPYC 7113P CPU and 4x PCIe slots, two of which are populated with Digium TE820 cards, resulting in a total of 16 E1 ports
- an icE1usb with an RS422 interface board, connected via 100m of RS422 to an Ericsson GPS03 receiver. There are two layers of over-voltage protection on the RS422 (each with gas discharge tubes and TVS) and two stages of over-voltage protection in the coaxial cable between antenna and GPS receiver.
- a Livingston Portmaster3 RAS server
- a Cisco AS5400 RAS server
For more details, see this wiki page and this ticket
Now that the physical deployment has been made, the next steps will be to migrate all the TDMoIP links from the existing user base over to the new hub. We hope the reliability and performance will be much better than behind DOCSIS.
In any case, this new setup certainly has a lot of capacity to connect many more users to this network. At this point we can still only offer E1 PRI interfaces. I expect that at some point during the coming winter the project for remote TDMoIP BRI (S/T, S0-Bus) connectivity will become available.
Acknowledgements
I'd like to thank everyone helping this effort, specifically:
- Sylvain "tnt" Munaut for his work on the RS422 interface board (+ gateware/firmware)
- noris.net for sponsoring the co-location
- sysmocom for sponsoring the EPYC server hardware
18 Sep 2022 10:00pm GMT
08 Sep 2022
Planet Openmoko
Harald "LaF0rge" Welte: Progress on the ITU-T V5 access network front
Almost one year after my post regarding first steps towards a V5 implementation, some friends and I were finally able to visit Wobcom, a small German city carrier, and pick up a lot of decommissioned POTS/ISDN/PDH/SDH equipment, primarily V5 access networks.
This means that a number of retronetworking enthusiasts now have a chance to play with Siemens Fastlink, Nokia EKSOS and DeTeWe ALIAN access networks/multiplexers.
My primary interest is in the Nokia EKSOS, which looks like a rather easy, low-complexity target. As one of the first steps, I took PCB photographs of the various modules/cards in the shelf, took note of the main chip designations, and started to search for the related data sheets.
The results can be found in the Osmocom retronetworking wiki, with https://osmocom.org/projects/retronetworking/wiki/Nokia_EKSOS being the main entry page, and sub-pages about the individual modules/cards.
In short: Unsurprisingly, a lot of Infineon analog and digital ICs for the POTS and ISDN ports, as well as a number of Motorola M68k based QUICC32 microprocessors and several unknown ASICs.
So with V5 hardware at my disposal, I've slowly re-started my efforts to implement the LE (local exchange) side of the V5 protocol stack, with the goal of eventually being able to interface those V5 AN with the Osmocom Community TDM over IP network. Once that is in place, we should also be able to offer real ISDN Uk0 (BRI) and POTS lines at retrocomputing events or hacker camps in the coming years.
08 Sep 2022 10:00pm GMT
Harald "LaF0rge" Welte: Clock sync trouble with Digium cards and timing cables
If you have ever worked with Digium (now part of Sangoma) digital telephony interface cards such as the TE110/410/420/820 (single to octal E1/T1/J1 PRI cards), you will probably have seen that they always have a timing connector, where the timing information can be passed from one card to another.
In PDH/ISDN (or even SDH) networks, it is very important to have a synchronized clock across the network. If the clocks are drifting, there will be underruns or overruns, with associated phase jumps that are particularly dangerous when analog modem calls are transported.
In traditional ISDN use cases, the clock is always provided by the network operator, and any customer/user side equipment is expected to synchronize to that clock.
So this Digium timing cable is needed in applications where you have more PRI lines than fit on one card, but only a subset of your lines (spans) are connected to the public operator. The timing cable is supposed to ensure that the clock received on one port from the public operator is used as the transmit bit-clock on all of the other ports, no matter which card they are on.
Unfortunately this decades-old Digium timing cable approach seems to suffer from some problems.
bursty bit clock changes until link is up
The first problem is that the downstream ports' transmit bit clock was jumping around in bursts every two seconds or so. You can see an oscillogram of the E1 master signal (yellow) received by one TE820 card and the transmit of the slave ports on the other card at https://people.osmocom.org/laforge/photos/te820_timingcable_problem.mp4
As you can see, for some seconds the two clocks seem to be in perfect lock/sync, but in between there are periods of immense clock drift.
What I'd have expected is the behavior that can be seen at https://people.osmocom.org/laforge/photos/te820_notimingcable_loopback.mp4 - which shows a similar setup but without the use of a timing cable: Both the master clock input and the clock output were connected on the same TE820 card.
As I found out much later, this problem only occurs until any of the downstream/slave ports is fully OK/GREEN.
This is surprising, as all other E1 equipment I've seen always transmits at a constant bit clock, irrespective of whether there's any signal in the opposite direction, and irrespective of whether any other ports are up/aligned or not.
But ok, once you adjust your expectations to this Digium peculiarity, you can actually proceed.
clock drift between master and slave cards
Once any of the spans of a slave card on the timing bus are fully aligned, the transmit bit clocks of all of its ports appear to be in sync/lock - yay - but unfortunately only at the very first glance.
When looking at it for more than a few seconds, one can see a slow, continuous drift of the slave bit clocks compared to the master :(
Some initial measurements show that the clock of the slave card of the timing cable is drifting at about 12.5 ppb (parts per billion) when compared against the master clock reference.
This is rather disappointing, given that the whole point of a timing cable is to ensure you have one reference clock with all signals locked to it.
The work-around
If you are willing to sacrifice one port (span) of each card, you can work around that slow-clock-drift issue by connecting an external loopback cable. The master card is configured to use the clock provided by the upstream operator; its other ports (spans) will transmit at the exact recovered clock rate with no drift. You can then use any of those ports to provide the clock reference to a port on the slave card via an external loopback cable.
In this setup, your slave card[s] will have perfect bit clock sync/lock.
It's just rather sad that you need to sacrifice ports just to achieve proper clock sync - something that the timing connectors and cables claim to do, but in reality don't achieve, at least not in my setup with the most modern and high-end octal-port PCIe cards (TE820).
08 Sep 2022 10:00pm GMT