Simplifying advanced networking with DHCPv6 Prefix Delegation
Posted by Lorenzo Colitti - TL, Android Core Networking and Patrick Rohr - Software Engineer, Android Core Networking
IPv4 complicates app code and causes battery impact
Most of today's Internet traffic still uses IPv4, which cannot provide transparent end-to-end connectivity to apps. IPv4 only provides 2^32 addresses - far fewer than the number of devices on today's Internet - so it's not possible to assign a public IPv4 address to every Android device, let alone to individual apps or functions within a device. So most Internet users have private IPv4 addresses and share a public IPv4 address with other users of the same network using Network Address Translation (NAT). NAT makes it difficult to build advanced networking apps such as video calling apps or VPNs, because these apps need to periodically send packets to keep NAT sessions alive (which hurts battery) and implement complex protocols such as STUN to allow devices to connect to each other through NAT.
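To make that cost concrete, here is a minimal, hypothetical sketch (not from this post) of the kind of periodic keepalive traffic such an app has to generate just to hold a NAT binding open; the peer address, one-byte payload, and 25-second interval are illustrative assumptions.

import java.net.DatagramPacket
import java.net.DatagramSocket
import java.net.InetSocketAddress
import kotlinx.coroutines.CoroutineScope
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.delay
import kotlinx.coroutines.launch

// Hypothetical NAT keepalive loop: sends a tiny datagram every ~25 seconds so
// that the NAT/firewall mapping for this socket does not expire. Each send can
// wake the radio, which is where the battery cost comes from.
fun CoroutineScope.startNatKeepalive(socket: DatagramSocket, peer: InetSocketAddress) =
    launch(Dispatchers.IO) {
        val payload = byteArrayOf(0) // one-byte dummy payload
        while (true) {
            socket.send(DatagramPacket(payload, payload.size, peer))
            delay(25_000) // UDP NAT timeouts are often well under a minute
        }
    }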
Why IPv6 hasn't solved this problem yet
The new version of the Internet protocol, IPv6 - now used by about half of all Google users - provides virtually unlimited address space and the ability for devices to use multiple addresses. When every device can get global IPv6 addresses, there is no need to use NAT for address sharing! But although the address space itself is no longer limited, the current IPv6 address assignment methods used on Wi-Fi, such as SLAAC and DHCPv6 IA_NA, still have limitations.
For one thing, both SLAAC and DHCPv6 IA_NA require the network to maintain state for each individual address, so assigning more than a few IPv6 addresses to every Android device can cause scaling issues on the network. This means it's often not possible to assign IPv6 addresses to VMs or containers within the device, or to wearable devices and other tethered devices connected to it. For example, if your app is running on a wearable device connected to an Android phone, or on a tablet tethered to an Android phone that's connected to Wi-Fi, it likely won't have IPv6 connectivity and will need to deal with the complexities and battery impact of NAT.
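As a rough illustration of what this means for app code (a hypothetical sketch, not from this post), an app can check whether its current network has a global IPv6 address and only fall back to NAT-traversal logic when it does not; hasGlobalIpv6 below is an assumed helper name built on ConnectivityManager and LinkProperties, and it requires the ACCESS_NETWORK_STATE permission.

import android.content.Context
import android.net.ConnectivityManager
import java.net.Inet6Address

// Hypothetical helper: returns true if the active network has at least one
// IPv6 address that is not link-local, loopback, or site-local - a rough proxy
// for "this app may not need NAT keepalives or STUN-style traversal".
fun hasGlobalIpv6(context: Context): Boolean {
    val cm = context.getSystemService(ConnectivityManager::class.java) ?: return false
    val network = cm.activeNetwork ?: return false
    val linkProperties = cm.getLinkProperties(network) ?: return false
    return linkProperties.linkAddresses.any { linkAddress ->
        val addr = linkAddress.address
        addr is Inet6Address &&
            !addr.isLinkLocalAddress &&
            !addr.isLoopbackAddress &&
            !addr.isSiteLocalAddress
    }
}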
Additionally, we've heard feedback from some users and network operators that they desire more control over the IPv6 addresses used by Android devices. Until now, Android only supported SLAAC, which does not allow networks to assign predictable IPv6 addresses, and makes it more difficult to track the mapping between IPv6 addresses and the devices using them. This has limited the availability of IPv6 on Android devices on some networks.
The solution: dedicated IPv6 address blocks with DHCPv6 PD
To overcome these drawbacks, we have added support for DHCPv6 Prefix Delegation (PD) as defined in RFC 8415 and RFC 9762. The Android network stack can now request a dedicated prefix from the network, and if it obtains a prefix, it will use it to obtain IPv6 connectivity. In future releases, the device will be able to share the prefix with wearable devices, tethered devices, virtual machines, and stub networks such as Thread, providing all these devices with global IPv6 connectivity. This truly realizes the potential of IPv6 to allow end-to-end, scalable connectivity to an unlimited number of devices and functions, without requiring NAT. And because the prefix is assigned by the network, network operators can use existing DHCPv6 logging infrastructure to track which device is using which prefix (see RFC 9663 for guidance to network operators on deploying DHCPv6 PD).
This allows networks to fully realize the potential of IPv6: devices maintain the flexibility of SLAAC, such as the ability to use a nearly unlimited number of addresses, and the network maintains the manageability and accountability of a traditional DHCPv6 setup. We hope that this will allow more networks to transition to IPv6, providing apps with end-to-end IPv6 connectivity and reducing the need for NAT traversal and keepalives.
What this means for app developers
15 Sep 2025 9:00pm GMT
HDR and User Interfaces
Posted by Alec Mouri - Software Engineer
As explained in What is HDR?, we can think of HDR as only referring to a luminance range brighter than SDR. When integrating HDR content into a user interface, you must be careful when your user interface consists primarily of SDR colors and assets. The human visual system adapts its perception of color based on the surrounding environment, which can lead to surprising results. We'll look at one pertinent example.
Simultaneous Contrast
Consider the following image:

This image shows two gray rectangles with different background colors. For most people viewing this image, the two gray rectangles appear to be different shades of gray: the topmost rectangle with a darker background appears to be a lighter shade than the bottommost rectangle with a lighter background.
But these are the same shades of gray! You can prove this to yourself by using your favorite color picking tool or by looking at the below image:

This illustrates a visual phenomenon called simultaneous contrast. Readers who are interested in the biological explanation may learn more here.
Nearby differences in color are therefore "emphasized": colors appear darker when immediately next to brighter colors. That same color would appear lighter when immediately next to darker colors.
Implications on Mixing HDR and SDR
Simultaneous contrast affects the appearance of user interfaces that need to present a mixture of HDR and SDR content. The peak luminance allowed by HDR will create an effect of simultaneous contrast: the eye will adapt* to a higher peak luminance (and oftentimes a higher average luminance in practice), which perceptually causes SDR content to appear dimmer even though its luminance has not changed at all. For users, this can be expressed as: my phone screen became "grey" or "washed out".
We can see this phenomenon in the below image. The device on the right simulates how photos would appear if they were rendered as HDR within an SDR UI. Note that the August photos look identical when compared side-by-side, but the SDR UI appears visually degraded.

Applications, when designing for HDR, need to consider how "much" SDR is shown at any given time on their screens when controlling how bright HDR is "allowed" to be. A UI that is dominated by SDR, such as a gallery view where small amounts of HDR content are displayed, can suddenly appear darker than expected.
When building your UI, consider the impact of HDR on text legibility and on the appearance of nearby SDR assets, and use the appropriate APIs provided by your platform to constrain HDR brightness, or even disable HDR. For example, a 2x headroom for HDR brightness may be acceptable to balance the quality of your HDR scene with your SDR elements. In contrast, a UI that is dominated by HDR, such as full-screen video without other UI elements on top, does not need to consider this as strongly, since the focus of the UI is the HDR content itself. In those situations, a 5x headroom (or higher, depending on content metadata such as UltraHDR's max_content_boost) may be more appropriate.
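One way to reason about this at runtime - an illustrative sketch rather than something prescribed here - is to observe the HDR/SDR ratio the display is actually applying via Display.getHdrSdrRatio (available since Android 14) and use it to decide how much to protect nearby SDR text and assets; the listener body is left as a comment because the right response depends on your UI.

import android.view.Display
import java.util.concurrent.Executor
import java.util.function.Consumer

// Illustrative sketch: watch the ratio of HDR peak white to SDR white that the
// display is currently applying. A ratio near 1 means HDR content is not being
// boosted; larger ratios are where adjacent SDR UI can start to look dim.
fun observeHdrSdrRatio(display: Display, executor: Executor): Consumer<Display>? {
    if (!display.isHdrSdrRatioAvailable) return null
    val listener = Consumer<Display> { d ->
        val ratio = d.hdrSdrRatio
        // React here, e.g. tighten the window's desired headroom or add scrims
        // behind text when the ratio grows large.
    }
    display.registerHdrSdrRatioChangedListener(executor, listener)
    return listener // keep the reference so you can unregister it later
}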
It might be tempting to "brighten" SDR content instead. Resist this temptation! This will cause your application to be too bright, especially if there are other applications or system UI elements on-screen.
How to control HDR headroom
Android 15 introduced a control for desired HDR headroom. You can have your application request that the system use a particular HDR headroom based on the context of your desired UI:
- If you only want to show SDR content, simply request no headroom.
- If you only want to show HDR content, then request a high HDR headroom up to and according to the demands of the content.
- If you want to show a mixture of HDR and SDR content, then you can request an intermediate headroom value accordingly. Typical headroom amounts are around 2x for a mixed scene and 5-8x for a fully HDR scene.
Here is some example usage:
// Required for the window to respect the desired HDR headroom.
// Note that the equivalent API on SurfaceView does NOT require
// COLOR_MODE_HDR to constrain headroom if there is HDR content displayed
// on the SurfaceView.
window.colorMode = ActivityInfo.COLOR_MODE_HDR

// Illustrative values: different headroom values may be used depending on
// the desired headroom of the content AND the particularities of the app's
// UI design.
window.desiredHdrHeadroom = if (/* SDR only */) {
    0f
} else if (/* Mixed, mostly SDR */) {
    1.5f
} else if (/* Mixed, mostly HDR */) {
    3f
} else {
    /* HDR only */
    5f
}
Other platforms also have APIs that give developers some control over constraining HDR content in their applications.
Web platforms have a coarser concept: the First Public Working Draft of the CSS Color HDR Module adds a constrained-high option to constrain the headroom for mixed HDR and SDR scenes. Within the Apple ecosystem, constrainedHigh is similarly coarse, reckoning with the challenges of displaying mixed HDR and SDR scenes on consumer displays.
If you are a developer who is considering supporting HDR, be thoughtful about how HDR interacts with your UI and use HDR headroom controls appropriately.
*There are other mechanisms the eye employs for light adaptation, like pupillary light reflex, which amplifies this visual phenomenon (brighter peak HDR light means the pupil constricts, which causes less light to hit the retina).
10 Sep 2025 2:00pm GMT
#WeArePlay: Meet the people using Google AI to solve problems in agriculture, education, and pet care
Posted by Robbie McLachlan - Developer Marketing
In our latest #WeArePlay stories, we meet the people using Google AI to drive positive change with their apps and games on Google Play - from diagnosing crop diseases with a single photo to reuniting lost pets with a simple nose print.
Here are a few of our favorites:
Jesse and Ken's app Petnow uses AI-powered nose print recognition to identify individual dogs and cats, helping to reunite lost pets with their owners.
Inspired by his lifelong love of dogs, Jesse teamed up with Vision AI expert Ken to create Petnow. Their app uses nose print recognition to identify individual dogs and cats, helping to reunite lost pets with their owners. Recent AI updates, enhanced by Google Gemini, now let people search by breed, color, and size simply by taking a photo with their device. Next, the team plans to expand globally, aiming to help owners everywhere stay connected with their furry companions.
Simone and Rob's app, Plantix, uses AI to identify crop diseases from photos and suggests remedies.
While testing soil in the Amazon, PhD students Simone and Rob were asked by farmers for help diagnosing crop diseases. The couple quickly realized that local names for plant illnesses didn't match research terms, making solutions hard to find. So they created Plantix, an AI app that uses Google Vision Transformer (ViT) to identify crop problems from photos and suggest remedies in multiple languages. Their mission continues to grow; now based in India, they are building a new startup to help farmers source eco-friendly products. With global expansion in mind, the team aims to add a speech-based assistant to give farmers real-time, personalized advice.
Gabriel and Isaac's game, Afrilearn, uses AI powered by Google Cloud to make education fun and accessible for children across West Africa.
Inspired by their own upbringing in Lagos, friends Gabriel and Isaac believe every child deserves a chance to succeed through education. They built Afrilearn, a gamified learning app that uses animation, storytelling, and AI to make lessons aligned with local curriculums engaging and accessible. Already helping thousands of learners, they are now expanding into school management tools to continue their mission of unlocking every child's potential.
Discover other inspiring app and game founders featured in #WeArePlay.

10 Sep 2025 9:00am GMT