18 Sep 2025
TalkAndroid
The Rookie: the truth is out about the show’s future after Season 7
After seven action-packed seasons filled with chase scenes, heart-pounding standoffs, and the kind of emotional twists that keep…
18 Sep 2025 6:30am GMT
17 Sep 2025
Android Developers Blog
Android 16 QPR2 Beta 2 is Here
Posted by Matthew McCullough, VP of Product Management, Android Developer
Android 16 QPR2 has reached Platform Stability today with Beta 2! That means the API surface is locked and the app-facing behaviors are final, so you can incorporate them into your apps and take advantage of our latest platform innovations.
New in the QPR2 Beta
Testing developer verification
To better protect Android users from repeat offenders, Android is introducing developer verification, a new requirement that makes app installation safer by preventing the spread of malware and scams. Starting in September 2026, and in specific regions, Android will require apps to be registered by verified developers in order to be installed on certified Android devices, with an exception for installs through the Android Debug Bridge (ADB).
As a developer, you are free to install apps without verification by using ADB, so you can continue to test apps that are not intended or not yet ready to distribute to the wider consumer population.
For apps that enable user-initiated installation of app packages, Android 16 QPR2 Beta 2 contains new APIs that support developer verification during installation, along with a new adb command to let you force a verification outcome for testing purposes.
adb shell pm set-developer-verification-result
By using this command (see adb shell pm help for full details), you can now simulate verification failures. This allows you to understand the end-to-end user experience for both successful and unsuccessful verification, so you can prepare accordingly before enforcement begins.
We encourage all developers who distribute apps on certified Android devices to sign up for early access to get ready and stay updated.
SMS OTP Protection
The delivery of messages containing an SMS retriever hash will be delayed for most apps by three hours to help prevent OTP hijacking. The RECEIVE_SMS broadcast will be withheld, and SMS provider database queries will be filtered. The SMS will become available to these apps after the three-hour delay.
Certain apps, such as the default SMS, assistant, and dialer apps, along with connected device companion apps and system apps, will be exempt from this delay, and apps can continue to use the SMS Retriever API to access messages intended for them in a timely manner.
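As a refresher, here is a minimal sketch, not from the original post, of arming the SMS Retriever with the Google Play services client; the helper name is illustrative, and the message itself arrives later via an SmsRetriever.SMS_RETRIEVED_ACTION broadcast:

import android.content.Context
import com.google.android.gms.auth.api.phone.SmsRetriever

// Arm the retriever so the next SMS containing this app's hash is
// delivered to the app in a timely manner.
fun startSmsRetriever(context: Context) {
    SmsRetriever.getClient(context)
        .startSmsRetriever()
        .addOnSuccessListener {
            // The retriever is active; register a BroadcastReceiver for
            // SmsRetriever.SMS_RETRIEVED_ACTION to receive the message.
        }
        .addOnFailureListener {
            // Could not start the retriever; fall back to manual entry.
        }
}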
Custom app icon shapes
More efficient garbage collection
The Android Runtime (ART) now includes a Generational Concurrent Mark-Compact (CMC) Garbage Collector in Android 16 QPR2 that focuses collection efforts on newly allocated objects, which are more likely to be garbage. You can expect reduced CPU usage from garbage collection, a smoother user experience with less jank, and improved battery efficiency.
Native step tracking and expanded exercise data in Health Connect
Health Connect now automatically tracks steps using the device's sensors. If your app has the READ_STEPS permission, this data will be available from the "android" package. Not only does this simplify the code needed to do step tracking, it's more power efficient as well.
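As an illustration, here is a minimal sketch, assuming the READ_STEPS permission has already been granted and a HealthConnectClient has been obtained, of reading the last 24 hours of step records, which now includes the sensor-derived data from the "android" package:

import androidx.health.connect.client.HealthConnectClient
import androidx.health.connect.client.records.StepsRecord
import androidx.health.connect.client.request.ReadRecordsRequest
import androidx.health.connect.client.time.TimeRangeFilter
import java.time.Instant
import java.time.temporal.ChronoUnit

// Sum the step counts recorded over the last 24 hours.
suspend fun readRecentSteps(client: HealthConnectClient): Long {
    val now = Instant.now()
    val response = client.readRecords(
        ReadRecordsRequest(
            recordType = StepsRecord::class,
            timeRangeFilter = TimeRangeFilter.between(now.minus(24, ChronoUnit.HOURS), now)
        )
    )
    return response.records.sumOf { it.count }
}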
Also, the ExerciseSegment and ExerciseSession data types have been updated. You can now record and read weight, set index, and Rate of Perceived Exertion (RPE) for exercise segments. Since Health Connect is updated independently of the platform, checking for feature availability before writing the data will ensure compatibility with the current local version of Health Connect.
// Check if the expanded exercise features are available
val newFieldsAvailable = healthConnectClient.features.getFeatureStatus(
    HealthConnectFeatures.FEATURE_EXPANDED_EXERCISE_RECORD
) == HealthConnectFeatures.FEATURE_STATUS_AVAILABLE

val segment = ExerciseSegment(
    //...
    // Conditionally add the new data fields
    weight = if (newFieldsAvailable) Mass.fromKilograms(50.0) else null,
    setIndex = if (newFieldsAvailable) 1 else null,
    rateOfPerceivedExertion = if (newFieldsAvailable) 7.0f else null
)
A minor SDK version
QPR2 marks the first Android release with a minor SDK version, allowing us to innovate more rapidly with new platform APIs delivered outside of our usual once-yearly timeline. Unlike the major platform release (Android 16) in 2025-Q2, which included behavior changes that impact app compatibility, the changes in this release are largely additive and designed to minimize the need for additional app testing.

Your app can safely call the new APIs on devices where they are available by using SDK_INT_FULL and the respective value from the VERSION_CODES_FULL enumeration.
if (Build.VERSION.SDK_INT_FULL >= Build.VERSION_CODES_FULL.BAKLAVA_1) {
    // Call new APIs from the Android 16 QPR2 release
}
You can also use the Build.getMinorSdkVersion() method to get just the minor SDK version number.
val minorSdkVersion = Build.getMinorSdkVersion(VERSION_CODES_FULL.BAKLAVA)
The original VERSION_CODES enumeration can still be compared against SDK_INT for APIs declared in major (non-minor) releases.
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.BAKLAVA) {
    // Call new APIs from the Android 16 release
}
Since minor releases aren't intended to have breaking behavior changes, they cannot be used in the uses-sdk manifest attributes.
Get started with the Android 16 QPR2 beta
You can enroll any supported Pixel device to get this and future Android Beta updates over-the-air. If you don't have a Pixel device, you can use the 64-bit system images with the Android Emulator in Android Studio. If you are already in the Android Beta program, you will be offered an over-the-air update to Beta 2. We'll update the system images and SDK regularly throughout the Android 16 QPR2 release cycle.
If you are in the Canary program and would like to enter the Beta program, you will need to wipe your device and manually flash it to the beta release.
For the best development experience with Android 16 QPR2, we recommend that you use the latest Canary version of Android Studio Narwhal Feature Drop.
We're looking for your feedback, so please report issues and submit feature requests on the feedback page. The earlier we get your feedback, the more of it we can include in our work on the final release. Thank you for helping to shape the future of the Android platform.
17 Sep 2025 8:04pm GMT
TalkAndroid
Redmi 15 Series Debuts with 7000mAh Battery and Expansive 6.9 Inch Display
Xiaomi has unleashed the Redmi 15 Series in London. The new lineup packs a massive 7000mAh battery and…
17 Sep 2025 6:03pm GMT
Google’s New Windows Search App Redefines Desktop Search Experience
Google has launched an experimental Windows search app that could revolutionize how millions of users find files on…
17 Sep 2025 4:15pm GMT
Netflix in September: 10 must-watch series to kick off your fall season
September might signal the return to work, routines, and packed calendars - but Netflix has other plans. The…
17 Sep 2025 3:30pm GMT
Trump Mobile’s Early Phones Fail US-Made and Budget Promises
Trump Mobile's newest smartphones are neither manufactured in the United States nor offered at discount prices. This contradicts…
17 Sep 2025 3:01pm GMT
Magic Chronicle: Isekai RPG Codes – September 2025
Find all the latest Magic Chronicle: Isekai RPG codes here!
17 Sep 2025 12:41pm GMT
Mafia City Codes – September 2025
Find the latest Mafia City Codes here! Keep reading for more!
17 Sep 2025 12:40pm GMT
Warhammer Tacticus Codes – September 2025
Find the latest Warhammer Tacticus Codes here! Keep reading for more!
17 Sep 2025 12:39pm GMT
Cyber Rebellion Codes – September 2025
Find all the latest Cyber Rebellion codes here! Keep reading for more.
17 Sep 2025 12:38pm GMT
Honkai Star Rail Codes – September 2025
Find all the latest Honkai Star Rail codes here! Keep reading for more.
17 Sep 2025 12:36pm GMT
Idle Heroes Codes – September 2025
Find the latest Idle Heroes codes here! Keep reading for more.
17 Sep 2025 12:35pm GMT
Redecor Codes – September 2025
Find the latest Redecor Codes here! Keep reading for more.
17 Sep 2025 12:34pm GMT
Coin Tales Free Spins – Updated Every Day!
Tired of running out of Coin Tales Free Spins? We update our links daily, so you won't have that problem again!
17 Sep 2025 12:32pm GMT
Coin Master Free Spins & Coins Links
Find all the latest Coin Master free spins right here! We update daily, so be sure to check in daily!
17 Sep 2025 12:31pm GMT
Monopoly Go – Free Dice Links Today (Updated Daily)
If you keep on running out of dice, we have just the solution! Find all the latest Monopoly Go free dice links right here!
17 Sep 2025 12:30pm GMT
The Best OnePlus Deals in the US and UK: Phones, Tablets, and Freebies
OnePlus delivers serious savings this month. Fresh deals span flagship phones, tablets, and bundled accessories across US and…
17 Sep 2025 12:30pm GMT
15 Sep 2025
Android Developers Blog
Simplifying advanced networking with DHCPv6 Prefix Delegation
Posted by Lorenzo Colitti - TL, Android Core Networking and Patrick Rohr - Software Engineer, Android Core Networking
IPv4 complicates app code and causes battery impact
Most of today's Internet traffic still uses IPv4, which cannot provide transparent end-to-end connectivity to apps. IPv4 provides only 2^32 addresses - far fewer than the number of devices on today's Internet - so it's not possible to assign a public IPv4 address to every Android device, let alone to individual apps or functions within a device. As a result, most Internet users have private IPv4 addresses and share a public IPv4 address with other users of the same network using Network Address Translation (NAT). NAT makes it difficult to build advanced networking apps such as video calling apps or VPNs, because these sorts of apps need to periodically send packets to keep NAT sessions alive (which hurts battery) and implement complex protocols such as STUN to allow devices to connect to each other through NAT.
Why IPv6 hasn't solved this problem yet
The new version of the Internet protocol, IPv6 - now used by about half of all Google users - provides virtually unlimited address space and the ability for devices to use multiple addresses. When every device can get global IPv6 addresses, there is no need to use NAT for address sharing! But although the address space itself is no longer limited, the current IPv6 address assignment methods used on Wi-Fi, such as SLAAC and DHCPv6 IA_NA, still have limitations.
For one thing, both SLAAC and DHCPv6 IA_NA require the network to maintain state for each individual address, so assigning more than a few IPv6 addresses to every Android device can cause scaling issues on the network. This means it's often not possible to assign IPv6 addresses to VMs or containers within the device, or to wearable devices and other tethered devices connected to it. For example, if your app is running on a wearable device connected to an Android phone, or on a tablet tethered to an Android phone that's connected to Wi-Fi, it likely won't have IPv6 connectivity and will need to deal with the complexities and battery impact of NAT.
Additionally, we've heard feedback from some users and network operators that they desire more control over the IPv6 addresses used by Android devices. Until now, Android only supported SLAAC, which does not allow networks to assign predictable IPv6 addresses, and makes it more difficult to track the mapping between IPv6 addresses and the devices using them. This has limited the availability of IPv6 on Android devices on some networks.
The solution: dedicated IPv6 address blocks with DHCPv6 PD
To overcome these drawbacks, we have added support for DHCPv6 Prefix Delegation (PD) as defined in RFC 8415 and RFC 9762. The Android network stack can now request a dedicated prefix from the network, and if it obtains a prefix, it will use it to obtain IPv6 connectivity. In future releases, the device will be able to share the prefix with wearable devices, tethered devices, virtual machines, and stub networks such as Thread, providing all these devices with global IPv6 connectivity. This truly realizes the potential of IPv6 to allow end-to-end, scalable connectivity to an unlimited number of devices and functions, without requiring NAT. And because the prefix is assigned by the network, network operators can use existing DHCPv6 logging infrastructure to track which device is using which prefix (see RFC 9663 for guidance to network operators on deploying DHCPv6 PD).
This allows networks to fully realize the potential of IPv6: devices maintain the flexibility of SLAAC, such as the ability to use a nearly unlimited number of addresses, and the network maintains the manageability and accountability of a traditional DHCPv6 setup. We hope that this will allow more networks to transition to IPv6, providing apps with end-to-end IPv6 connectivity and reducing the need for NAT traversal and keepalives.
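DHCPv6 PD is handled by the platform's network stack, so there is no new API for apps to call. As a minimal sketch, assuming you simply want to know whether the active network already offers usable IPv6 (the helper name is ours), the standard ConnectivityManager APIs are enough:

import android.net.ConnectivityManager
import java.net.Inet6Address

// Returns true if the active network has at least one IPv6 address that
// is not link-local or loopback, i.e. likely usable end to end.
fun hasGlobalIpv6(cm: ConnectivityManager): Boolean {
    val network = cm.activeNetwork ?: return false
    val linkProperties = cm.getLinkProperties(network) ?: return false
    return linkProperties.linkAddresses.any { linkAddress ->
        val address = linkAddress.address
        address is Inet6Address &&
            !address.isLinkLocalAddress &&
            !address.isLoopbackAddress
    }
}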
What this means for app developers
15 Sep 2025 9:00pm GMT
10 Sep 2025
Android Developers Blog
HDR and User Interfaces
Posted by Alec Mouri - Software Engineer
As explained in What is HDR?, we can think of HDR as only referring to a luminance range brighter than SDR. When integrating HDR content into a user interface, you must be careful when your user interface is primarily SDR colors and assets. The human visual system adapts to perceived color based on the surrounding environment, which can lead to surprising results. We'll look at one pertinent example.
Simultaneous Contrast
Consider the following image:

This image shows two gray rectangles with different background colors. For most people viewing this image, the two gray rectangles appear to be different shades of gray: the topmost rectangle with a darker background appears to be a lighter shade than the bottommost rectangle with a lighter background.
But these are the same shades of gray! You can prove this to yourself by using your favorite color picking tool or by looking at the below image:

This illustrates a visual phenomenon called simultaneous contrast. Readers who are interested in the biological explanation may learn more here.
Nearby differences in color are therefore "emphasized": colors appear darker when immediately next to brighter colors. That same color would appear lighter when immediately next to darker colors.
Implications on Mixing HDR and SDR
Simultaneous contrast affects the appearance of user interfaces that present a mixture of HDR and SDR content. The peak luminance allowed by HDR creates an effect of simultaneous contrast: the eye will adapt* to a higher peak luminance (and oftentimes a higher average luminance in practice), which perceptually causes SDR content to appear dimmer even though the SDR content's luminance has not technically changed at all. Users may describe this as: my phone screen became "grey" or "washed out".
We can see this phenomenon in the below image. The device on the right simulates how photos may appear with an SDR UI, if those photos were rendered as HDR. Note that the August photos look identical when compared side-by-side, but the quality of the SDR UI is visually degraded.

Applications, when designing for HDR, need to consider how "much" SDR is shown at any given time in their screens when controlling how bright HDR is "allowed" to be. A UI that is dominated by SDR, such as a gallery view where small amounts of HDR content are displayed, can suddenly appear to be darker than expected.
When building your UI, consider the impact of HDR on text legibility or the appearance of nearby SDR assets, and use the appropriate APIs provided by your platform to constrain HDR brightness, or even disable HDR. For example, a 2x headroom for HDR brightness may be acceptable to balance the quality of your HDR scene with your SDR elements. In contrast, a UI that is dominated by HDR, such as full-screen video without other UI elements on top, does not need to consider this as strongly, as the focus of the UI is on the HDR content itself. In those situations, a 5x headroom (or higher, depending on content metadata such as UltraHDR's max_content_boost) may be more appropriate.
It might be tempting to "brighten" SDR content instead. Resist this temptation! This will cause your application to be too bright, especially if there are other applications or system UI elements on-screen.
How to control HDR headroom
Android 15 introduced a control for desired HDR headroom. You can have your application request that the system uses a particular HDR headroom based on the context around your desired UI:
- If you only want to show SDR content, simply request no headroom.
- If you only want to show HDR content, then request a high HDR headroom up to and according to the demands of the content.
- If you want to show a mixture of HDR and SDR content, then you can request an intermediate headroom value accordingly. Typical headroom amounts are around 2x for a mixed scene and 5-8x for a fully-HDR scene.
Here is some example usage:
// Required for the window to respect the desired HDR headroom.
// Note that the equivalent API on SurfaceView does NOT require
// COLOR_MODE_HDR to constrain headroom, if there is HDR content displayed
// on the SurfaceView.
window.colorMode = ActivityInfo.COLOR_MODE_HDR

// Illustrative values: different headroom values may be used depending on
// the desired headroom of the content AND particularities of the app's UI
// design.
window.desiredHdrHeadroom =
    if (/* SDR only */) {
        0f
    } else if (/* Mixed, mostly SDR */) {
        1.5f
    } else if (/* Mixed, mostly HDR */) {
        3f
    } else {
        /* HDR only */
        5f
    }
Other platforms also have APIs that allow for developers to have some control over constraining HDR content in their application.
Web platforms have a more coarse concept: The First Public Working Draft of the CSS Color HDR Module adds a constrained-high option to constrain the headroom for mixed HDR and SDR scenes. Within the Apple ecosystem, constrainedHigh is similarly coarse, reckoning with the challenges of displaying mixed HDR and SDR scenes on consumer displays.
If you are a developer who is considering supporting HDR, be thoughtful about how HDR interacts with your UI and use HDR headroom controls appropriately.
*There are other mechanisms the eye employs for light adaptation, like pupillary light reflex, which amplifies this visual phenomenon (brighter peak HDR light means the pupil constricts, which causes less light to hit the retina).
10 Sep 2025 2:00pm GMT
#WeArePlay: Meet the people using Google AI to solve problems in agriculture, education, and pet care
Posted by Robbie McLachlan - Developer Marketing
In our latest #WeArePlay stories, we meet the people using Google AI to drive positive change with their apps and games on Google Play - from diagnosing crop diseases with a single photo to reuniting lost pets with a simple nose print.
Here are a few of our favorites:
Jesse and Ken's app Petnow uses AI-powered nose print recognition to identify individual dogs and cats, helping to reunite lost pets with their owners.
Inspired by his lifelong love of dogs, Jesse teamed up with Vision AI expert Ken to create Petnow. Boosted by Gemini, their app uses nose print recognition to identify individual dogs and cats, helping to reunite lost pets with their owners. Recent AI updates, enhanced by Google Gemini, now let people search by breed, color, and size simply by taking a photo with their device. Next, the team plans to expand globally, aiming to help owners everywhere stay connected with their furry companions.
Simone and Rob's app, Plantix, uses AI to identify crop diseases from photos and suggests remedies.
While testing soil in the Amazon, PhD students Simone and Rob were asked by farmers for help diagnosing crop diseases. The couple quickly realized that local names for plant illnesses didn't match research terms, making solutions hard to find. So they created Plantix, an AI app that uses Google Vision Transformer (ViT) to identify crop problems from photos and suggest remedies in multiple languages. Their mission continues to grow; now based in India, they are building a new startup to help farmers source eco-friendly products. With global expansion in mind, the team aims to add a speech-based assistant to give farmers real-time, personalized advice.
Gabriel and Isaac's app, Afrilearn, uses AI powered by Google Cloud to make education fun and accessible for children across West Africa.
Inspired by their own upbringing in Lagos, friends Gabriel and Isaac believe every child deserves a chance to succeed through education. They built Afrilearn, a gamified learning app that uses animation, storytelling, and AI to make lessons aligned with local curriculums engaging and accessible. Already helping thousands of learners, they are now expanding into school management tools to continue their mission of unlocking every child's potential.
Discover other inspiring app and game founders featured in #WeArePlay.

10 Sep 2025 9:00am GMT
09 Sep 2025
Android Developers Blog
Improve app performance with optimized resource shrinking
Posted by Johan Bay - Software Engineer
A small and fast app is key to a fantastic user experience. That's why we built R8, our app optimizer, which streamlines your app by removing unused code and resources, rewriting code to optimize runtime performance, and more.
With the release of version 8.12.0 of the Android Gradle Plugin (AGP), we're introducing optimized resource shrinking, an even better way to shrink your app with R8. By opting in, you can make your app smaller, which means smaller downloads, faster installations, and less memory used on your users' devices. The result is a faster startup, improved rendering, and fewer ANRs.
How it works
Resource shrinking for Android apps has been around for a long time, with several improvements made along the way - for instance, shrinking the resource table (resources.arsc) is now a default optimization.
The new approach improves resource shrinking by fully integrating it with the existing code optimization pipeline. R8 now optimizes both code and resource references at the same time, ensuring that all resources referenced exclusively from unused code are identified as unused and then removed. This completely eliminates the need for the unconditional keep rules generated by AAPT2 (the resource packaging tool for Android) and provides much more fine-grained and precise information for discarding unused code and resources.
This is an improvement over the existing pipeline where code and resource optimization are separate. In the existing pipeline, AAPT2 generates keep rules to unconditionally keep classes referenced from resources. Then, R8 optimization runs with these keep rules. After R8 is done optimizing and shrinking the code, it scans the remaining code to build a graph of all the resources referenced directly or indirectly. However, the unconditional AAPT2 rules often keep code that is otherwise unused, which in turn causes R8 to keep both the unused code and the unused resources referenced by it.
How to use it
First, turn on R8 optimization with resource shrinking, by using the following in your build.gradle.kts file:
android {
    buildTypes {
        release {
            isMinifyEnabled = true
            isShrinkResources = true
            …
        }
    }
}
Turn on the new optimized resource shrinking pipeline by adding the following to your gradle.properties file:
android.r8.optimizedResourceShrinking=true
Benefits
The optimized resource shrinking pipeline has shown significant improvements over the existing implementation. For apps that share significant resources and code across different form factor verticals, we measured improvements of over 50% in app size. Smaller apps see improvements as well - for example, in Androidify we see the following gains:

The table shows the progressive improvements in size as additional optimizations are enabled, from no shrinking to optimized resource shrinking. The cells marked with an asterisk (*) indicate improved numbers compared to the previous row. Enabling R8 trims the size of your DEX, and enabling resource shrinking removes unused resources in both the res folder and the resource table but does not change the DEX size further. Finally, optimized resource shrinking reduces the size even more by removing both resources and DEX code, since it can trace references across the DEX and resource boundary.
Next steps
Starting with AGP 9.0.0, optimized resource shrinking becomes the standard behavior for any project that has the resource shrinker turned on.
Check out the newly updated documentation to try optimized resource shrinking and let us know if you encounter any problems on the issue tracker.
09 Sep 2025 4:00pm GMT
05 Sep 2025
Android Developers Blog
Elevating media playback: Introducing preloading with Media3 - Part 1
Posted by Mayuri Khinvasara Khabya - Developer Relations Engineer (LinkedIn and X)
In today's media-centric apps, delivering a smooth, uninterrupted playback experience is key to a delightful user experience. Users expect their videos to start instantly and play seamlessly without pauses.
The core challenge is latency. Traditionally, a video player only starts its work (connecting, downloading, parsing, buffering) after the user has chosen an item for playback. This reactive approach is too slow for today's short-form video context. The solution is to be proactive: anticipate what the user will watch next and get the content ready ahead of time. This is the essence of preloading.
The key benefits of preloading include:
- 🚀 Faster Playback Start: Videos are already ready to go, leading to quicker transitions between items and a more immediate start.
- 📉 Reduced Buffering: By proactively loading data, playback is far less likely to stall, for example due to network hiccups.
- ✨ Smoother User Experience: The combination of faster starts and less buffering creates a more fluid, seamless interaction for users to enjoy.
In this three-part series, we'll introduce and deep dive into Media3's powerful utilities for (pre)loading components.
- In Part 1, we'll cover the foundations: understanding the different preloading strategies available in Media3, enabling PreloadConfiguration and setting up the DefaultPreloadManager, enabling your app to preload items. By the end of this blog, you should be able to preload and play media items with your configured ranking and duration.
- In Part 2, we'll get into more advanced topics of DefaultPreloadManager: using listeners for analytics, exploring production-ready best practices like the sliding window pattern and custom shared components of DefaultPreloadManager and ExoPlayer.
- In Part 3, we'll dive deep into disk caching with DefaultPreloadManager.
Preloading to the rescue! 🦸♀️
The core idea behind preloading is simple: load media content before you need it. By the time a user swipes to the next video, the first segments of the video are already downloaded and available, ready for immediate playback.
Think of it like a restaurant. A busy kitchen doesn't wait for an order to start chopping onions. 🧅 They do their prep work in advance. Preloading is the prep work for your video player.
When enabled, preloading can help minimize join latency when a user skips to the next item before the playback buffer reaches the next item. The first period of the next window is prepared and video, audio and text samples are buffered. The preloaded period is later queued into the player with buffered samples immediately available and ready to be fed to the codec for rendering.
In Media3 there are two primary APIs for preloading, each suited for different use cases. Choosing the right API is the first step.
1. Preloading playlist items with PreloadConfiguration
This is the simple approach, useful for linear, sequential media like playlists where the playback order is predictable (such as a series of episodes). You give the player the full list of media items using ExoPlayer's playlist APIs and set the PreloadConfiguration on the player; it then automatically preloads the next items in the sequence as configured. This API attempts to reduce join latency when a user skips to the next item before the playback buffer has reached it.
Preloading is only started when no media is being loaded for the ongoing playback, which prevents it from competing for bandwidth with the primary playback.
If you're still not sure whether you need preloading, this API is a great low-lift option to try it out!
player.preloadConfiguration = PreloadConfiguration(/* targetPreloadDurationUs= */ 5_000_000L)
With the PreloadConfiguration above, the player tries to preload five seconds of media for the next item in the playlist.
Once opted in, playlist preloading can be turned off again by using PreloadConfiguration.DEFAULT:
player.preloadConfiguration = PreloadConfiguration.DEFAULT
2. Preloading dynamic lists with PreloadManager
For dynamic UIs like vertical feeds or carousels, where the "next" item is determined by user interaction, the PreloadManager API is appropriate. This is a new powerful, standalone component within the Media3 ExoPlayer library specifically designed to proactively preload. It manages a collection of potential MediaSources, prioritizing them based on proximity to the user's current position and offers granular control over what to preload, suitable for complex scenarios like dynamic feeds of short form videos.
Setting Up Your PreloadManager
The DefaultPreloadManager is the canonical implementation for PreloadManager.
The builder of DefaultPreloadManager can build both the DefaultPreloadManager and any ExoPlayer instances that will play its preloaded content. To create a DefaultPreloadManager, you will need to pass a TargetPreloadStatusControl, which the preload manager can query to find out how much to load for an item. We will explain and define an example of TargetPreloadStatusControl in the section below.
val preloadManagerBuilder = DefaultPreloadManager.Builder(context, targetPreloadStatusControl)
val preloadManager = preloadManagerBuilder.build()

// Build ExoPlayer with DefaultPreloadManager.Builder
val player = preloadManagerBuilder.buildExoPlayer()
Using the same builder for both the ExoPlayer and the DefaultPreloadManager is necessary, as it ensures that the components under the hood are correctly shared.
And that's it! You now have a manager ready to receive instructions.
Configuring Duration and Ranking with TargetPreloadStatusControl
What if you want to preload, say, 10 seconds of video? You provide the position of your media items in the carousel, and the DefaultPreloadManager prioritizes loading the items based on how close each one is to the item the user is currently playing.
If you want to control how much of each item's duration to preload, you can specify that with the DefaultPreloadManager.PreloadStatus you return.
For example,
- Item 'A' is the highest priority: load 5 seconds of video.
- Item 'B' is medium priority: when you get to it, load 3 seconds of video.
- Item 'C' is lower priority: load only tracks.
- Item 'D' is even lower priority: just prepare.
- Any other items are far away: don't preload anything.
This granular control can help you optimize resource utilization, which is recommended for seamless playback.
import androidx.media3.common.C
import androidx.media3.exoplayer.DefaultPreloadManager.PreloadStatus
import androidx.media3.exoplayer.source.preload.TargetPreloadStatusControl
import kotlin.math.abs

class MyTargetPreloadStatusControl(
    // The app is responsible for updating this based on UI state
    var currentPlayingIndex: Int = C.INDEX_UNSET
) : TargetPreloadStatusControl<Int, PreloadStatus> {

    override fun getTargetPreloadStatus(index: Int): PreloadStatus? {
        val distance = index - currentPlayingIndex

        // Adjacent item (next): preload 5 seconds
        if (distance == 1) {
            // Return a PreloadStatus labelled STAGE_SPECIFIED_RANGE_LOADED that
            // suggests loading 5000ms from the default start position
            return PreloadStatus.specifiedRangeLoaded(5000L)
        }
        // Adjacent item (previous): preload 3 seconds
        else if (distance == -1) {
            // Return a PreloadStatus labelled STAGE_SPECIFIED_RANGE_LOADED that
            // suggests loading 3000ms from the default start position
            return PreloadStatus.specifiedRangeLoaded(3000L)
        }
        // Items two positions away: just select tracks
        else if (abs(distance) == 2) {
            // Return a PreloadStatus labelled STAGE_TRACKS_SELECTED
            return PreloadStatus.TRACKS_SELECTED
        }
        // Items up to four positions away: just prepare
        else if (abs(distance) <= 4) {
            // Return a PreloadStatus labelled STAGE_SOURCE_PREPARED
            return PreloadStatus.SOURCE_PREPARED
        }
        // All other items are too far away
        return null
    }
}
Tip: PreloadManager can keep both the previous and next items preloaded, whereas the PreloadConfiguration will only look ahead to the next items.
Managing Preloading Items
With your manager created, you can start telling it what to work on. As your user scrolls through a feed, you'll identify the upcoming videos and add them to the manager. The interaction with the PreloadManager is a state-driven conversation between your UI and the preloading engine.
1. Add Media Items
As you populate your feed, you must inform the manager of the media it needs to track. If you are starting out, you can add the entire list you want to preload; subsequently, you can add single items as needed. You have full control over what items are in the preloading list, which means you also have to manage what is added to and removed from the manager.
val initialMediaItems = pullMediaItemsFromService(/* count= */ 20)
for (index in 0 until initialMediaItems.size) {
    preloadManager.add(initialMediaItems[index], index)
}
The manager will now start fetching data for these MediaItems in the background.
After adding items, tell the manager to re-evaluate its list, hinting that something has changed (such as an item being added or removed, or the user switching to play a new item):
preloadManager.invalidate()
2. Retrieve and Play an Item
Here comes the main playback logic. When the user decides to play that video, you don't need to create a new MediaSource. Instead, you ask the PreloadManager for the one it has already prepared. You can retrieve the MediaSource from the Preload Manager using the MediaItem.
If the item retrieved from the PreloadManager is null, the mediaItem hasn't been preloaded or added to the PreloadManager yet, so you should set the mediaItem on the player directly.
// When a media item is about to display on the screen
val mediaSource = preloadManager.getMediaSource(mediaItem)
if (mediaSource != null) {
    player.setMediaSource(mediaSource)
} else {
    // If mediaSource is null, that mediaItem hasn't been added yet.
    // So, send it directly to the player.
    player.setMediaItem(mediaItem)
}
player.prepare()

// When the media item is displaying at the center of the screen
player.play()
By preparing the MediaSource retrieved from the PreloadManager, you seamlessly transition from preloading to playback, using the data that's already in memory. This is what makes the start time faster.
3. Keep the current index in sync with the UI
Since our feed or list can be dynamic, it's important to notify the PreloadManager of your current playing index so that it can always prioritize items nearest to the current index for preloading.
preloadManager.setCurrentPlayingIndex(currentIndex)
// Need to call invalidate() to update the priorities
preloadManager.invalidate()
4. Remove an Item
To keep the manager efficient, you should remove items it no longer needs to track, such as items that are far away from the user's current position.
// When an item is too far from the current playing index
preloadManager.remove(mediaItem)
If you need to clear all items at once, you can call preloadManager.reset().
5. Release the Manager
When you no longer need the PreloadManager (e.g., when your UI is destroyed), you must release it to free up its resources. A good place to do this is where you already release your Player's resources. It's recommended to release the manager before the player, since the player can continue to play if you no longer need preloading.
// In your Activity's onDestroy() or Composable's onDispose
preloadManager.release()
Demo time
Check it live in action 👍
In the demo below, we see the impact of PreloadManager on the right side, which has faster load times, while the left side shows the existing experience. You can also view the code sample for the demo. (Bonus: it also displays startup latency for every video.)
![Jetpack Media3 API for fast loading of short videos [PreloadManager]](https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgPRidTRe3s-2TIek6_YlfUsFwxRAaH-6rynAMXLX7UwVM9b-PhCAfOqAWwrTIBj6pSfntZizlYLU6nhmkNqbwwvBLGkWRjsOKNb-c56Jr8gutxBkSZidmOP1T420tBU5knaVvuAygMKHnDV8epXMc2TuIZN2klfnIKulNQUncwfz4fuuIlbLQ3yxhrhrU/s1600/Demo-PreloadManager%202.gif)
What's Next?
And that's a wrap for Part 1! You now have the tools to build a dynamic preloading system. You can either use PreloadConfiguration to preload the next item of a playlist in ExoPlayer or set up a DefaultPreloadManager, add and remove items on the fly, configure the target preload status, and correctly retrieve the preloaded content for playback.
In Part 2, we'll go deeper on the DefaultPreloadManager. We'll explore how to listen for preloading events, discuss best practices like using a sliding window to avoid memory issues, and peek under the hood at custom shared components of ExoPlayer and DefaultPreloadManager.
Do you have any feedback to share? We are eager to hear from you.
Stay tuned, and go make your app faster! 🚀
05 Sep 2025 5:00pm GMT
04 Sep 2025
Android Developers Blog
Best practices for migrating users to passkeys with Credential Manager
Posted by Niharika Arora (X and LinkedIn) - Senior Developer Relations Engineer and Vinisha Athwani - Technical Writer (LinkedIn)
In a world where digital security is becoming increasingly critical, passwords have become a notorious weak link - they're cumbersome, often insecure, and a source of frustration for users and developers. But there's good news: passkeys are gaining popularity as the most user-friendly, phishing-resistant, and secure authentication mechanism available. For Android developers, the Credential Manager API helps you guide your users towards using passkeys while ensuring continued support for traditional sign-in mechanisms, such as passwords.
In this blog, we discuss some of the best practices you should follow while encouraging users to transition to passkeys.
Understand authentication with passkeys
Before diving into the recommendations for encouraging the transition to passkeys, here's an overview of the fundamentals of authentication with passkeys:
- Passkeys: These are cryptographic credentials that replace passwords. Passkeys are associated with device unlocking mechanisms, and are the recommended method of authentication for apps and sites.
- Credential Manager: A Jetpack API that provides a unified API interface for interacting with different types of authentication, including passkeys, passwords, and federated sign-in mechanisms like Sign in with Google.
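As a concrete starting point, here is a minimal sketch of creating a passkey with the Jetpack Credential Manager API. The requestJson value (a WebAuthn registration challenge supplied by your server) and the helper name are illustrative assumptions:

import android.app.Activity
import androidx.credentials.CreatePublicKeyCredentialRequest
import androidx.credentials.CreatePublicKeyCredentialResponse
import androidx.credentials.CredentialManager
import androidx.credentials.exceptions.CreateCredentialException

// Ask the system to create a passkey from a server-provided
// WebAuthn registration challenge.
suspend fun createPasskey(activity: Activity, requestJson: String) {
    val credentialManager = CredentialManager.create(activity)
    val request = CreatePublicKeyCredentialRequest(requestJson = requestJson)
    try {
        val response = credentialManager.createCredential(activity, request)
                as CreatePublicKeyCredentialResponse
        // Send response.registrationResponseJson to your server to finish
        // registering the passkey.
    } catch (e: CreateCredentialException) {
        // Creation failed or was cancelled; keep existing sign-in methods available.
    }
}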
How do passkeys help your users?
There are several tangible benefits that users experience in apps that let them sign in with passkeys. The highlights of using passkeys are as follows:
- Improved sign-in experience: Users get the same UI whether they use passwords, passkeys or federated sign-in mechanisms like Sign in with Google.
- Reduced sign-in time: Instead of typing out passwords, users use their phone unlock mechanisms, such as biometrics, resulting in a smooth sign-in experience.
- Improved security: Passkeys use public-key cryptography so that data breaches of service providers don't result in a compromise of passkey-protected accounts, and are based on industry standard APIs and protocols to ensure they are not subject to phishing attacks. (Read more about syncing and security here).
- Unified experience across devices: With the ability to sync passkeys across devices, users benefit from simplified authentication regardless of the device they're using.
- No friction due to forgotten passwords!
Underscoring the improved experience with passkeys, we heard from several prominent apps. X observed that login rates improved 2x after adding passkeys to their authentication flows. KAYAK, a travel search engine, observed that the average time it takes their users to sign up and sign in reduced by 50% after they incorporated passkeys into their authentication flows. Zoho, a comprehensive cloud-based software suite focused on security and seamless experiences, achieved 6x faster logins by adopting passkeys in their OneAuth Android app.
What's in it for you?
When you migrate your app to use passkeys, you'll be leveraging the Credential Manager API which is the recommended standard for identity and authentication on Android.
Apart from passkeys, the Credential Manager API supports traditional sign-in mechanisms, simplifying the development and maintenance of your authentication flows!
For all of these sign-in mechanisms, Credential Manager offers an integrated bottom-sheet UI, saving you development efforts while offering users a consistent experience.
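For example, a single getCredential call can surface passkeys and saved passwords together in that bottom sheet. Here is a minimal sketch, where requestJson (a WebAuthn authentication challenge from your server) and the function name are assumptions:

import android.app.Activity
import androidx.credentials.CredentialManager
import androidx.credentials.GetCredentialRequest
import androidx.credentials.GetPasswordOption
import androidx.credentials.GetPublicKeyCredentialOption
import androidx.credentials.PasswordCredential
import androidx.credentials.PublicKeyCredential
import androidx.credentials.exceptions.GetCredentialException

// One request covering both passkeys and passwords; the system shows a
// single bottom sheet with whatever the user has available.
suspend fun signIn(activity: Activity, requestJson: String) {
    val credentialManager = CredentialManager.create(activity)
    val request = GetCredentialRequest(
        listOf(
            GetPublicKeyCredentialOption(requestJson = requestJson),
            GetPasswordOption()
        )
    )
    try {
        val result = credentialManager.getCredential(activity, request)
        when (val credential = result.credential) {
            is PublicKeyCredential -> {
                // Verify credential.authenticationResponseJson on your server
            }
            is PasswordCredential -> {
                // Validate credential.id and credential.password on your server
            }
        }
    } catch (e: GetCredentialException) {
        // No credential selected; offer manual sign-in instead.
    }
}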
When should you prompt users to use passkeys?
Now that we've established the benefits of passkeys, let's discuss how you should encourage your users to migrate to passkeys.
The following is a list of UX flows in which you can promote passkeys:
- User account registration: Introduce passkey creation prompts at key moments, such as when your users create their accounts:
Contextual Prompts during account creation
- Sign in: We recommend you prompt users to create a passkey right after they sign in with an OTP, password, or other sign-in mechanism.
Prompt passkey creation during sign-in
- Account recovery: The critical user journey (CUJ) for account recovery is one that has historically presented friction to users. Prompting users to adopt passkeys during account recovery is a recommended path; users who adopt passkeys get an account recovery experience as familiar as sign-in.
Account Recovery flow
- Password resets: This is the perfect moment to prompt users to create a passkey; after the frustration of a password reset, users are typically more receptive to the convenience and security passkeys offer.
Create a passkey for faster sign-in next time
How should you encourage the transition to passkeys?
Encouraging users to transition from passwords to passkeys requires a clear strategy. A few recommended best practices are as follows:
- Clear value proposition: Use simple, user-centric prompts to explain the benefits of passkeys, with messaging that highlights what users gain. Emphasize the following benefits:
- Improved security benefits, such as safety from phishing.
- No need to type out a password.
- Ability to use the same passkey across devices/platforms.
- A consistent authentication experience.
Passkey prompt with clear value proposition
- Provide a seamless user experience:
- Use the unified UI provided by Credential Manager to show all available sign-in options, allowing the user to choose their preferred method without having to remember which one they used last.
- Use the official passkey icon to build user familiarity and create a consistent experience.
- Make sure that users can fall back to their traditional sign-in methods or a recovery method, such as a username and password, if a passkey is not available or if they are using a different device.
- Provide users with clarity about credentials within your app's Settings UI: Make sure your users understand their authentications options by displaying helpful information about each passkey within your app's settings. To learn more about adding credentials metadata, see the Credential Manager documentation.
Passkey Metadata on App's Settings screen
- Educate users: Supplement the messaging to adopt passkeys with in-app educational resources or links that explain passkeys in detail.
- Progressive rollout: Consider a phased rollout to introduce passkeys to a subset of your user base to gather feedback and refine the user experience before a broader launch.
Developer Case Studies
Real-world developer experiences often highlight how small design choices, like when and where to surface a passkey prompt, can significantly influence adoption and user trust. To see this in action, let's explore how top apps have strategically surfaced passkey prompts at key moments in their apps to drive stronger adoption:
Uber
To accelerate passkeys adoption, Uber is proactively promoting passkeys in various user journeys, alongside marketing strategies.
Uber has shared: "90+% of passkey enrollments come from promoting passkey creation at key moments inside the app as compared to onboarding and authentication CUJs", underscoring the effectiveness of their proactive strategy.
Key learnings and strategies from their implementation:
- Offer passkeys without disrupting the core user experience: Uber added a new account checkup experience in their account settings to highlight passkey benefits, resulting in high passkey adoption rates.
User Account checkup flow
- Proactively bring passkeys to users: They learned not to wait for users to discover passkeys organically, because relying on organic adoption would have been slower despite observed benefits like faster sign-ins and increased login success rates for passkey users.
- Use additional mediums to promote passkeys: Uber is also experimenting to promote passkeys through email campaigns or banners on a user's account screen to highlight the new sign-in method, making their next sign-in easier and more secure.
- Respect your users' choice: Recognizing that not all users are ready for passkeys, Uber implemented backoff logic in critical flows such as the sign-in and sign-up screens and, in some contexts, offers passkeys alongside other familiar authentication methods.
Here's what Uber has to say:
At Uber, we've seen users who adopt passkeys enjoy a faster, more seamless, and more secure login experience. To help more users benefit from passkeys, we've added nudges to create a passkey at key moments in the user experience: account settings, signup, and login. These proactive outreaches have significantly accelerated our passkey adoption.
- Ryan O'Laughlin, Senior Software Engineer, Uber
Economic Times
Economic Times, part of the Times Internet ecosystem, used a seamless user experience as the primary motivation for users to transition to passkeys.
After introducing targeted nudges, Economic Times observed ~10% improvements in passkey creation completion rate within the initial rollout period.
Key learnings and strategies from their implementation:
- Strategic passkey generation prompts: Initially, Economic Times aggressively prompted passkey creation in multiple user flows, but observed that this approach disrupted business-critical journeys, such as subscription purchases or unlocking premium features, and led to abandoned carts.
- Refined approach: Economic Times made a deliberate decision to remove passkey generation prompts from sensitive flows (such as the subscription checkout flow) to prioritize immediate action completion.
- Targeted prompts: They strategically maintained passkey generation in areas where user intent to sign-in or manage authentication is high, such as initial sign-up flows, explicit sign in pages, or account management sections.
- Positive outcome: This refined deployment resulted in improved passkey generation numbers, indicating strong user adoption, without compromising user experience in critical business flows.
Passkeys Management Screen
Conclusion
Integrating passkeys with Android's Credential Manager isn't just about adopting new technology; it's about building a fundamentally more secure, convenient, and delightful experience for your users. By focusing on intelligent passkey introduction, you're not just securing accounts - you're building trust and future-proofing your application's authentication strategy.
To provide your users with the best, most seamless experience, follow the UX guidelines while implementing passkey authentication with Credential Manager. Check out the docs today!
04 Sep 2025 7:00pm GMT
#WeArePlay: Meet the people behind apps & games powering businesses around the world
Posted by Robbie McLachlan - Developer Marketing
In our latest #WeArePlay stories, we meet the founders building apps and games that power entrepreneurs and business owners around the world. From digitizing finances for micro-merchants in Bogotá to modernizing barbershops in New York, they are solving real-world challenges with intuitive apps.
Here are a few of our favorites:
Lluís and Man Hei's app Treinta saves microbusiness owners 30 minutes a day by digitizing their sales, inventory, and cash flow.
Bogotá, Colombia
After meeting at university, Lluís and Man Hei reunited to launch Treinta, an app inspired by Lluís' experience with small businesses across Latin America. Named after the Spanish word for '30,' it helps microbusiness owners save at least 30 minutes a day by digitizing tasks like inventory, sales tracking, and cash flow management. With a recent expansion into the U.S. and upcoming AI tools, they are on track to reach their goal of 100,000 premium members by 2026.
Ying, Yong, Steve, and Oswald's app Glints uses AI-powered recommendations to make hiring talent quick and easy.
Singapore, Singapore
High school friends Ying, Yong, Steve, and Oswald bonded over a shared vision to use technology for good, which led them to create Glints. What began as an internship portal in Singapore evolved into a dynamic hiring app after they saw an opportunity to tackle underemployment among young people in Indonesia. The app streamlines the job search with AI-powered recommendations and direct chat features, creating new career opportunities and helping companies find top talent. Their goal is to expand into more cities and become Indonesia's leading career app.
Dave and Songe's app SQUIRE helps barbers manage and grow their business with an all-in-one platform.
New York, USA
Former lawyer Songe and finance expert Dave saw an opportunity to modernize the cash-reliant barbershop industry. With no prior experience, they took over a struggling Manhattan barbershop while developing SQUIRE. The app, which initially focused on appointment scheduling and digital payments, has since evolved into a complete management platform with Point of Sale, inventory tracking, and analytics - helping barbers run their businesses more efficiently. Now, they're adding more customization options and plan to expand SQUIRE's capabilities to continue elevating the modern barbershop experience.
Discover other inspiring app and game founders featured in #WeArePlay.

04 Sep 2025 4:00pm GMT
03 Sep 2025
Android Developers Blog
The latest for devs from Made by Google, updates to Gemini in Android Studio, plus a new Androidify: our summer episode of The Android Show
Posted by Matthew McCullough - VP of Product Management, Android Developer
In this dynamic and complex ecosystem, our commitment is to your success. That's why in our summer episode of The Android Show, we're making it easier for you to build amazing apps by unpacking the latest tools and opportunities. In this episode, we'll cover how you can get building for Wear OS 6, boost your productivity with the latest Gemini in Android Studio updates, create for the new Pixel 10 Pro Fold, and even have some fun with the new AI-powered Androidify. (And for Android users, we also just dropped a bunch of new feature updates today; you can read more about those here). Let's dive in!
Get the most out of Agent Mode in Android Studio with MCP
We're focused on making you more productive by integrating AI directly into your workflow. Gemini in Android Studio is at the center of this, helping teams like Entri, which was able to reduce UI development time by 40%. You can now connect Model Context Protocol servers to Android Studio, which expands the tools, knowledge, and capabilities of the AI Agent. We also just launched the Android Studio Narwhal 3 feature drop, which brings more productivity boosters like Resizable Compose Preview and Play Policy Insights.
Build for every screen with Compose Adaptive Layouts 1.2 beta
The new Pixel 10 Pro Fold creates an incredible canvas for your app, and we're simplifying development so you can take full advantage of it. The Compose Adaptive Layouts 1.2 library, now officially in beta, makes it easier than ever to build for large screens and to embrace adaptive app development. We're focused on helping you build intuitive and engaging experiences for every screen, and this foundational library is packed with powerful tools to help you create sophisticated, adaptive UIs with less code. Build dynamic, multi-pane experiences using new layout strategies like Reflow and Levitate, and use the new Large and Extra-Large window size classes to make your app more intuitive and engaging than ever. Read more about these new tools here.
Bring your most expressive apps to the wrist with Wear OS 6
We want to help you build amazing experiences for the wrist, and the new Pixel Watch 4 with Wear OS 6 provides a powerful new stage for your apps. We're giving you the tools to make your apps more expressive and personal, with Material 3 Expressive to create stunning UIs. You can also engage users in new ways by building your own marketplace with the Watch Face Push API. All of this is built on a more reliable foundation, with watches updating to Wear OS 6 seeing up to 10% better battery life and faster app launches.
Androidify yourself, with a selfie + AI!
Our journey to reimagine Android with Gemini at its center extends to everything we do - including our mascot. That's why we rebuilt Androidify with AI at its core. With the new Androidify, available on the web or on Google Play, you can use a selfie or a prompt to create your own unique Android bot, powered by Gemini 2.5 Flash and Imagen. It's a fun example of how we're building better user experiences powered by AI.
Under the hood, we're using Gemini 2.5 Flash to validate the prompt and Imagen to create your Android bot. And on Fridays this month, you'll be able to animate your Android bot into an 8-second video; this feature is powered by Veo and available to a limited number of creations. You can read more about the technical building of the Androidify app here. Try it out for yourself - we can't wait to see your inner Android!
Watch the Summer episode of The Android Show
Thank you for tuning into this quarter's episode. We're excited to continue building great things together, and this show is an important part of our conversation with you. We'd love to hear your ideas for our next episode, so please reach out on X or LinkedIn. A special thanks to my co-hosts, Annyce Davis and John Zoeller, for helping us share the latest updates.
03 Sep 2025 6:08pm GMT
Entri cut UI development time by 40% with Gemini in Android Studio
Posted by Paris Hsu - Product Manager

Entri delivers online learning experiences across local languages to over 15 million people in India, empowering them to secure jobs and advance in their careers. To seize on the latest advancements in AI, the Entri team explored a variety of tools to help their developers create better experiences for users.
Their latest experiment? Adopting Gemini in Android Studio to help them move faster. Not only did Gemini speed up the team's work, trim tedious tasks, and foster ongoing learning, it also streamlined collaboration between design and development and became an enjoyable, go-to resource that boosted the team's overall productivity.
Turning screenshots to code-fast
To tighten build time, developers at Entri used Gemini in Android Studio to generate Compose UI code directly from mockups. By uploading screenshots of Figma designs, Gemini produced the UI structures they needed to build entire screens in minutes. Gemini played a key role in revamping the platform's Sign-Up flow, for example, fast-tracking a process that typically takes hours to just under 45 minutes.
By streamlining the creation of Compose UIs, often from just a screenshot and a few prompts, Gemini also made it significantly easier to quickly prototype new ideas and create MVPs. This allowed their team to test concepts and validate business needs without getting bogged down by repetitive UI tweaks up front.
Entri developers found that the ability to generate code by attaching images in Gemini in Android Studio drastically reduced boilerplate work and improved alignment between design and engineering. Over time, this approach became a standard part of their prototyping process, with the team reporting a 40% reduction in average UI build time per screen.

Faster experimentation to create a better app experience
The Entri team has a strong culture of experimentation, and often has multiple user-facing experiments running at once. The team found Gemini in Android Studio particularly valuable in speeding up their experimentation processes. The tool quickly produced code for A/B testing, including UI changes and feature toggles, allowing the team to conduct experiments faster and iterate in more informed ways. It also made it faster for them to get user feedback and apply it. By simplifying the early build phase and allowing for sharper testing, Gemini boosted their speed and confidence, freeing them up to create more, test faster, and refine smarter.
When it came to launching new AI learning features, Entri wanted to be first to market. With Gemini in Android Studio's help, the Entri team rolled out their AI Teaching Assistant and Interview Coach to production much faster than they normally could. "What used to take weeks, now takes days," said Jackson. "And what used to take hours, now takes minutes."

Tool integration reduces context switching
Gemini in Android Studio has changed the game for Entri's developers, removing the need to break focus to switch between tools or hunt through external documentation. Now the team receives instant answers to common questions about Android APIs and Kotlin syntax without leaving the application.
For debugging crashes, Gemini was especially useful when paired with App Quality Insights in Android Studio. By sharing stack traces directly with Gemini, developers received targeted suggestions for possible root causes and quick fixes directly in the IDE. This guidance allowed them to resolve crashes reported by Firebase and Google Play more efficiently and with less context switching. Gemini surfaced overlooked edge cases and offered alternative solutions to improve app stability, too.

Shifting focus from routine tasks to innovation
Entri developers also tested the efficiency of Gemini in Android Studio on personal projects, leaning on the tool to create a weather tracker, a password manager, and a POS billing system-all on top of their core project work at Entri-and experimenting with different use cases along the way.
By offloading repetitive tasks and expediting initial UI and screen generation, Gemini has allowed developers to focus more on innovation, exploration, and creativity-things that often get sidelined when dealing with routine coding work. Now the team is able to spend their time refining final products, designing smarter UX, and strategizing, making their day-to-day work more efficient, collaborative, and motivating.
Get started
Ramp up your development processes with Gemini in Android Studio.
03 Sep 2025 6:07pm GMT
How Dashlane Brought Credential Manager to Wear OS with Only 78 New Lines of Code
Posted by John Zoeller - Developer Relations Engineer, Loyrn Hairston - Product Marketing Manager, and Jonathan Salamon - Dashlane Staff Software Engineer
Dashlane is a password management and provisioning tool that provides a secure way to manage user credentials, access control, and authentication across multiple systems and applications.
Dashlane has over 18 million users and 20,000 businesses in 180 countries. It's available on Android, Wear OS, iOS, macOS, Windows, and as a web app with an extension for Chrome, Firefox, Edge, and Safari.
Recently, they expanded their offerings by creating a Wear OS app with a Credential Provider integration from the Credential Manager API, bringing passkeys to their clients and users on smartwatches.
Streamlining Authentication on Wear OS
Dashlane users have frequently requested a Wear OS solution that provides standalone authentication for their favorite apps. In the past, Wear OS lacked the key APIs necessary for this request, which kept Dashlane from being able to provide the functionality. In their words:
"Our biggest challenge was the lack of a standard credentials API on Wear OS, which meant that it was impossible to bring our core features to this platform."
This has changed with the introduction of the new Credential Manager API on Wear OS.
Credential Manager provides a simplified, standardized user sign-in experience with built-in authentication options for passkeys, passwords, and federated identities like Sign in with Google. Conveniently, it can be implemented with minimal effort by reusing the same code as the mobile version.
The Dashlane team was thrilled to learn about this, as it meant they could save a lot of time and effort: "[The] CredentialManager API provides the same API on phones and Wear OS; you write the code only once to support multiple form factors."

After Dashlane had planned out their roadmap, they were able to execute their vision for the new app with only a small engineering investment, reusing 92% of the Credential Manager code from their mobile app. And because the developers built Dashlane's app UI with Jetpack Compose for Wear OS, 60% of their UI code was also reused.

Developing for Wear OS
To provide credentials to other apps with Credential Manager, Dashlane needed to implement the Credential Provider interface on Wear OS. This proved to be a simple exercise in calling their existing mobile code, where Dashlane had already implemented behavior for credential querying and credential selection.
For example, Dashlane was able to reuse their logic to handle client invocations of CredentialManager.getCredential. When a client invokes this, the Android framework propagates the client's getCredentialRequest to Dashlane's CredentialProviderService.onBeginGetCredentialRequest implementation to retrieve the credentials specified in the request.
Dashlane delegates the logic for onBeginGetCredentialRequest to their handleGetCredentials function, below, which is shared between their mobile and Wear OS implementations.
// When a Credential Manager client calls 'getCredential', the Android
// framework invokes `onBeginGetCredentialRequest`. Dashlane
// implemented this `handleGetCredentials` function to handle some of
// the logic needed for `onBeginGetCredentialRequest`
override fun handleGetCredentials(
    context: Context,
    request: BeginGetCredentialRequest
): List<CredentialEntry> =
    request.beginGetCredentialOptions.flatMap { option ->
        when (option) {
            // Handle passkey credentials
            is BeginGetPublicKeyCredentialOption -> {
                val passkeyRequestOptions = Gson().fromJson(
                    option.requestJson,
                    PasskeyRequestOptions::class.java
                )
                credentialLoader.loadPasskeyCredentials(
                    passkeyRequestOptions.rpId,
                    passkeyRequestOptions.allowCredentials ?: listOf()
                ).map { passkey ->
                    val passkeyDisplayName = getSuggestionTitle(passkey, context)
                    PublicKeyCredentialEntry.Builder(
                        context,
                        passkeyDisplayName,
                        pendingIntentForGet(context, passkey.id),
                        option
                    )
                        .setLastUsedTime(passkey.locallyViewedDate)
                        .setIcon(buildMicroLogomarkIcon(context = context))
                        .setDisplayName(passkeyDisplayName)
                        .build()
                }
            }
            // Handle other credential types
            else -> emptyList()
        }
    }
Reusing precise logic flows like this made it a breeze for Dashlane to implement their Wear OS app.
"The Credential Manager API is unified across phones and Wear OS, which was a huge advantage. It meant we only had to write our code once."
Impact and Improved Growth
The team is excited to be among the first credential providers on wearables: "Being one of the first on Wear OS was a key differentiator for us. It reinforces our brand as an innovator, focusing on the user experience, better meeting and serving our users where they are."
As an early adopter of this new technology, Dashlane has already seen early promise from its Wear OS app, as described by Dashlane software engineer Sebastien Eggenspieler: "In the first 3 months, our Wear OS app organically grew to represent 1% of our active device install base."
With their new experience launched, Wear OS apps can now rely on Dashlane as a trusted credential provider for their own Credential Manager integrations, using Dashlane to allow users to log in with a single tap; and users can view details about their credentials right from their wrist.

Dashlane's Recommendations to Wear OS Developers
With their implementation complete, the Dashlane team can offer some advice for other developers who are considering the Credential Manager API. Their message is clear: "the future is passwordless… and passkeys are leading the way, [so] provide a passkey option."
Dashlane is a true innovator in its field and the preferred credential provider for many users, and we are thrilled to have their support for Credential Manager. They truly inspired us with their commitment to providing Wear OS users with the best experience possible:
"We hope that in the future every app developer will migrate their existing users to the Credential Manager API."
Get Started with Credential Manager
With its elegant simplicity and built-in secure authentication methods, the Credential Manager API provides a streamlined authentication experience for users that changes the game on Wear OS.
Want to find out more about how Dashlane is driving the future of end-user authentication? Check out our video blog with their team in Paris, and read about how they saw a 70% increase in sign-in conversion rates with passkeys.
To learn more about how you can implement Credential Manager, read our official developer and UX guides, and be sure to check out our brand new blog post and video blog as part of Wear OS Spotlight week!
We've also expanded our existing Credential Manager sample to support Wear OS, to help guide you along the way, and if you'd like to provide credentials like Dashlane, you can use our Credential Provider sample.
Finally, explore how you can start developing additional experiences for Wear OS today with our documentation and samples.
03 Sep 2025 6:06pm GMT
Unfold new possibilities with Compose Adaptive Layouts 1.2 beta
Posted by Fahd Imtiaz - Senior Product Manager and Miguel Montemayor - Developer Relations Engineer
With new form factors like the Pixel 10 Pro Fold joining the Android ecosystem, adaptive app development is essential for creating high-quality user experiences across phones, tablets, and foldables. Users expect your app's UI to seamlessly adapt to these different sizes and postures.
To help you build these dynamic experiences more efficiently, we are announcing that the Compose Adaptive Layouts Library 1.2 is officially entering beta. This release provides powerful new tools to create polished, responsive UIs for this expanding device ecosystem.
Powerful new tools for a bigger canvas
The Compose Adaptive Layouts library is our foundational toolkit for building UIs that adapt across different window sizes. This new beta release is packed with powerful features to help you create sophisticated layouts with less code. Key additions include:
- Powerful new layout strategies: The beta introduces new layout strategies like reflow and levitate, designed to help you build dynamic layouts that look great on both the outer and inner displays of a device like the Pixel 10 Pro Fold, Galaxy Z Fold7 and Z Flip7.
- New Window Size Classes: The release adds built-in support for the new Large and Extra-Large window size classes. These new breakpoints are essential for designing and triggering rich, multi-pane UI changes on expansive screens like tablets and large foldables.
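To illustrate, branching on the new breakpoints might look something like this sketch, which assumes androidx.window's WindowSizeClass constants and the material3-adaptive currentWindowAdaptiveInfo() helper; the pane composables are placeholders, not library APIs:
@Composable
fun HomeScreen() {
    val windowSizeClass = currentWindowAdaptiveInfo().windowSizeClass
    when {
        // Extra-Large windows (1600dp+ wide): room for three panes
        windowSizeClass.isWidthAtLeastBreakpoint(
            WindowSizeClass.WIDTH_DP_EXTRA_LARGE_LOWER_BOUND) -> ThreePaneLayout()
        // Large windows (1200dp+ wide): two panes side by side
        windowSizeClass.isWidthAtLeastBreakpoint(
            WindowSizeClass.WIDTH_DP_LARGE_LOWER_BOUND) -> TwoPaneLayout()
        // Smaller windows: fall back to a single pane
        else -> SinglePaneLayout()
    }
}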


For a full list of changes, check out the official release documentation. Explore our guides on canonical layouts and building a supporting pane layout.
Engage more users on every screen
Embracing an adaptive mindset is more than a best practice; it's a strategy for growth. The goal isn't just to make your app work on a larger screen, but to make it shine by becoming more intuitive for users. Instead of simply stretching a single-column layout, think about how you can use the extra space to create more efficient and immersive experiences.

This is the core principle behind dynamic layout strategies like reflow, a powerful new feature in the Compose Adaptive Layouts 1.2 beta designed to help you build these UIs. For example, a great starting point is adopting a multi-pane layout. By showing a list and its corresponding detail view side-by-side, you reduce taps and allow users to accomplish tasks more quickly.
This kind of thoughtful adaptive development is what truly boosts engagement. And, as we highlighted during the latest episode of #TheAndroidShow, this is why we see that users who use an app on both their phone and a larger screen are almost three times more engaged. Building adaptively doesn't just make your current users happier; it creates a more valuable and compelling experience that builds lasting loyalty and helps you reach new users.
The expanding Android ecosystem, from foldables to desktops
This shift toward adaptive design extends across the entire Android ecosystem. From the new Pixel 10 Pro Fold to the latest Samsung Galaxy foldables, developers have the opportunity to engage a large and growing user base on over 500 million large-screen devices.

This is also why we're continuing to invest in forward-looking experiences like Connected Displays, currently available to try in developer preview. This feature opens up new surfaces and interaction models for apps to run on, enabling true desktop-class features and multi-instance workflows. We've previously shared details on how you can get started with the Connected Displays developer preview and see how it's shaping the future of multi-device experiences.
Putting adaptive principles into practice
For developers who want to get their apps ready for this adaptive future, here are a few key best practices to keep in mind:
- Take inventory: The first step is to see where you are today. Test your app on a large screen device or with the resizable emulator in Android Studio to identify areas for improvement, like stretched UIs or usability issues.
- Support optimized layouts: Use libraries like Compose Adaptive Layouts to build UI that adapts to different window sizes and device postures. Your app should work well in both portrait and landscape, without restricting orientation.
- Think beyond touch: A great adaptive experience means supporting all input methods. This goes beyond basic functionality to include thoughtful details that users expect, like hover states for mouse cursors, context menus on right-click, and support for keyboard shortcuts.
Your app's potential is no longer confined to a single screen. Explore the large screen design gallery and app quality guidelines today to envision where your app can go. Get inspired and find design patterns, official guidance, and sample apps you need to build for every fold, flip, and screen at developer.android.com/adaptive-apps.
03 Sep 2025 6:05pm GMT
Androidify: Building AI-first Android Experiences with Gemini using Jetpack Compose and Firebase
Posted by Rebecca Franks - Developer Relations Engineer, Tracy Agyemang - Product Marketer, and Avneet Singh - Product Manager
Androidify is our new app that lets you build your very own Android bot, using a selfie and AI. We walked you through some of the components earlier this year, and starting today it's available on the web or as an app on Google Play. In the new Androidify, you can upload a selfie or write a prompt of what you're looking for, add some accessories, and watch as AI builds your unique bot. Once you've had a chance to try it, come back here to learn more about the AI APIs and Android tools we used to create the app. Let's dive in!
Key technical integrations
The Androidify app combines powerful technologies to deliver a seamless and engaging user experience. Here's a breakdown of the core components and their roles:
AI with Gemini and Firebase
Androidify leverages the Firebase AI Logic SDK to access Google's powerful Gemini and Imagen* models. This is crucial for several key features:
- Image validation: The app first uses Gemini 2.5 Flash to validate the user's photo. This includes checking that the image contains a clear, focused person and meets safety standards before any further processing. This is a critical first step to ensure high-quality and safe outputs.
- Image captioning: Once validated, the model generates a detailed caption of the user's image. This is done using structured output, which means the model returns a specific JSON format, making it easier for the app to parse the information. This detailed description helps create a more accurate and creative final result.
- Android Bot Generation: The generated caption is then used to enrich the prompt for the final image generation. A specifically fine-tuned version of the Imagen 3 model is then called to generate the custom Android bot avatar based on the enriched prompt. This custom fine-tuning ensures the results are unique and align with the app's playful and stylized aesthetic.
The Androidify app also has a "Help me write" feature which uses Gemini 2.5 Flash to create a random description for a bot's clothing and hairstyle, adding a bit of a fun "I'm feeling lucky" element.
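As an illustration of the structured-output approach, a captioning call through the Firebase AI Logic SDK could look roughly like the sketch below; the schema, prompt text, and describeImage name are ours for illustration, not Androidify's actual code:
suspend fun describeImage(photo: Bitmap): String {
    val model = Firebase.ai(backend = GenerativeBackend.googleAI()).generativeModel(
        modelName = "gemini-2.5-flash",
        generationConfig = generationConfig {
            // Ask the model for JSON matching a fixed schema, so the
            // app can parse the caption deterministically
            responseMimeType = "application/json"
            responseSchema = Schema.obj(mapOf("caption" to Schema.string()))
        },
    )
    val response = model.generateContent(
        content {
            text("Describe the person in this image in detail.")
            image(photo)
        }
    )
    // The response text is the JSON document described by the schema
    return response.text ?: throw IllegalStateException("Empty model response")
}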

UI with Jetpack Compose and CameraX
The app's user interface is built entirely with Jetpack Compose, enabling a declarative and responsive design across form factors. The app uses the latest Material 3 Expressive design, which provides delightful and engaging UI elements like new shapes, motion schemes, and custom animations.
For camera functionality, CameraX is used in conjunction with the ML Kit Pose Detection API. This intelligent integration allows the app to automatically detect when a person is in the camera's view, enabling the capture button and adding visual guides for the user. It also makes the app's camera features responsive to different device types, including foldables in tabletop mode.
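As a rough sketch of how such a gate can be wired up, assuming a CameraX ImageAnalysis analyzer feeding ML Kit's streaming pose detector (the callback and names here are illustrative, not Androidify's actual code):
val poseDetector = PoseDetection.getClient(
    PoseDetectorOptions.Builder()
        .setDetectorMode(PoseDetectorOptions.STREAM_MODE)
        .build()
)

fun analyze(imageProxy: ImageProxy, onPersonInFrame: (Boolean) -> Unit) {
    // Convert the CameraX frame and run streaming pose detection on it
    val input = InputImage.fromBitmap(
        imageProxy.toBitmap(), imageProxy.imageInfo.rotationDegrees)
    poseDetector.process(input)
        .addOnSuccessListener { pose ->
            // Enable the capture button only when a person's landmarks are visible
            onPersonInFrame(pose.allPoseLandmarks.isNotEmpty())
        }
        .addOnCompleteListener { imageProxy.close() }
}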
Androidify also makes extensive use of the latest Compose features, such as:
- Adaptive layouts: It's designed to look great on various screen sizes, from phones to foldables and tablets, by leveraging WindowSizeClass and reusable composables.
- Shared element transitions: The app uses the new Jetpack Navigation 3 library to create smooth and delightful screen transitions, including morphing shape animations that add a polished feel to the user experience.
- Auto-sizing text: With Compose 1.8, the app uses a new parameter that automatically adjusts font size to fit the container's available size, which is used for the app's main "Customize your own Android Bot" text.
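For instance, auto-sizing a headline with the new parameter might look like the following sketch, assuming foundation's BasicText autoSize support in Compose 1.8 (the size bounds are arbitrary):
BasicText(
    text = "Customize your own Android Bot",
    maxLines = 1,
    // Compose picks the largest font size in this range that still fits
    autoSize = TextAutoSize.StepBased(
        minFontSize = 16.sp,
        maxFontSize = 48.sp,
        stepSize = 2.sp,
    ),
)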

Latest updates
In the latest version of Androidify, we've added some powerful new AI-driven features.
Background vibe generation with Gemini Image editing
Using the latest Gemini 2.5 Flash Image model, we combine the Android bot with a preset background "vibe" to bring the Android bots to life.

This is achieved by using Firebase AI Logic - passing a prompt for the background vibe, along with the input image bitmap of the bot, and instructions to Gemini on how to combine the two.
suspend fun generateImageWithEdit(
    image: Bitmap,
    backgroundPrompt: String = "Add the input image android bot as the main subject to the result... with the background that has the following vibe...",
): Bitmap {
    val model = Firebase.ai(backend = GenerativeBackend.googleAI()).generativeModel(
        modelName = "gemini-2.5-flash-image-preview",
        generationConfig = generationConfig {
            responseModalities = listOf(
                ResponseModality.TEXT,
                ResponseModality.IMAGE,
            )
        },
    )
    // Combine the backgroundPrompt with the input image (the Android bot)
    // to produce the new bot with a background
    val prompt = content {
        text(backgroundPrompt)
        image(image)
    }
    val response = model.generateContent(prompt)
    val generatedImage = response.candidates.firstOrNull()
        ?.content?.parts?.firstNotNullOfOrNull { it.asImageOrNull() }
    return generatedImage
        ?: throw IllegalStateException("Could not extract image from model response")
}
Sticker mode with ML Kit Subject Segmentation
The app also includes a "Sticker mode" option, which integrates the ML Kit Subject Segmentation library to remove the background on the bot. You can use "Sticker mode" in apps that support stickers.

The code for the sticker implementation first checks whether the Subject Segmentation model has been downloaded and installed; if it has not, the app requests the download and waits for it to complete. Once the model is installed, the app passes the original Android bot image into the segmenter and calls process on it to remove the background. The resulting foregroundBitmap object is then returned for exporting.
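A minimal sketch of that flow, assuming ML Kit's Subject Segmentation API and the Play services ModuleInstall client (the removeBackground function and its wiring are illustrative, not Androidify's actual code):
suspend fun removeBackground(context: Context, source: Bitmap): Bitmap {
    val segmenter = SubjectSegmentation.getClient(
        SubjectSegmenterOptions.Builder()
            .enableForegroundBitmap() // ask ML Kit for the cut-out subject
            .build()
    )
    // Check that the segmentation model is installed; download it if not
    // (Task.await() here comes from kotlinx-coroutines-play-services)
    val moduleInstallClient = ModuleInstall.getClient(context)
    if (!moduleInstallClient.areModulesAvailable(segmenter).await().areModulesAvailable()) {
        moduleInstallClient.installModules(
            ModuleInstallRequest.newBuilder().addApi(segmenter).build()
        ).await()
    }
    // Run segmentation on the bot image and keep only the foreground
    val result = segmenter.process(InputImage.fromBitmap(source, 0)).await()
    return result.foregroundBitmap
        ?: throw IllegalStateException("Segmentation returned no foreground")
}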
See the LocalSegmentationDataSource for the full source implementation
Learn more
To learn more about Androidify behind the scenes, take a look at the new solutions walkthrough, inspect the code or try out the experience for yourself at androidify.com or download the app on Google Play.

*Check responses. Compatibility and availability varies. 18+.
03 Sep 2025 6:04pm GMT
Android Studio Narwhal 3 Feature Drop: Resizable Compose Preview, monthly releases and smarter AI
Posted by Paris Hsu - Product Manager, Android Studio
Welcome to the Android Studio Narwhal Feature Drop 3 release. This update delivers significant improvements across the board to enhance your productivity. While we continue to innovate with powerful, project-aware AI assistance in Gemini, this release also brings fundamental upgrades to core development workflows. Highlights include a resizable Compose Preview for faster UI iteration and robust app Backup & Restore tools to ensure smooth app transfers across devices for your users. These additions, alongside a more context-aware Gemini, aim to streamline every phase of your development process.
These features are delivered as part of our new monthly release cadence for Android Studio, which allows us to provide improvements more frequently. Learn more about this change and how we're accelerating development with monthly releases for Android Studio.
Develop with AI 🚀
Since launching Gemini in Android Studio, we've been working hard to introduce features and integrations across Studio with the needs of Android developers in mind. Developers have been telling us about the productivity benefits AI brings to their workflow - such as Entri, who reduced their UI development time per screen by 40%.
With this release, we've enhanced how you interact with Gemini - with improved options for providing project context, file attachments, and support for image attachments.
AGENTS.md: providing project-level context to Gemini
AGENTS.md is a Markdown file that lets you provide project-specific instructions, coding style rules, and other guidance that Gemini automatically uses for context. The AGENTS.md file can be checked into your version control system (like Git), ensuring your entire team shares the same core instructions and receives consistent, context-aware AI assistance. AGENTS.md files are located right alongside your code; use multiple AGENTS.md files across different directories for more granular control over your codebase.


We're making it much easier to provide rich, on-the-fly context. That's why we are also excited to share that two powerful features, Image Attachment and the @File Context, are graduating from Studio Labs and are now stable:
Image attachment - Gemini in Android Studio
The ability to attach images to your queries with Gemini is now available in the stable channel! This feature accelerates UI development and improves architectural understanding. You can:
- Generate UI from a mock-up: Provide a design image and ask Gemini to generate the Compose code.
- Understand an existing screen: Upload a screenshot and ask Gemini to explain the UI's component structure and data flow.
- Debug UI bugs: Take a screenshot of a bug, circle the issue, and ask Gemini for solutions.

@file attachment - Gemini in Android Studio
The File attachment and context drawer are also graduating from Studio Labs! Easily attach relevant project files to your prompts by typing @ in the chat window. Gemini can then use the full context of those files to provide more accurate and relevant answers. Gemini will also suggest files it thinks are relevant, which you can easily add or remove.

What's next: Deeper integration with MCP support
Looking ahead, in our summer episode of #TheAndroidShow, we went behind the scenes with Android Studio's new MCP (Model Context Protocol) support. This protocol enhances Gemini's interoperability with the broader developer ecosystem, allowing it to connect to tools like GitHub. Learn how MCP support can make Gemini's Agent Mode even more helpful for your workflow, and try it today in the Canary channel.
Optimize and refine ✨
This release includes several new features to help you optimize your app, improve project organization, and ensure compliance.
Test app backup and restore
With new Android hardware devices coming out, ensuring a smooth app transfer experience for your users switching to a new device is critical. Android Studio now provides tools to generate a backup of your app's data and restore it to another device. This makes it much easier to test your app's backup and restore functionality and protect users from data loss. Additionally, you can create and attach backups to your run configurations, making it easy to utilize Backup and Restore for your day-to-day development.

Play policy insights
Get early warnings about potential Play policy violations to help you build more compliant apps with Play Policy Insights, now in Android Studio. The IDE shows lint warnings directly in your code when it relates to a Google Play policy requirement. You can also integrate these lint checks into your CI/CD pipelines. These insights provide an overview of the policy, dos and don'ts, and links to more resources, helping you address potential issues early in your development cycle.

Proguard inspections for overly broad keep rules
Android Studio's Proguard file editor now warns you about keep rules that are overly broad. These rules can limit R8's ability to optimize your code, potentially impacting app size and performance. This inspection helps you write more precise rules for a more optimized app.

Improved Android view for multi-module projects
For those working on large projects, the Android view has a new setting to display build files directly under their corresponding modules. This change makes it easier to navigate and manage build scripts in projects with many modules.

More control over automatic project sync
For developers working on large projects, automatic Gradle syncs can sometimes interrupt your workflow. To give you more control, we're introducing an option to switch to manual project sync with reminders. When enabled, Android Studio will inform you when a sync is needed, but lets you decide when to run it, so there aren't unexpected interruptions. You can try this feature by navigating to Settings > Build, Execution, Deployment > Build Tools.


Faster UI iteration 🎨
Resizable compose preview
Building responsive UIs just got easier: Compose Preview now supports dynamic resizing, giving you instant visual feedback on how your UI adapts to different screen sizes. Simply enter Focus mode in the Compose Preview and drag the edges to see your layout change in real-time. You can even save a specific size as a new @Preview annotation with a single click, streamlining your multi-device development process.

Summary
To recap, Android Studio Narwhal Feature Drop 3 includes the following enhancements and features:
Develop with AI
- AGENTS.md support: Provide project-specific context to Gemini for more tailored responses.
- Image attachment (Stable): Easily attach image files for Gemini in Android Studio.
- @File attachment (Stable): Easily attach project files as context for Gemini in Android Studio.
Optimize and refine
- Backup and restore support: Easily test your app's data backup and restoration flow.
- Play policy insights: Get early warnings about potential Play Policy violations.
- Proguard inspections: Identify and fix overly broad keep rules for better optimization.
- Display build files under module: Improve project navigation in the Android view.
- Manual project sync: Gain more control over when Gradle syncs occur in large projects.
Faster UI iteration
- Resizable compose preview: Dynamically resize your previews to test responsive UIs instantly.
Get started
Ready to accelerate your development? Download Android Studio Narwhal 3 Feature Drop from the stable channel today!
Your feedback is essential. Please continue to share your thoughts by reporting bugs or suggesting features. For early access to the latest features, download Android Studio from the Canary channel.
Join our vibrant Android developer community on LinkedIn, Medium, YouTube, or X. We can't wait to see what you build!
03 Sep 2025 6:03pm GMT
28 Aug 2025
Android Developers Blog
Tune in on September 3: recapping the latest from Made by Google and more in our summer episode of The Android Show
Posted by Christopher Katsaros - Senior Product Marketing Manager
In just a few days, on Wednesday September 3 at 11AM PT, we'll be dropping our summer episode of #TheAndroidShow, on YouTube and on developer.android.com! In this quarterly show, we'll be unpacking all of the goodies coming out of this month's Made by Google event and what you as Android developers need to know!
With the new Pixel Watch 4 running Wear OS 6, we'll show you how to get building for the wrist. And with the latest foldable from Google, the Pixel 10 Pro Fold, we'll show how you can leverage out of the box APIs and multi-window experiences to make your apps adaptive for this new form factor. Plus, we'll be unpacking a set of new features for Gemini in Android Studio to help you be even more productive.
#TheAndroidShow is your conversation with the Android developer community, this time hosted by Annyce Davis and John Zoeller. You'll hear the latest from the developers and engineers who build Android. Don't forget to tune in live on September 3 at 10AM PT, live on YouTube and on developer.android.com/events/show!
28 Aug 2025 6:30pm GMT
The evolution of Wear OS authentication
Posted by John Zoeller - Developer Relations Engineer
This post is part of Wear OS Spotlight Week. Today, we're focusing on implementing Credential Manager on Wear OS, aiming to streamline the authentication experience.
For all software developers, crafting a fast and secure authentication flow is paramount, and this is equally important on Wear OS.
The traditional Wear OS methods require users to have their phone nearby to complete authentication, often with a separate mobile flow or 2-factor auth code.
Credential Manager's arrival simplifies this process, allowing for authentication directly from a user's watch with no need for a nearby phone.
As a unified API, Credential Manager enables you to reuse your mobile app's code on Wear OS, streamlining development across form factors. With a single tap, users can authenticate with passwords, federated identities like Sign in with Google, or passkeys, the new industry standard for security.

The power of passkeys
Passkeys are built on the principle of asymmetric encryption. During creation, a system authenticator generates a unique, mathematically linked pair of keys: a public key that is securely stored online with the service, and a private key that remains exclusively on the user's device.
When signing in, the device uses the private key to cryptographically prove to the service that it possesses the key.
This process is highly secure because the private key never leaves the device during authorization (only during syncs from credential providers) and can only be used with the user's explicit permission. This makes passkeys resistant to server breaches, as a breach could only ever expose the public half of the key pair. Additionally, since there is no passphrase to steal, passkeys are virtually phishing-proof.
The user experience of passkeys is seamless: to log in, a user confirms their presence with their device's lock (e.g., biometric credential or PIN), and they are signed in. This eliminates the need to remember complex passphrases and provides a faster, more secure method of authentication that works seamlessly across devices.
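To make that asymmetry concrete, here is a toy sign-and-verify example in plain java.security code; this is only an illustration of the underlying idea, not how passkeys are implemented (real passkeys use the WebAuthn protocol through a platform authenticator):
fun main() {
    // Authenticator: generate a key pair; the private key never leaves the device
    val keyPair = java.security.KeyPairGenerator.getInstance("EC")
        .apply { initialize(256) }
        .generateKeyPair()

    // Service: issues a random challenge for this sign-in attempt
    val challenge = "server-challenge-123".toByteArray()

    // Device: proves possession by signing the challenge with the private key
    val signed = java.security.Signature.getInstance("SHA256withECDSA").run {
        initSign(keyPair.private)
        update(challenge)
        sign()
    }

    // Service: verifies with the stored public key; no secret was ever transmitted
    val valid = java.security.Signature.getInstance("SHA256withECDSA").run {
        initVerify(keyPair.public)
        update(challenge)
        verify(signed)
    }
    println("Signature valid: $valid") // prints true
}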

Designing authentication with Credential Manager
Credential Manager should be the base of a Wear app's authentication flow. Developers should decide which of its built-in methods to implement based on what their mobile experiences already support and on the variety of authentication methods their users need.
Passkeys are the preferred built-in solution due to their inherent security and simplicity, but the other built-in options Credential Manager provides can also be implemented. Passwords are valuable because of their familiarity to users, and federated identities like Sign in with Google provide users with the comfort of a trusted provider.

Developers should maintain at least one of their existing authentication options as a backup as they transition their users to Credential Manager. If Credential Manager is dismissed by a user, or if all of its methods fail, or if credentials are not available, developers can present their backup options.
The Wear Authentication developer guide includes details on supported Wear OS backup authentication options. These include solutions like OAuth 2.0, which has traditionally been a popular choice on Wear OS; and data layer token sharing, which can be used to automatically authenticate users at app launch time if their phone is nearby to sync a signed in account.
Read the full Wear sign-in design guidance to learn about all the best practices for designing your authentication flow, including our special guidance around data layer token sharing.

Implementing Credential Manager on Wear OS
Basic GetCredential setup
At its core, Credential Manager consolidates multiple authentication methods into a single, unified API call: getCredential. By configuring a GetCredentialRequest with your authentication options, you can use the response to validate a user's identity with your app's server that contains the credentials, like so:
val request = GetCredentialRequest(getCredentialOptions())
val getCredentialResponse = credentialManager.getCredential(activity, request)
login(getCredentialResponse.credential)
Sync Credentials with Digital Asset Links
For a truly seamless experience, a user's credentials must sync effortlessly from their other devices to their watch, since it is currently not possible to create credentials on Wear OS.
To enable this, you must add an entry for Wear OS in your Digital Asset Links to associate your Wear OS app with other versions of your app. Be sure to precisely fill out the asset link entry, including your app's applicationId and the SHA-256 cryptographic hash from your application's digital signature. You can test them out with our app link verification guide.
Furnishing getCredential with built-in credentials
To allow users to sign in with Credential Manager, provide getCredential with options for the three built-in authentication types: passkeys, passwords, and federated identities like Sign in With Google.
// Adding options is part of creating the credential request
GetCredentialRequest(getCredentialOptions())

// Furnish list of CredentialOptions for the request
suspend fun getCredentialOptions(): List<CredentialOption> {
    return listOf(
        // Passkey: Furnish a GetPublicKeyCredentialOption with public key
        // data from your authentication server
        GetPublicKeyCredentialOption(authServer.getPublicKeyRequestOptions()),

        // Password: Add the provided GetPasswordOption type in your list
        GetPasswordOption(),

        // Federated Identity: Add your desired option type (GetGoogleIdOption, below)
        // to orchestrate a token exchange with the federated identity server.
        GetGoogleIdOption.Builder().setServerClientId(SERVER_CLIENT_ID).build(),
    )
}
When getCredential is called, Credential Manager will use the options developers provide to present users with a UI to choose how they want to log in.

Handling built-in Credential types
After a user selects their desired credential in the Credential Manager UI, use the result of getCredential (which contains the selected credential) to route to your authentication handlers.
// getCredential returns the selected credential
login(getCredentialResponse.credential)

// Route to your credential handling functions to login
suspend fun login(credential: Credential): LoginResult {
    when (credential) {
        is PublicKeyCredential -> {
            return authHandler.loginWithPasskey(credential.authenticationResponseJson)
        }
        is PasswordCredential -> {
            return authHandler.loginWithPassword(credential.id, credential.password)
        }
        is CustomCredential -> {
            return authHandler.loginWithCustomCredential(
                credential.type, credential.data)
        }
        // 'else' case, etc…
    }
}
The handling logic for each of the loginWith* methods above is slightly different, although they all set up network calls to dedicated authentication endpoints. Below are simplified versions of these methods, demonstrating network calls that authenticate users based on their selected method.
Passkeys require the signed passkey JSON data. Your server will use this data to cryptographically verify the user.
suspend fun loginWithPasskey(passkeyResponseJSON: String): LoginResult {
    val validatedPasskey = httpClient.post(
        "myendpoint/passkey", passkeyResponseJSON, /*other args*/)
    return LoginResult(validatedPasskey)
}
Passwords require network logic to validate the username and password; our example uses subsequent calls, validating the username first. Your backend will validate these against its user database.
suspend fun loginWithPassword(userName: String, password: String): LoginResult {
    val validatedUserName = httpClient.post(
        "myendpoint/username", userName, /*other args*/)
    val validatedPassword = httpClient.post(
        "myendpoint/password", password, validatedUserName, /*other args*/)
    return LoginResult(validatedPassword)
}
Federated identities like Sign in with Google require a secure connection between your server and your app. Our sample shows a challenge-response flow initiated from the server, but a client-generated nonce works as well.
Our sample server provides a challenge to our app on request (federatedSessionId, below), which is subsequently used to validate the federated token and authenticate the user.
suspend fun loginWithCustomCredential(type: String, data: Bundle): LoginResult {
    require(type == GoogleIdTokenCredential.TYPE_GOOGLE_ID_TOKEN_CREDENTIAL) {
        "Unexpected credential type: $type"
    }
    val token = GoogleIdTokenCredential.createFrom(data).idToken

    // Establish a federated session with your server and obtain its info
    val federatedSessionId = httpClient.post(
        "myendpoint/ObtainFederatedSession",
        /*federated backend address=*/"https://accounts.google.com")

    // Validate the token with the established federated session.
    val validatedCustomCredential = httpClient.post(
        "myendpoint/verifyToken", token, federatedSessionId,
        /*federated backend address=*/"https://accounts.google.com")

    return LoginResult(validatedCustomCredential)
}
Handling secondary Credential types
If a user taps dismiss or swipes back from Credential Manager, a GetCredentialCancellationException is thrown, which developers can use to navigate to their backup login screens and present secondary authentication options. These options are detailed in the Designing authentication with Credential Manager section, above.
// Catch the user dismissal
catch (e: GetCredentialCancellationException) {
    // Trigger event that navigates to 'BackupLoginScreen'
    uiEvents.send(UiEvent.NavigateToBackupLogin)
}
Special Note: The version of Google Sign-In that exists outside of Credential Manager is now deprecated and will be removed; it should not be offered as a secondary option, to avoid presenting two buttons for the same purpose.
See the Wear OS transition guide for more details.
Get started with Credential Manager on Wear OS
Implementing Credential Manager on Wear OS is a straightforward process that delivers significant benefits. By adopting this API, you can provide your users with a secure, seamless, and efficient way to authenticate. To begin implementation, explore our developer documentation and official sample app.
To learn how apps have migrated to Credential Manager on Wear OS, check out our case study with Todoist, who were able to streamline their authentication while reusing their mobile implementation.
For a look at how passkeys can improve login success rate, you can read all about how X adopted passkeys to achieve a more secure and user-friendly authentication experience.
Finally, you can watch the new credential manager video blog on YouTube to reinforce everything you've learned here.
Happy coding!
28 Aug 2025 4:00pm GMT