08 Mar 2026
TalkAndroid
Get ready to cry: this Turkish drama is breaking hearts everywhere
Clear your calendars and bring a box of tissues: Netflix's latest Turkish drama, To Love, To Lose, is here…
08 Mar 2026 7:00am GMT
Boba Story Lid Recipes – 2026
Look no further for all the latest Boba Story Lid Recipes. They are all right here!
08 Mar 2026 4:32am GMT
Dice Dreams Free Rolls – Updated Daily
Get the latest Dice Dreams free rolls links, updated daily! Complete with a guide on how to redeem the links.
08 Mar 2026 2:10am GMT
07 Mar 2026
TalkAndroid
Did This Movie Completely Rewrite the Rules of Modern Science Fiction?
Remember that first time you walked out of a theater thinking, "Well, I guess movies can now do…
07 Mar 2026 4:30pm GMT
Wave goodbye to banking apps—manage cards instantly on your phone now
Tired of hopping from one app to another just to manage your cards? Google Wallet's latest move might…
07 Mar 2026 4:00pm GMT
How “A Knight of the Seven Kingdoms” Will Change Everything You Thought About Honor
Just when you thought your quest in Westeros was over, HBO whisks you back, minus the dragons, but brimming…
07 Mar 2026 7:30am GMT
Sharing links just got shockingly easier—discover the update everyone’s buzzing about
Ever tried to copy just a link from a message and found yourself tangled in a copy-paste circus?…
07 Mar 2026 7:00am GMT
06 Mar 2026
TalkAndroid
Soon your phone will click everything for you—here’s what changes for users
Soon your phone could do the clicking for you: no more frantic screen-tapping or thumb gymnastics. But before you…
06 Mar 2026 4:30pm GMT
The best series of 2025 nearly lost its star before filming began
Picture this: the best series of 2025 nearly didn't have its chilling heartbeat, Bill Skarsgård, at the center.…
06 Mar 2026 4:00pm GMT
This shocking series could reveal family secrets you never dared to share
What if the secrets buried in your family's past were spilled on Netflix for all to see? Well,…
06 Mar 2026 7:30am GMT
Smallville fans rejoice: the entire series just landed for streaming now
Calling all superhero fans and 2000s nostalgics: your wish has been granted! The iconic series Smallville is back…
06 Mar 2026 7:00am GMT
05 Mar 2026
Android Developers Blog
Instagram and Facebook deliver instant playback and boost user engagement with Media3 PreloadManager
In the dynamic world of social media, user attention is won or lost quickly. Meta apps (Facebook and Instagram) are among the world's largest social platforms and serve billions of users globally. For Meta, delivering videos seamlessly isn't just a feature, it's the core of their user experience. Short-form videos, particularly Facebook Newsfeed and Instagram Reels, have become a primary driver of engagement. They enable creative expression and rapid content consumption, connecting and entertaining people around the world.
This blog post takes you through the journey of how Meta transformed video playback for billions by delivering true instant playback.
Short-form videos lead to fast-paced interactions as users quickly scroll through their feeds. Delivering a seamless transition between videos in an ever-changing feed introduces unique hurdles for instantaneous playback. Hence, solutions are needed that go beyond traditional disk caching and standard reactive playback strategies.
To address the shifts in consumption habits from rise in short form content and the limitations of traditional long form playback architecture, Jetpack Media3 introduced PreloadManager. This component allows developers to move beyond disk caching, offering granular control and customization to keep media ready in memory before the user hits play. Read this blog series to understand technical details about media playback with PreloadManager.
Previously, Meta used a combination of warmup (to get players ready) and prefetch (to cache content on disk) for video delivery. While these methods helped improve network efficiency, they introduced significant challenges. Warmup required instantiating multiple player instances sequentially, which consumed significant memory and limited preloading to only a few videos. This high resource demand meant a more scalable, robust solution was needed to deliver the instant playback expected on modern, fast-scrolling social feeds.
Integrating Media3 PreloadManager
Optimization and Performance Tuning
The team then performed extensive testing and iterations to optimize performance across Meta's diverse global device ecosystem. Initial aggressive preloading sometimes caused issues, including increased memory usage and scroll performance slowdowns. To solve this, they fine-tuned the implementation by using careful memory measurements, considering device fragmentation, and tailoring the system to specific UI patterns.
Meta applied different preloading strategies and tailored the behavior to match the specific UI patterns of each app:
-
Facebook Newsfeed: The UI prioritizes the video currently coming into view. The manager preloads only the current video to ensure it starts the moment the user pauses their scroll. This "current-only" focus minimizes data and memory footprints in an environment where users may see many static posts between videos. While the system is presently designed to preload just the video in view, it can be adjusted to also preload upcoming (future) videos.
-
Instagram Reels: This is a pure video environment where users swipe vertically. For this UI, the team implemented an "adjacent preload" strategy: the PreloadManager keeps the videos immediately before and after the current Reel ready in memory. This bi-directional approach ensures that whether a user swipes up or down, the transition remains instant and smooth. The result was a dramatic improvement in Quality of Experience (QoE), including improvements in Playback Start and Time to First Frame for the user.
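As a rough illustration of the windowing logic behind these strategies, the sketch below computes which feed items to keep preloaded around the current position. All names here are hypothetical and not part of the Media3 API: a radius of 0 models Newsfeed's "current-only" focus, while a radius of 1 models Reels' bi-directional "adjacent preload".

```kotlin
// Hypothetical sketch of a preload window; illustrative only, not Meta's
// actual code or a Media3 API.
fun preloadWindow(currentIndex: Int, feedSize: Int, radius: Int = 1): Set<Int> {
    // Keep the current item plus `radius` neighbors in each direction,
    // clamped to the bounds of the feed.
    val first = (currentIndex - radius).coerceAtLeast(0)
    val last = (currentIndex + radius).coerceAtMost(feedSize - 1)
    return (first..last).toSet()
}

// Items that were preloaded before but fall outside the new window can be
// released to bound memory use.
fun itemsToRelease(
    previouslyPreloaded: Set<Int>,
    currentIndex: Int,
    feedSize: Int,
    radius: Int = 1,
): Set<Int> = previouslyPreloaded - preloadWindow(currentIndex, feedSize, radius)
```

On each scroll or swipe, the app would release items outside the window and hand the new window to its preload manager.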
Scaling for a diverse global device ecosystem
Scaling a high-performance video stack across billions of devices requires more than just aggressive preloading; it requires intelligence. Meta faced initial challenges with memory pressure and scroll lag, particularly on mid-to-low-end hardware. To solve this, they built a Device Stress Detection system around the Media3 implementation. The apps now monitor I/O and CPU signals in real-time. If a device is under heavy load, preloading is paused to prioritize UI responsiveness.
This device-aware optimization ensures that the benefit of instant playback doesn't come at the cost of system stability, allowing even users on older hardware to experience a smoother, uninterrupted feed.
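A minimal sketch of such a gate, assuming normalized CPU and I/O-wait signals; the thresholds and names here are illustrative assumptions, not Meta's actual implementation:

```kotlin
// Illustrative device-stress gate: pause preloading when the device is
// under heavy load. Signal names and thresholds are hypothetical.
data class DeviceSignals(val cpuLoad: Double, val ioWaitFraction: Double)

class PreloadGate(
    private val cpuThreshold: Double = 0.85,
    private val ioThreshold: Double = 0.30,
) {
    /** Returns true when preloading should pause to keep the UI responsive. */
    fun shouldPausePreloading(signals: DeviceSignals): Boolean =
        signals.cpuLoad > cpuThreshold || signals.ioWaitFraction > ioThreshold
}
```

In a real app, the signals would be sampled periodically and the gate consulted before each preload request.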
Architectural wins and code health
Beyond the user-facing metrics, the migration to Media3 PreloadManager offered long-term architectural benefits. While the integration and tuning process took multiple iterations to balance performance, the resulting codebase is more maintainable. The team found that the PreloadManager API integrated cleanly with the existing Media3 ecosystem, allowing for better resource sharing. For Meta, the adoption of Media3 PreloadManager was a strategic investment in the future of video consumption.
By adopting preloading and adding device-intelligent gates, they successfully increased total watch time on their apps and improved the overall engagement of their global community.
The proactive architecture delivered immediate and measurable improvements across both platforms.
-
Facebook experienced faster playback starts, decreased playback stall rates, and a reduction in bad sessions (such as rebuffering, delayed start times, or lower quality), which overall resulted in higher watch time.
-
Instagram saw faster playback starts and an increase in total watch time. Reducing join latency (the interval from the user's action to the first frame display) directly increased engagement, and fewer buffering interruptions meant users watched more content.

As media consumption habits evolve, the demand for instant experiences will continue to grow. Implementing proactive memory management and optimizing for scale and device diversity ensures your application can meet these expectations efficiently.
-
Prioritize intelligent preloading
Focus on delivering a reliable experience by minimizing stutters and loading times through preloading. Rather than simple disk caching, leveraging memory-level preloading ensures that content is ready the moment a user interacts with it.
-
Align your implementation with UI patterns
Customize preloading behavior to match your app's UI. For example, use a "current-only" focus for mixed feeds like Facebook to save memory, and an "adjacent preload" strategy for vertical environments like Instagram Reels.
-
Leverage Media3 for long-term code health
Integrating with Media3 APIs rather than a custom caching solution allows for better resource sharing between the player and the PreloadManager, enabling you to manage multiple videos with a single player instance. This results in a future-proof codebase that engineering teams can maintain and optimize over time while benefiting from the latest feature updates.
-
Implement device aware optimizations
Broaden your market reach by testing on various devices, including mid-to-low-end models. Use real-time signals like CPU, memory, and I/O to adapt features and resource usage dynamically.
To get started and learn more:
-
Explore the Media3 PreloadManager documentation.
-
Read the blog series for advanced technical and implementation details.
-
Check out the sample app to see preloading in action.
Now you know the secrets for instant playback. Go try them out!
05 Mar 2026 6:03pm GMT
TalkAndroid
This search trick lets you ditch Google without changing your browsing habits
Let's be honest: most of us have "Google reflexes". You know, that automatic swoosh to the search bar…
05 Mar 2026 4:30pm GMT
5 hidden apps that instantly unlock the full power of your car’s screen
Imagine sitting behind the wheel, your favorite music humming through the speakers, your route set, and your car's…
05 Mar 2026 4:00pm GMT
Infinix Note 60 Ultra Debuts With 200MP Camera, Satellite Calling, And Supercar-inspired Design
The Note 60 Ultra packs a punch
05 Mar 2026 3:16pm GMT
Android Developers Blog
Elevating AI-assisted Android development and improving LLMs with Android Bench

Posted by Matthew McCullough, VP of Product Management, Android Developer
We want to make it faster and easier for you to build high-quality Android apps, and one way we're helping you be more productive is by putting AI at your fingertips. We know you want AI that truly understands the nuances of the Android platform, which is why we've been measuring how LLMs perform Android development tasks. Today we released the first version of Android Bench, our official leaderboard of LLMs for Android development.
Our goal is to provide model creators with a benchmark to evaluate LLM capabilities for Android development. By establishing a clear, reliable baseline for what high-quality Android development looks like, we're helping model creators identify gaps and accelerate improvements. This gives developers a wider range of helpful models to choose from for AI assistance, which ultimately will lead to higher-quality apps across the Android ecosystem.
Designed with real-world Android development tasks
We created the benchmark by curating a task set against a range of common Android development areas. It is composed of real challenges of varying difficulty, sourced from public GitHub Android repositories. Scenarios include resolving breaking changes across Android releases, domain-specific tasks like networking on wearables, and migrating to the latest version of Jetpack Compose, to name a few.
Each evaluation attempts to have an LLM fix the issue reported in the task, which we then verify using unit or instrumentation tests. This model-agnostic approach allows us to measure a model's ability to navigate complex codebases, understand dependencies, and solve the kind of problems you encounter every day.
We validated this methodology with several LLM makers, including JetBrains.
"Measuring AI's impact on Android is a massive challenge, so it's great to see a framework that's this sound and realistic. While we're active in benchmarking ourselves, Android Bench is a unique and welcome addition. This methodology is exactly the kind of rigorous evaluation Android developers need right now."
- Kirill Smelov, Head of AI Integrations at JetBrains.
The first Android Bench results
For this initial release, we wanted to purely measure model performance and not focus on agentic or tool use. The models were able to successfully complete 16-72% of the tasks. This is a wide range that demonstrates some LLMs already have a strong baseline for Android knowledge, while others have more room for improvement. Regardless of where the models are at now, we're anticipating continued improvement as we encourage LLM makers to enhance their models for Android development.
The LLM with the highest average score for this first release is Gemini 3.1 Pro, followed closely by Claude Opus 4.6. You can try all of the models we evaluated for AI assistance for your Android projects by using API keys in the latest stable version of Android Studio.
Providing developers and LLM makers with transparency
We value an open and transparent approach, so we made our methodology, dataset, and test harness publicly available on GitHub.
One challenge for any public benchmark is the risk of data contamination, where models may have seen evaluation tasks during their training process. We have taken measures to ensure our results reflect genuine reasoning rather than memorization or guessing, including a thorough manual review of agent trajectories and the integration of a canary string to discourage training on the dataset.
Looking ahead, we will continue to evolve our methodology to preserve the integrity of the dataset, while also making improvements for future releases of the benchmark-for example, growing the quantity and complexity of tasks.
We're looking forward to how Android Bench can improve AI assistance long-term. Our vision is to close the gap between concept and quality code. We're building the foundation for a future where no matter what you imagine, you can build it on Android.
05 Mar 2026 2:03pm GMT
TalkAndroid
Honor MagicPad4 launches in the UK with £100 off and a £170 accessory bundle
Honor's MagicPad4 has officially landed in the UK, and early buyers are getting a pretty generous launch deal.…
05 Mar 2026 1:26pm GMT
Galaxy S26 Ultra fixes the S24 Ultra’s biggest problems. Quietly
More of the same
05 Mar 2026 11:47am GMT
Android Developers Blog
Battery Technical Quality Enforcement is Here: How to Optimize Common Wake Lock Use Cases
In recognition that excessive battery drain is top of mind for Android users, Google has been taking significant steps to help developers build more power-efficient apps. On March 1st, 2026, the Google Play Store began rolling out wake lock technical quality treatments to reduce battery drain. These treatments will roll out gradually to impacted apps over the following weeks. Apps that consistently exceed the "Excessive Partial Wake Lock" threshold in Android vitals may see tangible impacts on their store presence, including warnings on their store listing and exclusion from discovery surfaces such as recommendations.
Users may see a warning on your store listing if your app exceeds the bad behavior threshold.
This initiative elevates battery efficiency to a core vitals metric alongside stability metrics like crashes and ANRs. The "bad behavior threshold" is defined as holding a non-exempted partial wake lock for at least two hours on average while the screen is off in more than 5% of user sessions over the past 28 days. A wake lock is exempted if it is a system-held wake lock that offers clear user benefits that cannot be further optimized, such as audio playback, location access, or user-initiated data transfer. You can view the full definition of excessive wake locks in our Android vitals documentation.
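To make the threshold concrete, here is a hypothetical model of the calculation described above. This is a sketch of the stated definition, not the exact Play Console computation, and all names are illustrative:

```kotlin
import kotlin.time.Duration
import kotlin.time.Duration.Companion.hours

// Sketch of the "excessive partial wake lock" metric: a session counts as
// excessive when non-exempt, screen-off wake lock time reaches two hours,
// and the app exceeds the bad behavior threshold when more than 5% of
// sessions in the 28-day window are excessive. Illustrative only.
data class UserSession(val nonExemptScreenOffWakeLock: Duration)

fun isSessionExcessive(session: UserSession): Boolean =
    session.nonExemptScreenOffWakeLock >= 2.hours

fun exceedsBadBehaviorThreshold(sessionsLast28Days: List<UserSession>): Boolean {
    if (sessionsLast28Days.isEmpty()) return false
    val excessiveFraction =
        sessionsLast28Days.count(::isSessionExcessive).toDouble() / sessionsLast28Days.size
    return excessiveFraction > 0.05
}
```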
As part of our ongoing initiative to improve battery life across the Android ecosystem, we have analyzed thousands of apps and how they use partial wake locks. While wake locks are sometimes necessary, we often see apps holding them inefficiently or unnecessarily, when more efficient solutions exist. This blog will go over the most common scenarios where excessive wake locks occur and our recommendations for optimizing wake locks. We have already seen measurable success from partners like WHOOP, who leveraged these recommendations to optimize their background behavior.
Using a foreground service vs partial wake locks
We've often seen developers struggle to understand the difference between two background-execution concepts: foreground services and partial wake locks.
A foreground service is a lifecycle API that signals to the system that an app is performing user-perceptible work and should not be killed to reclaim memory, but it does not automatically prevent the CPU from sleeping when the screen turns off. In contrast, a partial wake lock is a mechanism specifically designed to keep the CPU running even while the screen is off.
While a foreground service is often necessary to continue a user action, manually acquiring a partial wake lock is only necessary in conjunction with a foreground service, and only for the duration of the CPU activity. In addition, you don't need to use a wake lock if you're already utilizing an API that keeps the device awake.
Refer to the flow chart in Choose the right API to keep the device awake to ensure you have a strong understanding of what tool to use to avoid acquiring a wake lock in scenarios where it's not necessary.
Third party libraries acquiring wake locks
It is common for an app to discover that it is flagged for excessive wake locks held by a third-party SDK or system API acting on its behalf. To identify and resolve these wake locks, we recommend the following steps:
-
Check Android vitals: Find the exact name of the offending wake lock in the excessive partial wake locks dashboard. Cross-reference this name with the Identify wake locks created by other APIs guidance to see if it was created by a known system API or Jetpack library. If it is, you may need to optimize your usage of the API and can refer to the recommended guidance.
-
Capture a System Trace: If the wake lock cannot be easily identified, reproduce the wake lock issue locally using a system trace and inspect it with the Perfetto UI. You can learn more about how to do this in the Debugging other types of excessive wake locks section of this blog post.
-
Evaluate Alternatives: If an inefficient third-party library is responsible and cannot be configured to respect battery life, consider communicating the issue with the SDK's owners, finding an alternative SDK or building the functionality in-house.
Below is a breakdown of some of the specific use cases we have reviewed, along with the recommended path to optimize your wake lock implementation.
User-Initiated Upload or Download
Example use cases:
-
Video streaming apps where the user triggers a download of a large file for offline access.
-
Media backup apps where the user triggers uploading their recent photos via a notification prompt.
How to reduce wake locks:
-
Do not acquire a manual wake lock. Instead, use the User-Initiated Data Transfer (UIDT) API. This is the designated path for long running data transfer tasks initiated by the user, and it is exempted from excessive wake lock calculations.
One-Time or Periodic Background Syncs
Example use cases:
-
An app performs periodic background syncs to fetch data for offline access.
-
Pedometer apps that fetch step count periodically.
How to reduce wake locks:
-
Do not acquire a manual wake lock. Use WorkManager configured for one-time or periodic work. WorkManager respects system health by batching tasks and has a minimum periodic interval (15 minutes), which is generally sufficient for background updates.
- If you identify wake locks created by WorkManager or JobScheduler with high wake lock usage, it may be because you've misconfigured your worker to not complete in certain scenarios. Consider analyzing the worker stop reasons, particularly if you're seeing high occurrences of STOP_REASON_TIMEOUT.
workManager.getWorkInfoByIdFlow(syncWorker.id)
    .collect { workInfo ->
        if (workInfo != null) {
            val stopReason = workInfo.stopReason
            logStopReason(syncWorker.id, stopReason)
        }
    }
-
In addition to logging worker stop reasons, refer to our documentation on debugging your workers. Also, consider collecting and analyzing system traces to understand when wake locks are acquired and released.
- Finally, check out our case study with WHOOP, where they were able to discover an issue with configuration of their workers and reduce their wake lock impact significantly.
Bluetooth Communication
Example use cases:
-
Companion device app prompts the user to pair their Bluetooth external device.
-
Companion device app listens for hardware events on an external device and surfaces a user-visible change in a notification.
-
Companion device app's user initiates a file transfer between the phone and the Bluetooth device.
-
Companion device app performs occasional firmware updates to an external device via Bluetooth.
How to reduce wake locks:
-
Use companion device pairing to pair Bluetooth devices to avoid acquiring a manual wake lock during Bluetooth pairing.
-
Consult the Communicate in the background guidance to understand how to do background Bluetooth communication.
-
Using WorkManager is often sufficient if there is no user impact to a delayed communication. If a manual wake lock is deemed necessary, only hold the wake lock for the duration of Bluetooth activity or processing of the activity data.
Location Tracking
Example use cases:
-
Fitness apps that cache location data for later upload, such as plotting running routes.
-
Food delivery apps that pull location data at a high frequency to update progress of delivery in a notification or widget UI.
How to reduce wake locks:
-
Consult our guidance to Optimize location usage. Consider implementing timeouts, leveraging location request batching, or utilizing passive location updates to ensure battery efficiency.
-
When requesting location updates using the FusedLocationProvider or LocationManager APIs, the system automatically triggers a device wake-up during the location event callback. This brief, system-managed wake lock is exempted from excessive partial wake lock calculations.
- Avoid acquiring a separate, continuous wake lock for caching location data, as this is redundant. Instead, persist location events in memory or local storage and leverage WorkManager to process them at periodic intervals.
override fun onCreate(savedInstanceState: Bundle?) {
    locationCallback = object : LocationCallback() {
        override fun onLocationResult(locationResult: LocationResult?) {
            locationResult ?: return
            // System wakes up CPU for a short duration
            for (location in locationResult.locations) {
                // Store data in memory to process at another time
            }
        }
    }
}
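A minimal, self-contained sketch of this batching pattern with hypothetical names: location events accumulate in memory during the system's brief wake-ups, and a batch is handed off (for example, to a WorkManager worker) only once enough events have collected.

```kotlin
// Illustrative sketch of caching location events in memory and flushing
// them in batches; names and the threshold are assumptions, not an API.
data class LocationEvent(val latitude: Double, val longitude: Double, val timestampMs: Long)

class LocationBuffer(private val flushThreshold: Int = 20) {
    private val pending = mutableListOf<LocationEvent>()

    /** Called from the location callback; the system's brief wake-up covers this. */
    fun record(event: LocationEvent) {
        pending += event
    }

    /**
     * Returns and clears the batch once enough events have accumulated;
     * in a real app the batch would be handed to a scheduled worker.
     */
    fun drainIfReady(): List<LocationEvent> =
        if (pending.size >= flushThreshold) {
            val batch = pending.toList()
            pending.clear()
            batch
        } else {
            emptyList()
        }
}
```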
High Frequency Sensor Monitoring
Example use cases:
-
Pedometer apps that passively collect steps, or distance traveled.
-
Safety apps that monitor the device sensors for rapid changes in real time, to provide features such as crash detection or fall detection.
How to reduce wake locks:
-
If using SensorManager, reduce usage to periodic intervals and only when the user has explicitly granted access through a UI interaction. High frequency sensor monitoring can drain the battery heavily due to the number of CPU wake-ups and processing that occurs.
-
If you're tracking step counts or distance traveled, rather than using SensorManager, leverage Recording API or consider utilizing Health Connect to access historical and aggregated device step counts to capture data in a battery-efficient manner.
-
If you're registering a sensor with SensorManager, specify a maxReportLatencyUs of 30 seconds or more to leverage sensor batching to minimize the frequency of CPU interrupts. When the device is subsequently woken by another trigger such as a user interaction, location retrieval, or a scheduled job, the system will immediately dispatch the cached sensor data.
val accelerometer = sensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER)
sensorManager.registerListener(
    this,
    accelerometer,
    samplingPeriodUs, // How often to sample data
    maxReportLatencyUs // Key for sensor batching
)
-
If your app requires both location and sensor data, synchronize their event retrieval and processing. By piggybacking sensor readings onto the brief wake lock the system holds for location updates, you avoid needing a wake lock to keep the CPU awake. Use a worker or a short-duration wake lock to handle the upload and processing of this combined data.
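The piggybacking idea can be sketched as follows, with illustrative names only: sensor batches are cached without holding a wake lock, and are processed once a location event has already woken the CPU.

```kotlin
// Hypothetical sketch of piggybacking cached sensor readings onto the
// system's location wake-up; not a real Android API.
class SensorPiggyback {
    private val cachedReadings = mutableListOf<FloatArray>()

    /** Sensor batch delivered by the hardware FIFO (no wake lock held). */
    fun onSensorBatch(readings: List<FloatArray>) {
        cachedReadings += readings
    }

    /** On a location event the CPU is already awake: process both together. */
    fun onLocationEvent(processBatch: (List<FloatArray>) -> Unit) {
        if (cachedReadings.isNotEmpty()) {
            processBatch(cachedReadings.toList())
            cachedReadings.clear()
        }
    }
}
```

The hand-off inside `onLocationEvent` would typically schedule a worker or take a short-duration wake lock for the combined upload.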
Remote Messaging
Example use cases:
-
Video or sound monitoring companion apps that need to monitor events that occur on an external device connected using a local network.
-
Messaging apps that maintain a network socket connection with the desktop variant.
How to reduce wake locks:
-
If the network events can be processed on the server side, use FCM to receive information on the client. You may choose to schedule an expedited worker if additional processing of FCM data is required.
-
If events must be processed on the client side via a socket connection, a wake lock is not needed to listen for event interrupts. When data packets arrive at the Wi-Fi or Cellular radio, the radio hardware triggers a hardware interrupt in the form of a kernel wake lock. You may then choose to schedule a worker or acquire a wake lock to process the data.
- For example, if you're using ktor-network to listen for data packets on a network socket, you should only acquire a wake lock when packets have been delivered to the client and need to be processed.
val readChannel = socket.openReadChannel()
while (!readChannel.isClosedForRead) {
    // CPU can safely sleep here while waiting for the next packet
    val packet = readChannel.readRemaining(1024)
    if (!packet.isEmpty) {
        // Data arrived: the system woke the CPU. Keep it awake via a manual
        // wake lock (urgent) or by scheduling a worker (non-urgent).
        performWorkWithWakeLock {
            val data = packet.readBytes()
            // Additional logic to process data packets
        }
    }
}
Summary
By adopting these recommended solutions for common use cases like background syncs, location tracking, sensor monitoring, and network communication, developers can reduce unnecessary wake lock usage. To continue learning, read our other technical blog post or watch our technical video on how to discover and debug wake locks: Optimize your app battery using Android vitals wake lock metric. Also, consult our updated wake lock documentation. To help us continue improving our technical resources, please share any additional feedback on our guidance in our documentation feedback survey.
05 Mar 2026 12:00am GMT
04 Mar 2026
Android Developers Blog
How WHOOP decreased excessive partial wake lock sessions by over 90%
Posted by Breana Tate, Developer Relations Engineer, Mayank Saini, Senior Android Engineer, Sarthak Jagetia, Senior Android Engineer and Manmeet Tuteja, Android Engineer II
Building an Android app for a wearable means the real work starts when the screen turns off. WHOOP helps members understand how their body responds to training, recovery, sleep, and stress, and for the many WHOOP members on Android, reliable background syncing and connectivity are what make those insights possible.
Earlier this year, Google Play released a new metric in Android vitals: Excessive partial wake locks. This metric measures the percentage of user sessions where cumulative, non-exempt wake lock usage exceeds 2 hours in a 24-hour period. The aim of this metric is to help you identify and address possible sources of battery drain, which is crucial for delivering a great user experience.
Beginning March 1, 2026, apps that continually fail to meet the quality threshold may be excluded from Google Play discovery surfaces. A warning may also be placed on the Google Play Store listing, indicating the app might use more battery than expected.
According to Mayank Saini, Senior Android Engineer at WHOOP, this "presented the team with an opportunity to raise the bar on Android efficiency," after Android vitals flagged the app's excessive partial wake lock percentage at 15%, which exceeded the recommended 5% threshold.
The team viewed the Android vitals metric as a clear signal that their background work was holding the CPU awake longer than necessary. Resolving this would allow them to continue to deliver a great user experience while simultaneously decreasing wasted background time and maintaining reliable and timely Bluetooth connectivity and syncing.
Identifying the issue
To figure out where to get started, the team first turned to Android vitals for more insight into which wake locks were affecting the metric. By consulting the Android vitals excessive partial wake locks dashboard, they were able to identify the biggest contributor to excessive partial wake locks as one of their WorkManager workers (identified in the dashboard as androidx.work.impl.background.systemjob.SystemJobService). To support the WHOOP "always-on experience", the app uses WorkManager for background tasks like periodic syncing and delivering recurring updates to the wearable.
While the team was aware that WorkManager acquires a wake lock while executing tasks in the background, they did not have visibility into how all of their background work (beyond just WorkManager) was distributed until the introduction of the excessive partial wake locks metric in Android vitals.
With the dashboard identifying WorkManager as the main contributor, the team was then able to focus their efforts on identifying which of their workers was contributing the most and work towards resolving the issue.
Making use of internal metrics and data to better narrow down the cause
WHOOP already had internal infrastructure set up to monitor WorkManager metrics. They periodically monitor:
-
Average Runtime: For how long does the worker run?
-
Timeouts: How often is the worker timing out instead of completing?
-
Retries: How often does the worker retry if the work timed out or failed?
-
Cancellations: How often was the work cancelled?
Tracking more than just worker successes and failures gives the team visibility into their work's efficiency.
The internal metrics flagged high average runtime for a select few workers, enabling them to narrow the investigation down even further.
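As an illustration, metrics like these can be derived from simple per-run records. The record shape below is an assumption for the sketch, not WHOOP's actual telemetry:

```kotlin
// Hypothetical aggregation of per-run worker records into the health
// metrics listed above: average runtime, timeouts, retries, cancellations.
enum class RunOutcome { SUCCESS, TIMEOUT, RETRY, CANCELLED }

data class WorkerRun(val workerName: String, val runtimeMs: Long, val outcome: RunOutcome)

data class WorkerHealth(
    val averageRuntimeMs: Double,
    val timeouts: Int,
    val retries: Int,
    val cancellations: Int,
)

fun summarize(runs: List<WorkerRun>): Map<String, WorkerHealth> =
    runs.groupBy { it.workerName }.mapValues { (_, workerRuns) ->
        WorkerHealth(
            averageRuntimeMs = workerRuns.map { it.runtimeMs }.average(),
            timeouts = workerRuns.count { it.outcome == RunOutcome.TIMEOUT },
            retries = workerRuns.count { it.outcome == RunOutcome.RETRY },
            cancellations = workerRuns.count { it.outcome == RunOutcome.CANCELLED },
        )
    }
```

A dashboard built on such a summary makes a worker with an unusually high average runtime stand out immediately, which is the signal WHOOP used to narrow their investigation.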
In addition to their internal metrics, the team also used Android Studio's Background Task Inspector to inspect and debug the workers of interest, with a specific focus on associated wake locks, to align with the metric flagged in Android vitals.
Investigation: Distinguishing between worker variants
WHOOP uses both one-time and periodic scheduling for some workers. This allows the app to reuse the same Worker logic for identical tasks with the same success criteria, differing only in timing.
Using their internal metrics made it possible to narrow their search to a specific worker, but they couldn't tell if the bug occurred when the worker was one-time, periodic, or both. So, they rolled out an update to use WorkManager's setTraceTag method to distinguish between the one-time and periodic variants of the same Worker.
This extra detail would allow them to definitively identify which Worker variant (periodic or one-time) was contributing the most to sessions with excessive partial wake locks. However, the team was surprised when the data revealed that neither variant appeared to be contributing more than the other.
Manmeet Tuteja, Android Engineer II at WHOOP said "that split also helped us confirm the issue was happening in both variants, which pointed away from scheduling configuration and toward a shared business logic problem inside the worker implementation."
Diving deeper on worker behavior and fixing the root cause
With the knowledge that they needed to take a look at logic within the worker, the team re-examined worker behavior for the workers that had been flagged during their investigation. Specifically, they were looking for instances in which work may have been getting stuck and not completing.
All of this culminated in finding the root cause of the excessive wake locks:
A CoroutineWorker that was designed to wait for a connection to the WHOOP sensor before proceeding.
If the work started with no sensor connected, whoopSensorFlow (which indicates whether the sensor is connected) was null. The SensorWorker didn't treat this as an early-exit condition and kept running, effectively waiting indefinitely for a connection. As a result, WorkManager held a partial wake lock until the work timed out, leading to high background wake lock usage and frequent, unwanted rescheduling of the SensorWorker.
To address this, the WHOOP team updated the worker logic to check the connection status before attempting to execute the core business logic.
If the sensor isn't available, the worker exits, avoiding a timeout scenario and releasing the wake lock. The following code snippet shows the solution:
class SensorWorker(
    appContext: Context,
    params: WorkerParameters
) : CoroutineWorker(appContext, params) {

    override suspend fun doWork(): Result {
        ...
        // Check the sensor state and perform work or return failure
        return whoopSensorFlow.replayCache
            .firstOrNull()
            ?.let { cachedData ->
                processSensorData(cachedData)
                Result.success()
            }
            ?: run { Result.failure() }
    }
}
Achieving a 90% decrease in sessions with excessive partial wake locks
After rolling out the fix, the team continued to monitor the Android vitals dashboard to confirm the impact of the changes.
Ultimately, WHOOP saw their excessive partial wake lock percentage drop from 15% to less than 1% just 30 days after implementing the changes to their Worker.

As a result of the changes, the team has seen fewer instances of work timing out without completing, resulting in lower average runtimes.
The WHOOP team's advice to other developers who want to improve their background work's efficiency: track more than just successes and failures, and let that data drive the investigation.
Get Started
If you're interested in trying to reduce your app's excessive partial wake locks or trying to improve worker efficiency, view your app's excessive partial wake locks metric in Android vitals, and review the wake locks documentation for more best practices and debugging strategies.
04 Mar 2026 6:00pm GMT
A new era for choice and openness
Expanded billing choice on Google Play for users and developers
Google Play is giving developers even more billing choice and freedom in how they handle transactions. Mobile developers will have the option to use their own billing systems in their app alongside Google Play's billing, or they can guide users outside of their app to their own websites for purchases. Our goal is to offer this flexibility in a way that maximizes choice and safety for users.
Leading the way in store choice
We're introducing a program that makes sideloading qualified app stores even easier. Our new Registered App Stores program will provide a more streamlined installation flow for Android app stores that meet certain quality and safety benchmarks.
Once this change has rolled out, app stores that choose to participate in this optional program will have registered with us, so users who sideload them will get a more streamlined installation flow (see graphic below). If a store chooses not to participate, nothing changes for them: they retain the same experience as any other sideloaded app on Android.
This gives app stores more ways to reach users and gives users more ways to easily and safely access the apps and games they love.
This Registered App Store program will begin outside of the US first, and we intend to bring it to the US as well, subject to court approval.
Lower pricing and new programs to support developers
Google Play's fees are already the lowest among major app stores, and today we are taking this even further by introducing a new business model that decouples fees for using our billing system and introduces new, lower service fees. Once this rolls out:
- Billing: For those developers who choose to use Google Play's billing system, they will be charged a market-specific rate separate from the service fee. In the European Economic Area (EEA), UK, and US that rate will be 5%.
- Service Fees:
  - For new installs (first-time installs from users after the new fees are launched in a region), we are reducing the in-app purchase (IAP) service fee to 20%.
  - We are launching an Apps Experience Program and revamping our Google Play Games Level Up program to incentivize building great software experiences across Android form factors, associated with clear quality benchmarks and enhanced user benefits. Developers who choose to participate in these programs will have even lower rates: participating IAP developers will have a 20% service fee for transactions from existing installs and a 15% fee on transactions from new app installs.
  - Our service fee for recurring subscriptions will be 10%.
Rollout timelines
This is a significant evolution, and we plan to share additional details in the coming months. To make sure we have enough time to build the necessary technical infrastructure, enable a seamless transition for developers, and ensure alignment with local regulations, these updated fees will roll out on the following staggered schedule:
- By June 30: EEA, the United Kingdom, and the US
- By September 30: Australia
- By December 31: Korea and Japan
- By September 30, 2027: The updates will reach the rest of the world.
We will also launch the updated Google Play Games Level Up program and the new Apps Experience Program by September 30 for the EEA, UK, US, and Australia, and then roll them out in line with the rest of the schedule above.
We plan to launch Registered App Stores with a version of a major Android release by the end of the year.
Resolving disputes with Epic Games
With these updates, we have also resolved our disputes worldwide with Epic Games.
We believe these changes will make for a stronger Android ecosystem with even more successful developers and higher-quality apps and games available across more form factors for everyone. We look forward to our continued work with the developer community to build the next generation of digital experiences.
04 Mar 2026 2:40pm GMT
03 Mar 2026
Android Developers Blog
Android devices extend seamlessly to connected displays

We are excited to announce a major milestone in bringing mobile and desktop computing closer together on Android: connected display support has reached general availability with the Android 16 QPR3 release!
As shown at Google I/O 2025, connected displays allow users to connect their Android devices to an external monitor and instantly access a desktop windowing environment. Apps can be used in free-form or maximized windows and users can multitask just like they would on a desktop OS.
Google and Samsung have collaborated to bring a seamless and powerful desktop windowing experience to devices across the Android ecosystem running Android 16 while connected to an external display.
This is now generally available on supported devices*: users can connect their supported Pixel and Samsung phones to external monitors, opening new opportunities for building more engaging and more productive app experiences that adapt across form factors.
How does it work?
When a supported Android phone or foldable is connected to an external display, a new desktop session starts on the connected display.
The experience on the connected display is similar to the experience on a desktop, including a taskbar that shows active apps and lets users pin apps for quick access. Users are able to run multiple apps side by side simultaneously in freely resizable windows on the connected display.
Phone connected to an external display with a desktop session on the display while the phone maintains its own state.
Why does it matter?
In the Android 16 QPR3 release, we finalized the windowing behaviors, taskbar interactions, and input compatibility (mouse and keyboard) that define the connected display experience. We also included compatibility treatments to scale windows and avoid app restarts when switching displays.
If your app is built with adaptive design principles, it will automatically have the desktop look and feel, and users will feel right at home. If the app is locked to portrait or assumes a touch-only interface, now is the time to modernize.
In particular, pay attention to these key best practices for optimal app experiences on connected displays:
- Don't assume a constant Display object: The Display object associated with your app's context can change when an app window is moved to an external display or if the display configuration changes. Your app should gracefully handle configuration change events and query display metrics dynamically rather than caching them.
- Account for density configuration changes: External displays can have vastly different pixel densities than the primary device screen. Ensure your layouts and resources adapt correctly to these changes to maintain UI clarity and usability. Use density-independent pixels (dp) for layouts, provide density-specific resources, and ensure your UI scales appropriately.
- Correctly support external peripherals: When users connect to an external monitor, they often create a more desktop-like environment. This frequently involves using external keyboards, mice, trackpads, webcams, microphones, and speakers. Improve the support for keyboard and mouse interactions.
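The dp advice above rests on Android's standard density mapping, px = dp × (densityDpi / 160), where mdpi (160 dpi) is the 1:1 baseline. A quick sketch of why cached pixel values go stale across displays:

```kotlin
// Standard Android density conversion: the same dp value maps to more
// physical pixels on denser displays (mdpi = 160 dpi is the 1:1 baseline).
fun dpToPx(dp: Float, densityDpi: Int): Float = dp * densityDpi / 160f

fun pxToDp(px: Float, densityDpi: Int): Float = px * 160f / densityDpi
```

A 48dp touch target is 48px on an mdpi screen but 96px on an xhdpi (320 dpi) one, which is why a pixel size cached on the phone's screen produces a wrong layout on a connected monitor with a different density.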
Building for the desktop future with modern tools
We provide several tools to help you build the desktop experience. Let's recap the latest updates to our core adaptive libraries!
The biggest update in Jetpack WindowManager 1.5.0 is the addition of two new width window size classes: Large and Extra-large.
Window size classes are our official, opinionated set of viewport breakpoints that help you design and develop adaptive layouts. With 1.5.0, we're extending this guidance for screens that go beyond the size of typical tablets.
Here are the new width breakpoints:
- Large: For widths between 1200dp and 1600dp
- Extra-large: For widths ≥1600dp
On very large surfaces, simply scaling up a tablet's Expanded layout isn't always the best user experience. An email client, for example, might comfortably show two panes (a mailbox and a message) in the Expanded window size class. But on an Extra-large desktop monitor, the email client could elegantly display three or even four panes, perhaps a mailbox, a message list, the full message content, and a calendar/tasks panel, all at once.
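As a sketch, the email client's pane-count decision could key directly off the width breakpoints (paneCount is a hypothetical helper; the dp thresholds are the documented lower bounds for Expanded, Large, and Extra-large):

```kotlin
// Hypothetical mapping from window width (dp) to visible panes, using the
// WindowManager width lower bounds: Expanded 840, Large 1200, Extra-large 1600.
fun paneCount(widthDp: Int): Int = when {
    widthDp >= 1600 -> 4  // Extra-large: mailbox, message list, message, calendar/tasks
    widthDp >= 1200 -> 3  // Large: mailbox, message list, message
    widthDp >= 840  -> 2  // Expanded: mailbox and message
    else            -> 1  // Compact/Medium: single pane
}
```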
To include the new window size classes in your project, simply call the function from the WindowSizeClass.BREAKPOINTS_V2 set instead of WindowSizeClass.BREAKPOINTS_V1:
val currentWindowMetrics = WindowMetricsCalculator.getOrCreate()
    .computeCurrentWindowMetrics(LocalContext.current)
val sizeClass = WindowSizeClass.BREAKPOINTS_V2
    .computeWindowSizeClass(currentWindowMetrics)

if (sizeClass.isWidthAtLeastBreakpoint(
        WindowSizeClass.WIDTH_DP_LARGE_LOWER_BOUND)
) {
    ... // Window is at least 1200 dp wide.
}
Navigation 3 is the latest addition to the Jetpack collection. Navigation 3, which just reached its first stable release, is a powerful navigation library designed to work with Compose.
Navigation 3 is also a great tool for building adaptive layouts by allowing multiple destinations to be displayed at the same time and allowing seamless switching between those layouts.
This system for managing your app's UI flow is based on Scenes. A Scene is a layout that displays one or more destinations at the same time. A SceneStrategy determines whether it can create a Scene. Chaining SceneStrategy instances together allows you to create and display different scenes for different screen sizes and device configurations.
For out-of-the-box canonical layouts, like list-detail and supporting pane, you can use the Scenes from the Compose Material 3 Adaptive library (available in version 1.3 and above).
It's also easy to build your own custom Scenes by modifying the Scene recipes or starting from scratch. For example, let's consider a Scene that displays three panes side by side:
class ThreePaneScene<T : Any>(
    override val key: Any,
    override val previousEntries: List<NavEntry<T>>,
    val firstEntry: NavEntry<T>,
    val secondEntry: NavEntry<T>,
    val thirdEntry: NavEntry<T>
) : Scene<T> {
    override val entries: List<NavEntry<T>> =
        listOf(firstEntry, secondEntry, thirdEntry)

    override val content: @Composable (() -> Unit) = {
        Row(modifier = Modifier.fillMaxSize()) {
            Column(modifier = Modifier.weight(1f)) { firstEntry.Content() }
            Column(modifier = Modifier.weight(1f)) { secondEntry.Content() }
            Column(modifier = Modifier.weight(1f)) { thirdEntry.Content() }
        }
    }
}
class ThreePaneSceneStrategy<T : Any>(
    val windowSizeClass: WindowSizeClass
) : SceneStrategy<T> {
    override fun SceneStrategyScope<T>.calculateScene(
        entries: List<NavEntry<T>>
    ): Scene<T>? {
        if (windowSizeClass.isWidthAtLeastBreakpoint(WIDTH_DP_LARGE_LOWER_BOUND)) {
            val lastThree = entries.takeLast(3)
            if (lastThree.size == 3 &&
                lastThree.all { it.metadata.containsKey(MULTI_PANE_KEY) }
            ) {
                val firstEntry = lastThree[0]
                val secondEntry = lastThree[1]
                val thirdEntry = lastThree[2]
                return ThreePaneScene(
                    key = Triple(
                        firstEntry.contentKey,
                        secondEntry.contentKey,
                        thirdEntry.contentKey
                    ),
                    previousEntries = entries.dropLast(3),
                    firstEntry = firstEntry,
                    secondEntry = secondEntry,
                    thirdEntry = thirdEntry
                )
            }
        }
        return null
    }
}
val strategy = ThreePaneSceneStrategy(windowSizeClass) then TwoPaneSceneStrategy()

NavDisplay(
    ...,
    sceneStrategy = strategy,
    entryProvider = entryProvider {
        entry<MyScreen>(metadata = mapOf(MULTI_PANE_KEY to true)) { ... }
        // ... other entries ...
    }
)
If there isn't enough space to display three or two panes, both of our custom scene strategies return null. In this case, NavDisplay falls back to displaying the last entry in the back stack in a single pane using SinglePaneScene.
By using scenes and strategies, you can add one, two, and three pane layouts to your app!
Check out the documentation to learn more about creating custom layouts using Scenes in Navigation 3.
Standalone adaptive layouts
If you need a standalone layout, the Compose Material 3 Adaptive library helps you create adaptive UIs like list-detail and supporting pane layouts that adapt themselves to window configurations automatically based on window size classes or device postures.
The good news is that the library is already up to date with the new breakpoints! Starting from version 1.2, the default pane scaffold directive functions support Large and Extra-large width window size classes.
You only need to opt in by requesting the new breakpoints when retrieving the current adaptive info:
currentWindowAdaptiveInfo(supportLargeAndXLargeWidth = true)
Getting started
Explore the connected display feature in the latest Android release. Get Android 16 QPR3 on a supported device, then connect it to an external monitor to start testing your app today!
Dive into the updated documentation on multi-display support and window management to learn more about implementing these best practices.
Feedback
Your feedback is crucial as we continue to refine the connected display desktop experience. Share your thoughts and report any issues through our official feedback channels.
We're committed to making Android a versatile platform that adapts to the many ways users want to interact with their apps and devices. The improvements to connected display support are another step in that direction, and we think your users will love the desktop experiences you'll build!
*Note: At the time this article was written, connected displays are supported on the Pixel 8, 9, and 10 series and on a wide array of Samsung devices, including the S26, Fold7, Flip7, and Tab S11.
03 Mar 2026 6:00pm GMT
Go from prompt to working prototype with Android Studio Panda 2
Android Studio Panda 2 is now stable and ready for you to use in production. This release brings new agentic capabilities to Android Studio, enabling the agent to create an entire working application from scratch with the AI-powered New Project flow, and allowing the agent to automate the manual work of dependency updates.
Whether you're building your first prototype or maintaining a large, established codebase, these updates bring new efficiency to your workflow by enabling Gemini in Android Studio to help more than ever.
Here's a deep dive into what's new:
Create New Projects with AI
Say goodbye to boilerplate starter templates that just get you to the start line. With the AI-powered New Project flow, you can now build a working app prototype with just a single prompt.
The agent reduces the time you spend setting up dependencies, writing boilerplate code, and creating basic navigation, allowing you to focus on the creative aspects of app development. The AI-powered New Project flow allows you to describe exactly what you want to build; you can even upload images for style inspiration. The agent then creates a detailed project plan for your review.
When you're ready, the agent turns your plan into a first draft of your app using Android best practices, including Kotlin, Compose, and the latest stable libraries. Under your direction, it creates an autonomous generation loop: it generates the necessary code, builds the project, analyzes any build errors, and attempts to self-correct the code, looping until your project builds successfully. It then deploys your app to an Android Emulator and walks through each screen, verifying that the implementation works correctly and is true to your original request. Whether you need a simple single-screen layout, a multi-page app with navigation, or even an application integrated with Gemini APIs, the AI-powered New Project flow can handle it.
Getting Started
To use the agent to set up a project, do the following:
1. Start Android Studio.
2. Select New Project on the Welcome to Android Studio screen (or File > New > New Project from within a project).
3. Select Create with AI.
4. Type your prompt into the text entry field and click Next. For best results we recommend using a paid Gemini API key or third-party remote model.
5. Name your app and click Finish to start the generation process.
6. Validate the finished app using the project plan and by running your app in the Android Emulator or on an Android device.
For more details on the New Project flow, check out the official documentation.
Share What You Build
We want to hear from you and see the apps you're able to build using the New Project flow. Share your apps with us by using #AndroidStudio in your social posts. We'll be amplifying some of your submissions on our social channels.
Unlock more with your Gemini API key
While the agent works out of the box using Android Studio's default no-cost model, providing your own Google AI Studio API key unlocks the full potential of the assistant. By connecting a paid Gemini API key, you get access to the fastest and latest models from Google. It also allows the New Project flow to access Nano Banana, our best model for image generation, to ideate on UI design, allowing the agent to create richer, higher-fidelity application designs.
In the AI-powered New Project flow, this increased capability means larger context windows for more tailored generation, as well as superior code quality. Furthermore, because the agent uses Nano Banana behind the scenes for enhanced design generation, your prototype doesn't just work well; it features visually appealing, modern UI layouts and looks professional from the get-go.
Version Upgrade Assistant
Keeping your project dependencies up to date is time-consuming and often causes cascading build errors. You fix one issue by updating a dependency, only to introduce a new issue somewhere else.
The Version Upgrade Assistant in Android Studio just made that a problem of the past. You can now let AI do the heavy lifting of managing dependencies and boilerplate so you can focus on creating unique experiences for your users.
To use this feature, simply right-click in your version catalog, select AI, and then Update Dependencies.
You can also access the Version Upgrade Assistant from the Refactor menu: just choose Update all libraries with AI.
The agent runs multiple automated rounds, attempting builds, reading error messages, and adjusting versions, until the build succeeds. Instead of manually fighting through dependency conflicts, you can let the agent handle the iterative process of finding a stable configuration for you. Read the documentation for more information on the Version Upgrade Assistant.
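In the abstract, that attempt-inspect-adjust cycle is a bounded convergence loop. A sketch in plain Kotlin (all names hypothetical; the assistant's actual implementation is internal to Android Studio):

```kotlin
// Abstract sketch of an attempt-inspect-adjust loop: try a configuration,
// and if the build fails, derive a new candidate from the failure, up to a cap.
fun <C> converge(
    initial: C,
    build: (C) -> Boolean,   // returns true when the build succeeds
    adjust: (C) -> C,        // proposes a new configuration after a failure
    maxRounds: Int = 5,
): C? {
    var config = initial
    repeat(maxRounds) {
        if (build(config)) return config
        config = adjust(config)
    }
    return null // no stable configuration found within the round budget
}
```

Bounding the rounds matters: if no stable configuration exists, the loop surfaces that instead of spinning forever.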
Gemini 3.1 Pro is available in Android Studio
We released Gemini 3.1 Pro preview, and it is even better than Gemini 3 Pro for reasoning and intelligence. You can access it in Android Studio by plugging in your Gemini API key. Put the new model to work on your toughest bugs, code completion, and UI logic. Let us know what you think of the new model.
Get started
Dive in and accelerate your development. Download Android Studio Panda 2 and start exploring these powerful new agentic features today.
03 Mar 2026 2:00pm GMT
02 Mar 2026
Android Developers Blog
Supercharge your Android development with 6 expert tips for Gemini in Android Studio

In January we announced the Android Studio Otter 3 Feature Drop in stable, including Agent Mode enhancements and many other updates that provide more control and flexibility over using AI to help you build high-quality Android apps. To help you get the most out of Gemini in Android Studio and all the new capabilities, we sat down with Google engineers and Google Developer Experts to gather their best practices for working with the latest features, including Agent Mode and the New Project Assistant. Here are some useful insights to help you get the best out of your development:
1. Build apps from scratch with the New Project Assistant
The New Project Assistant, now available in the latest Canary builds, integrates Gemini with Android Studio's New Project wizard. By simply providing prompts and (optionally) design mockups, you can generate entire applications from scratch, including scaffolding, architecture, and Jetpack Compose layouts.
Integrated with the Android Emulator, it can deploy your build and "walk through" the app, making sure it's functioning correctly and that the rendered screens actually match your vision. Additionally, you can use Agent Mode to then continue to work on the app and iterate, leveraging Gemini to refine your app to fit your vision.
Also, while this feature works with the default (no-cost) model, we highly recommend using it with an AI Studio API key to access the latest models, like Gemini 3.1 Pro or 3.0 Flash, which excel at agentic workflows. Additionally, adding your API key allows the New Project Assistant to use Nano Banana behind the scenes to help with ideating on UI design, improving the visual fidelity of the generated application! - Trevor Johns, Developer Relations Engineer.
Dialog for setting up a new project.
2. Ask the Agent to refine your code by providing it with 'intentional' contexts
When using Gemini Agents, the quality of the output is directly tied to the boundaries you set. Don't just ask it to "fix this code"; be very intentional with the context that you provide and be specific about what you want (and what you don't). Improve the output by providing recent blogs or docs so the model can make accurate suggestions based on them.
Ask the Agent to simplify complex logic, ask whether it sees any fundamental problems with it, or ask it to scan for security risks in areas where you feel uncertain. Being firm with your instructions, even telling the model "please do not invent things" when you are using very new or experimental APIs, helps keep the AI focused on the outputs you are trying to achieve. - Alejandra Stamato, Android Google Developer Expert and Android Engineer at HubSpot.
3. Use documentation with Agent mode to provide context for new libraries
To prevent the model from hallucinating code for niche or brand-new libraries, leverage the Android Studio Agent tools that give it access to documentation: Search Android Docs and Fetch Android Docs. You can direct Gemini to search the Android Knowledge Base or specific documentation articles. The model can choose to use these if it thinks it's missing information, which is especially helpful when you use niche APIs or ones that aren't as common.
If you are certain you want the model to consult the documentation and want to make sure those tools are triggered, a good trick is to add something like 'search the official documentation' or 'check the docs' to your prompts. And for documentation on libraries that aren't Android-specific, install an MCP server that lets you access documentation, like Context7 (or something similar). - Jose Alcérreca, Android Developer Relations Engineer, Google.
4. Use AI to help build Agents.md files for using custom frameworks, libraries and design systems
To make sure the Agent uses your custom frameworks, libraries, and design systems, you have two options. 1) In settings, Android Studio allows you to specify rules to be followed when Gemini is performing these actions for you. 2) Create Agents.md files in your application that specify how things should be done and act as guidance when AI is performing a task: specific frameworks, design systems, or specific ways of doing things (such as the exact architecture, things to do, or what not to do), written as standard bullet points to give the AI clear instructions.
Manage AGENTS.md files as context.
You can place an Agents.md file at the root of the project, and you can have them in different modules (or even subdirectories) of your project as well! The more context and guidance you make available, the more the AI has to work with. If you get stuck creating these Agents.md files, you can use AI to help build them, or to give you foundations based on your existing projects which you then edit, so you don't have to start from scratch. - Joe Birch, Android Google Developer Expert and Staff Engineer at Buffer.
5. Offload the tedious tasks to Agent and save yourself time
You can get the Gemini in Android Studio agent to help you complete tasks such as writing and reviewing faster. For example, it can help write commit messages, giving you a good summary which you can then review, saving yourself time. Additionally, get it to write tests; under your direction, the Agent can look at the other tests in your project and, just by studying them, write a good test for you to run that follows best practices. Another good example of a tedious task is writing a new parser for a certain JSON format: just give Gemini a few examples and it will get you started very quickly. - Diego Perez, Android Software Engineer, Google
6. Control what you are sharing with AI using simple opt-outs or commands, alongside paid models.
If you want to control what is shared with AI while on the no-cost plans, you can opt some or all of your code out of model training by adding an AI exclusions file ('.aiexclude') to your project. This file uses glob pattern matching similar to a .gitignore file, specifying sensitive directories or files that should be hidden from the AI. You can place .aiexclude files anywhere within the project and its VCS roots to control which files AI features are allowed to access.
An example of an `.aiexclude` file in Android Studio.
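A sketch of what such a file might contain (the paths and patterns below are invented for illustration; the syntax follows the .gitignore-style globs described above):

```
# .aiexclude — hypothetical example
# Keep credential material away from AI features
*.pem
secrets/
# Hide a proprietary module entirely
app/src/main/java/com/example/proprietary/
```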
Alternatively, in Android Studio settings, you can also opt out of context sharing either on a per project or per user basis (although this method limits the functionality of a number of features because the AI won't see your code).
Remember, paid plans never use your code for model training. This includes both users using an AI Studio API Key, and businesses who are subscribed to Gemini Code Assist. - Trevor Johns, Developer Relations Engineer.
Hear more from the Android team and Google Developer Experts about Gemini in Android Studio in our recent fireside chat and download Android Studio to get started.
02 Mar 2026 2:00pm GMT
26 Feb 2026
Android Developers Blog
The Second Beta of Android 17

Today we're releasing the second beta of Android 17, continuing our work to build a platform that prioritizes privacy, security, and refined performance. This update delivers a range of new capabilities, including the EyeDropper API and a privacy-preserving Contacts Picker. We're also adding advanced ranging, cross-device handoff APIs, and more.
This release continues the shift in our release cadence, following this annual major SDK release in Q2 with a minor SDK update later in the year.
User Experience & System UI
Bubbles
Bubbles is a windowing mode feature that offers a new floating UI experience separate from the messaging bubbles API. Users can create an app bubble on their phone, foldable, or tablet by long-pressing an app icon on the launcher. On large screens, there is a bubble bar as part of the taskbar where users can organize, move between, and move bubbles to and from anchored points on the screen.
You should follow the guidelines for supporting multi-window mode to ensure your apps work correctly as bubbles.
Bubbles aren't yet fully enabled in Beta 2. Look for them in a future build of Android 17.
EyeDropper API
A new system-level EyeDropper API allows your app to request a color from any pixel on the display without requiring sensitive screen capture permissions.

val eyeDropperLauncher = registerForActivityResult(
    ActivityResultContracts.StartActivityForResult()
) { result ->
    if (result.resultCode == Activity.RESULT_OK) {
        val color = result.data?.getIntExtra(Intent.EXTRA_COLOR, Color.BLACK)
        // Use the picked color in your app
    }
}

fun launchColorPicker() {
    val intent = Intent(Intent.ACTION_OPEN_EYE_DROPPER)
    eyeDropperLauncher.launch(intent)
}
Contacts Picker
A new system-level contacts picker, launched via ACTION_PICK_CONTACTS, grants temporary, session-based read access to only the specific data fields requested by the user, reducing the need for the broad READ_CONTACTS permission. It also allows for selections from the device's personal or work profiles.

val contactPicker = rememberLauncherForActivityResult(StartActivityForResult()) {
    if (it.resultCode == RESULT_OK) {
        val uri = it.data?.data ?: return@rememberLauncherForActivityResult
        // Handle result logic
        processContactPickerResults(uri)
    }
}

val dataFields = arrayListOf(Email.CONTENT_ITEM_TYPE, Phone.CONTENT_ITEM_TYPE)
val intent = Intent(ACTION_PICK_CONTACTS).apply {
    putStringArrayListExtra(EXTRA_PICK_CONTACTS_REQUESTED_DATA_FIELDS, dataFields)
    putExtra(EXTRA_ALLOW_MULTIPLE, true)
    putExtra(EXTRA_PICK_CONTACTS_SELECTION_LIMIT, 5)
}
contactPicker.launch(intent)

Easier pointer capture compatibility with touchpads
Previously, touchpads reported events in a very different way from mice when an app had captured the pointer, reporting the locations of fingers on the pad rather than the relative movements that would be reported by a mouse. This made it quite difficult to support touchpads properly in first-person games. Now, by default the system will recognize pointer movement and scrolling gestures when the touchpad is captured, and report them just like mouse events. You can still request the old, detailed finger location data by explicitly requesting capture in the new "absolute" mode.
// To request the new default relative mode (mouse-like events).
// This is the same as requesting with View.POINTER_CAPTURE_MODE_RELATIVE.
view.requestPointerCapture()

// To request the legacy absolute mode (raw touch coordinates).
view.requestPointerCapture(View.POINTER_CAPTURE_MODE_ABSOLUTE)
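The difference between the two modes can be sketched in plain Kotlin: absolute mode hands you finger positions, while relative mode is roughly equivalent to differencing consecutive samples (a simplification of what the system now reports by default):

```kotlin
// Convert a stream of absolute touchpad positions into mouse-like relative
// deltas: each event becomes the movement since the previous sample.
fun toRelativeDeltas(absolute: List<Pair<Float, Float>>): List<Pair<Float, Float>> =
    absolute.zipWithNext { (x0, y0), (x1, y1) -> (x1 - x0) to (y1 - y0) }
```

This is why the new default suits first-person games: the app consumes motion, not pad coordinates, exactly as it would from a mouse.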
Connectivity & Cross-Device
Cross-device app handoff
A new Handoff API allows you to specify application state to be resumed on another device, such as an Android tablet. When opted in, the system synchronizes state via CompanionDeviceManager and displays a handoff suggestion in the launcher of the user's nearby devices. This feature is designed to offer seamless task continuity, enabling users to pick up exactly where they left off in their workflow across their Android ecosystem. Critically, Handoff supports both native app-to-app transitions and app-to-web fallback, providing maximum flexibility and ensuring a complete experience even if the native app is not installed on the receiving device.
Advanced ranging APIs
We are adding support for two new ranging technologies:
-
UWB DL-TDOA, which enables apps to use UWB for indoor navigation. This API surface is compliant with the FiRa (Fine Ranging) Consortium 4.0 DL-TDOA spec and enables privacy-preserving indoor navigation (avoiding tracking of the device by the anchor).
-
Proximity Detection, which enables apps to use the new ranging specification being adopted by the Wi-Fi Alliance (WFA). This technology provides improved reliability and accuracy compared to the existing Wi-Fi Aware based ranging specification.
Data plan enhancements
To optimize media quality, your app can now retrieve carrier-allocated maximum data rates for streaming applications using getStreamingAppMaxDownlinkKbps and getStreamingAppMaxUplinkKbps.
Core Functionality, Privacy & Performance
Local Network Access
Android 17 introduces the ACCESS_LOCAL_NETWORK runtime permission to protect users from unauthorized local network access. Because this falls under the existing NEARBY_DEVICES permission group, users who have already granted other NEARBY_DEVICES permissions will not be prompted again. By declaring and requesting this permission, your app can discover and connect to devices on the local area network (LAN), such as smart home devices or casting receivers. This prevents malicious apps from exploiting unrestricted local network access for covert user tracking and fingerprinting. Apps targeting Android 17 or higher will now have two paths to maintain communication with LAN devices: adopt system-mediated, privacy-preserving device pickers to skip the permission prompt, or explicitly request this new permission at runtime to maintain local network communication.
Time zone offset change broadcast
Android now provides a reliable broadcast intent, ACTION_TIMEZONE_OFFSET_CHANGED, triggered when the system's time zone offset changes, such as during Daylight Saving Time transitions. This complements the existing broadcast intents ACTION_TIME_CHANGED and ACTION_TIMEZONE_CHANGED, which are triggered when the Unix timestamp changes and when the time zone ID changes, respectively.
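To make the distinction concrete, here is a small sketch using `java.time` showing the condition the new broadcast reports: across a Daylight Saving Time transition the UTC offset changes while the zone ID stays the same. The class and method names here are illustrative, not part of the Android API.

```java
import java.time.ZoneId;
import java.time.ZoneOffset;
import java.time.ZonedDateTime;

public class OffsetChangeDemo {
    // Returns the UTC offsets just before and after the 2026 spring-forward
    // transition in America/Los_Angeles. The zone ID never changes, but the
    // offset does - the situation ACTION_TIMEZONE_OFFSET_CHANGED describes.
    public static ZoneOffset[] offsetsAroundSpringForward() {
        ZoneId zone = ZoneId.of("America/Los_Angeles");
        // US DST begins at 02:00 local time on 2026-03-08.
        ZonedDateTime before = ZonedDateTime.of(2026, 3, 8, 1, 30, 0, 0, zone);
        // plusHours operates on the instant timeline, crossing the transition.
        ZonedDateTime after = before.plusHours(1);
        return new ZoneOffset[] { before.getOffset(), after.getOffset() };
    }

    public static void main(String[] args) {
        ZoneOffset[] offsets = offsetsAroundSpringForward();
        // Offset moves from -08:00 (PST) to -07:00 (PDT); zone ID is unchanged.
        System.out.println(offsets[0] + " -> " + offsets[1]);
    }
}
```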
NPU Management and Prioritization
Apps targeting Android 17 that need to directly access the NPU must declare FEATURE_NEURAL_PROCESSING_UNIT in their manifest to avoid being blocked from accessing the NPU. This includes apps that use the LiteRT NPU delegate, vendor-specific SDKs, as well as the deprecated NNAPI.
Core internationalization libraries have been updated to ICU 78, expanding support for new scripts, characters, and emoji blocks, and enabling direct formatting of time objects.
SMS OTP protection
Android is expanding its SMS OTP protection by automatically delaying access to SMS messages containing OTPs. Previously, the protection focused primarily on the SMS Retriever format: delivery of messages containing an SMS Retriever hash is delayed for most apps for three hours, while certain apps, such as the default SMS app and the app corresponding to the hash, are exempt from this delay. This update extends the protection to all SMS messages with OTPs. For most apps, SMS messages containing an OTP will only be accessible after a delay of three hours to help prevent OTP hijacking. The SMS_RECEIVED_ACTION broadcast will be withheld, and SMS provider database queries will be filtered. The SMS message will become available to these apps after the delay.
Delayed access to WebOTP format SMS messages
If the app has the permission to read SMS messages but is not the intended recipient of the OTP (as determined by domain verification), the WebOTP format SMS message will only be accessible after three hours have elapsed. This change is designed to improve user security by ensuring that only apps associated with the domain mentioned in the message can programmatically read the verification code. This change applies to all apps regardless of their target API level.
Delayed access to standard SMS messages with OTP
Certain apps, such as the default SMS app, the assistant app, and connected device companion apps, will be exempt from this delay.
All apps that rely on reading SMS messages for OTP extraction should transition to using SMS Retriever or SMS User Consent APIs to ensure continued functionality.
The Android 17 schedule
We're going to be moving quickly from this Beta to our Platform Stability milestone, targeted for March. At this milestone, we'll deliver final SDK/NDK APIs. From that time forward, your app can target SDK 37 and publish to Google Play to help you complete your testing and collect user feedback in the several months before the general availability of Android 17.

A year of releases
We plan for Android 17 to continue to get updates in a series of quarterly releases. The upcoming release in Q2 is the only one where we introduce planned app breaking behavior changes. We plan to have a minor SDK release in Q4 with additional APIs and features.
Get started with Android 17
You can enroll any supported Pixel device to get this and future Android Beta updates over-the-air. If you don't have a Pixel device, you can use the 64-bit system images with the Android Emulator in Android Studio.
If you are currently in the Android Beta program, you will be offered an over-the-air update to Beta 2.
If you have Android 26Q1 Beta and would like to take the final stable release of 26Q1 and exit Beta, you need to ignore the over-the-air update to 26Q2 Beta 2 and wait for the release of 26Q1.
We're looking for your feedback so please report issues and submit feature requests on the feedback page. The earlier we get your feedback, the more we can include in our work on the final release.
For the best development experience with Android 17, we recommend that you use the latest preview of Android Studio (Panda). Once you're set up, here are some of the things you should do:
-
Compile against the new SDK, test in CI environments, and report any issues in our tracker on the feedback page.
-
Test your current app for compatibility, learn whether your app is affected by changes in Android 17, and install your app onto a device or emulator running Android 17 and extensively test it.
-
Stay enrolled in the Android Beta program so that you receive updates over-the-air for all later previews and Betas.
For complete information, visit the Android 17 developer site.
Join the conversation
As we move toward Platform Stability and the general availability of Android 17 later this year, your feedback remains our most valuable asset. Whether you're an early adopter on the Canary channel or an app developer testing on Beta 2, consider joining our communities and filing feedback. We're listening.
26 Feb 2026 9:08pm GMT
25 Feb 2026
Android Developers Blog
The Intelligent OS: Making AI agents more helpful for Android apps

Posted by Matthew McCullough, VP of Product Management, Android Development
User expectations for AI on their devices are fundamentally shifting how they interact with their apps. Instead of opening apps to do tasks step-by-step, they're asking AI to do the heavy lifting for them. In this new interaction model, success is shifting from getting users to open your app, to successfully fulfilling their tasks and helping them get more done faster.
To help you evolve your apps for this agentic future, we're introducing early stage developer capabilities that bridge the gap between your apps and agentic apps and personalized assistants, such as Google Gemini. While we are in the early, beta stages of this journey, we're designing these features with privacy and security at their core as our first step in exploring this paradigm shift as an app ecosystem.
Empowering apps with AppFunctions
Android AppFunctions allows apps to expose data and functionality directly to AI agents and assistants. With the AppFunctions Jetpack library and platform APIs, developers can create self-describing functions that agentic apps can discover and execute via natural language. Mirroring how backend capabilities are declared via MCP cloud servers, AppFunctions provides an on-device solution for Android apps. Much like WebMCP, it executes these functions locally on the device rather than on a server.
The Samsung Gallery integration with Gemini on the Galaxy S26 series showcases AppFunctions in action. Instead of manually scrolling through photo albums, you can now simply ask Gemini to "Show me pictures of my cat from Samsung Gallery." Gemini takes the user query, intelligently identifies and triggers the right function, and presents the returned photos from Samsung Gallery directly in the Gemini app, so users never need to leave. This experience is multimodal and can be done via voice or text. Users can even use the returned photos in follow-up conversations, like sending them to friends in a text message.
Enabling agentic apps with intelligent UI automation
While AppFunctions provides a structured framework and more control for apps to communicate with AI agents and assistants, we know that not every interaction has a dedicated integration yet. We're also developing a UI automation framework for AI agents and assistants to intelligently execute generic tasks on users' installed apps, with user transparency and control built in. This is the platform doing the heavy lifting, so developers can get agentic reach with zero code. It's a low-effort way to extend their reach without a major engineering lift right now.
To get feedback as we refine this framework, we're starting with an early preview on the Galaxy S26 series and select Pixel 10 devices, where users will be able to delegate multi-step tasks to Gemini with just a long press of the power button. Launching as a beta feature in the Gemini app, this will support a curated selection of apps in the food delivery, grocery, and rideshare categories in the US and Korea to start. Whether users need to place a complex pizza order for their family members with particular tastes, coordinate a multi-stop rideshare with co-workers, or reorder their last grocery purchase, Gemini can help complete tasks using the context already available from your apps, without any developer work needed.
Users are in control while a task is being actioned in the background through UI automation. For any automation action, users have the option to monitor a task's progress via notifications or "live view" and can switch to manual control at any point to take over the experience. Gemini is also designed to alert users before completing sensitive tasks, such as making a purchase.
Looking ahead
In Android 17, we're looking to broaden these capabilities to reach even more users, developers, and device manufacturers.
We are currently building experiences with a small set of app developers, focusing on high-quality user experiences as the ecosystem evolves. We plan to share more details later this year on how you can use AppFunctions and UI automation to enable agentic integrations for your app. Stay tuned for updates.
25 Feb 2026 11:47pm GMT
17 Feb 2026
Android Developers Blog
Get ready for Google I/O May 19-20
Posted by The Google I/O Team
Google I/O returns May 19-20
Google I/O is back! Join us online as we share our latest AI breakthroughs and updates in products across the company, from Gemini to Android, Chrome, Cloud, and more.
Tune in to learn about agentic coding and the latest Gemini model updates. The event will feature keynote addresses from Google leaders, forward-looking panel discussions, and product demos designed to showcase the next frontier of technology.
Register now and tune in live
Visit io.google and register to receive updates about Google I/O. Kicking off May 19 at 10am PT, this year we'll be livestreaming keynotes, demos, and more sessions across two days. We'll also be bringing back the popular Dialogues sessions featuring big thinkers and bold leaders discussing how AI is shaping our future.
17 Feb 2026 8:00pm GMT
Under the hood: Android 17’s lock-free MessageQueue
Posted by Shai Barack, Android Platform Performance Lead and Charles Munger, Principal Software Engineer

In Android 17, apps targeting SDK 37 or higher will receive a new implementation of MessageQueue where the implementation is lock-free. The new implementation improves performance and reduces missed frames, but may break clients that reflect on MessageQueue private fields and methods. To learn more about the behavior change and how you can mitigate impact, check out the MessageQueue behavior change documentation. This technical blog post provides an overview of the MessageQueue rearchitecture and how you can analyze lock contention issues using Perfetto.
The Looper drives the UI thread of every Android application. It pulls work from a MessageQueue, dispatches it to a Handler, and repeats. For two decades, MessageQueue used a single monitor lock (i.e. a synchronized code block) to protect its state.
Android 17 introduces a significant update to this component: a lock-free implementation named DeliQueue.
This post explains how locks affect UI performance, how to analyze these issues with Perfetto, and the specific algorithms and optimizations used to improve the Android main thread.
The problem: Lock Contention and Priority Inversion
The legacy MessageQueue functioned as a priority queue protected by a single lock. If a background thread posts a message while the main thread performs queue maintenance, the background thread blocks the main thread.
When two or more threads compete for exclusive use of the same lock, this is called lock contention. This contention can cause priority inversion, leading to UI jank and other performance problems.
Priority inversion can happen when a high-priority thread (like the UI thread) is made to wait for a low-priority thread. Consider this sequence:
-
A low priority background thread acquires the MessageQueue lock to post the result of work that it did.
-
A medium priority thread becomes runnable and the kernel's scheduler allocates it CPU time, preempting the low priority thread.
-
The high priority UI thread finishes its current task and attempts to read from the queue, but is blocked because the low priority thread holds the lock.
The low-priority thread blocks the UI thread, and the medium-priority work delays it further.
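The core of the problem, a thread holding a lock while it is off the CPU, can be sketched with plain `java.util.concurrent` primitives. This is a minimal illustration, not Android's real MessageQueue: a background thread holds a lock across its critical section, and the "UI" thread finds the lock unavailable.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.locks.ReentrantLock;

public class ContentionDemo {
    // Sketch: while a background thread holds the queue lock (and could be
    // preempted at any point), the "UI" thread cannot acquire it.
    public static boolean uiThreadBlocked() {
        ReentrantLock queueLock = new ReentrantLock();
        CountDownLatch lockHeld = new CountDownLatch(1);
        CountDownLatch release = new CountDownLatch(1);

        Thread background = new Thread(() -> {
            queueLock.lock(); // like posting under MessageQueue's monitor
            try {
                lockHeld.countDown();
                release.await(); // stand-in for being preempted mid-section
            } catch (InterruptedException ignored) {
            } finally {
                queueLock.unlock();
            }
        });
        background.start();
        try {
            lockHeld.await();
            // The "UI" thread tries the lock and fails while the holder is parked.
            boolean blocked = !queueLock.tryLock();
            release.countDown();
            background.join();
            return blocked;
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println("UI thread blocked: " + uiThreadBlocked());
    }
}
```

In the real priority-inversion scenario, the time between `lock()` and `unlock()` stretches out because the scheduler has preempted the low-priority holder, so the blocked UI thread misses its frame deadline.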
Analyzing contention with Perfetto
You can diagnose these issues using Perfetto. In a standard trace, a thread blocked on a monitor lock enters the sleeping state, and Perfetto shows a slice indicating the lock owner.
When you query trace data, look for slices named "monitor contention with …" followed by the name of the thread that owns the lock and the code site where the lock was acquired.
Case study: Launcher jank
To illustrate, let's analyze a trace where a user experienced jank while navigating home on a Pixel phone immediately after taking a photo in the camera app. Below we see a screenshot of Perfetto showing the events leading up to the missed frame:

-
Symptom: The Launcher main thread missed its frame deadline. It blocked for 18ms, which exceeds the 16ms deadline required for 60Hz rendering.
-
Diagnosis: Perfetto showed the main thread blocked on the MessageQueue lock. A "BackgroundExecutor" thread owned the lock.
-
Root Cause: The BackgroundExecutor runs at Process.THREAD_PRIORITY_BACKGROUND (very low priority). It performed a non-urgent task (checking app usage limits). Simultaneously, medium priority threads were using CPU time to process data from the camera. The OS scheduler preempted the BackgroundExecutor thread to run the camera threads.
This sequence caused the Launcher's UI thread (high priority) to become indirectly blocked by the camera worker thread (medium priority), which was keeping the Launcher's background thread (low priority) from releasing the lock.
Querying traces with PerfettoSQL
You can use PerfettoSQL to query trace data for specific patterns. This is useful if you have a large bank of traces from user devices or tests, and you're searching for specific traces that demonstrate a problem.
For example, this query finds MessageQueue contention coincident with dropped frames (jank):
INCLUDE PERFETTO MODULE android.monitor_contention;
INCLUDE PERFETTO MODULE android.frames.jank_type;

SELECT
  process_name,
  -- Convert duration from nanoseconds to milliseconds
  SUM(dur) / 1000000 AS sum_dur_ms,
  COUNT(*) AS count_contention
FROM android_monitor_contention
WHERE is_blocked_thread_main
  AND short_blocked_method LIKE "%MessageQueue%"
  -- Only look at app processes that had jank
  AND upid IN (
    SELECT DISTINCT(upid)
    FROM actual_frame_timeline_slice
    WHERE android_is_app_jank_type(jank_type) = TRUE
  )
GROUP BY process_name
ORDER BY SUM(dur) DESC;
In this more complex example, join trace data that spans multiple tables to identify MessageQueue contention during app startup:
INCLUDE PERFETTO MODULE android.monitor_contention;
INCLUDE PERFETTO MODULE android.startup.startups;

-- Join package and process information for startups
DROP VIEW IF EXISTS startups;
CREATE VIEW startups AS
SELECT startup_id, ts, dur, upid
FROM android_startups
JOIN android_startup_processes USING(startup_id);

-- Intersect monitor contention with startups in the same process.
DROP TABLE IF EXISTS monitor_contention_during_startup;
CREATE VIRTUAL TABLE monitor_contention_during_startup
USING SPAN_JOIN(android_monitor_contention PARTITIONED upid,
                startups PARTITIONED upid);

SELECT
  process_name,
  SUM(dur) / 1000000 AS sum_dur_ms,
  COUNT(*) AS count_contention
FROM monitor_contention_during_startup
WHERE is_blocked_thread_main
  AND short_blocked_method LIKE "%MessageQueue%"
GROUP BY process_name
ORDER BY SUM(dur) DESC;
You can use your favorite LLM to write PerfettoSQL queries to find other patterns.
At Google, we use BigTrace to run PerfettoSQL queries across millions of traces. In doing so, we confirmed that what we saw anecdotally was, in fact, a systemic issue. The data revealed that MessageQueue lock contention impacts users across the entire ecosystem, substantiating the need for a fundamental architectural change.
Solution: lock-free concurrency
We addressed the MessageQueue contention problem by implementing a lock-free data structure, using atomic memory operations rather than exclusive locks to synchronize access to shared state. A data structure or algorithm is lock-free if at least one thread can always make progress regardless of the scheduling behavior of the other threads. This property is generally hard to achieve, and is usually not worth pursuing for most code.
The atomic primitives
Lock-free software often relies on atomic Read-Modify-Write primitives that the hardware provides.
On older generation ARM64 CPUs, atomics used a Load-Link/Store-Conditional (LL/SC) loop. The CPU loads a value and marks the address. If another thread writes to that address, the store fails, and the loop retries. Because the threads can keep trying and succeed without waiting for another thread, this operation is lock-free.
ARM64 LL/SC loop example
retry:
ldxr x0, [x1] // Load exclusive from address x1 to x0
add x0, x0, #1 // Increment value by 1
stxr w2, x0, [x1] // Store exclusive.
// w2 gets 0 on success, 1 on failure
cbnz w2, retry // If w2 is non-zero (failed), branch to retry
Newer ARM architectures (ARMv8.1) support Large System Extensions (LSE) which include instructions in the form of Compare-And-Swap (CAS) or Load-And-Add (demonstrated below). In Android 17 we added support to the Android Runtime (ART) compiler to detect when LSE is supported and emit optimized instructions:
// ARMv8.1 LSE atomic example
ldadd x0, x1, [x2] // Atomic load-add.
                   // Faster, no loop required.
In our benchmarks, high-contention code that uses CAS achieves a ~3x speedup over the LL/SC variant.
The Java programming language offers atomic primitives via java.util.concurrent.atomic that rely on these and other specialized CPU instructions.
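At the Java level, the LL/SC loop above corresponds to a `compareAndSet` retry loop. The following sketch is the classic lock-free increment: read, compute, and retry the CAS until no other thread raced us. This is the building block that structures like the Treiber stack are made of.

```java
import java.util.concurrent.atomic.AtomicLong;

public class CasLoopDemo {
    static final AtomicLong counter = new AtomicLong();

    // Java-level analogue of the LL/SC loop: load, modify, and retry the
    // conditional store until it succeeds.
    static long incrementLockFree() {
        long old;
        do {
            old = counter.get();
        } while (!counter.compareAndSet(old, old + 1));
        return old + 1;
    }

    public static void main(String[] args) {
        Thread[] threads = new Thread[4];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> {
                for (int j = 0; j < 10_000; j++) incrementLockFree();
            });
            threads[i].start();
        }
        try {
            for (Thread t : threads) t.join();
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        // No updates are lost despite four racing threads and no locks.
        System.out.println(counter.get());
    }
}
```

In practice `AtomicLong.getAndIncrement()` does this for you, and on ARMv8.1 devices ART can compile it down to a single `ldadd` instruction.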
The Data Structure: DeliQueue
To remove lock contention from MessageQueue, our engineers designed a novel data structure called DeliQueue. DeliQueue separates Message insertion from Message processing:
-
The list of Messages (Treiber stack): A lock-free stack. Any thread can push new Messages here without contention.
-
The priority queue (Min-heap): A heap of Messages to handle, exclusively owned by the Looper thread (hence no synchronization or locks are needed to access).
Enqueue: pushing to a Treiber stack
The list of Messages is kept in a Treiber stack [1], a lock-free stack that uses a CAS loop to update the head pointer.
public class TreiberStack<E> {
    AtomicReference<Node<E>> top = new AtomicReference<Node<E>>();

    public void push(E item) {
        Node<E> newHead = new Node<E>(item);
        Node<E> oldHead;
        do {
            oldHead = top.get();
            newHead.next = oldHead;
        } while (!top.compareAndSet(oldHead, newHead));
    }

    public E pop() {
        Node<E> oldHead;
        Node<E> newHead;
        do {
            oldHead = top.get();
            if (oldHead == null) return null;
            newHead = oldHead.next;
        } while (!top.compareAndSet(oldHead, newHead));
        return oldHead.item;
    }
}
Source code based on Java Concurrency in Practice [2], available online and released to the public domain
Any producer can push new Messages to the stack at any time. This is like pulling a ticket at a deli counter - your number is determined by when you showed up, but the order you get your food in doesn't have to match. Because it's a linked stack, every Message is a sub-stack - you can see what the Message queue was like at any point in time by tracking the head and iterating forwards - you won't see any new Messages pushed on top, even if they're being added during your traversal.
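The snapshot property described above can be demonstrated directly. This sketch (a minimal standalone Treiber-style stack modeled on the listing above, not DeliQueue itself) captures the head pointer, pushes another element, and shows that a traversal from the captured head still sees only the older entries.

```java
import java.util.concurrent.atomic.AtomicReference;

public class SnapshotDemo {
    static final class Node {
        final String item;
        Node next;
        Node(String item) { this.item = item; }
    }

    static final AtomicReference<Node> top = new AtomicReference<>();

    static void push(String item) {
        Node newHead = new Node(item);
        Node oldHead;
        do {
            oldHead = top.get();
            newHead.next = oldHead;
        } while (!top.compareAndSet(oldHead, newHead));
    }

    static int countFrom(Node head) {
        int n = 0;
        for (Node cur = head; cur != null; cur = cur.next) n++;
        return n;
    }

    // Returns {size seen from the captured head, actual size after a late push}.
    static int[] snapshotCounts() {
        push("A");
        push("B");
        Node snapshot = top.get(); // a stable view of the stack at this moment
        push("C");                 // producers keep pushing on top...
        // ...but the captured view still sees exactly the two older entries.
        return new int[] { countFrom(snapshot), countFrom(top.get()) };
    }

    public static void main(String[] args) {
        int[] counts = snapshotCounts();
        System.out.println(counts[0] + " vs " + counts[1]);
    }
}
```

Because nodes only ever link downward toward older entries, a traversal never needs to worry about concurrent pushes; it simply never encounters them.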
Dequeue: bulk transfer to a min-heap
To find the next Message to handle, the Looper processes new Messages from the Treiber stack by walking the stack starting from the top and iterating until it finds the last Message that it previously processed. As the Looper traverses down the stack, it inserts Messages into the deadline-ordered min-heap. Since the Looper exclusively owns the heap, it orders and processes Messages without locks or atomics.
In walking down the stack, the Looper also creates links from stacked Messages back to their predecessors, thus forming a doubly-linked list. Creating the linked list is safe because links pointing down the stack are added via the Treiber stack algorithm with CAS, and links up the stack are only ever read and modified by the Looper thread. These back links are then used to remove Messages from arbitrary points in the stack in O(1) time.
This design provides O(1) insertion for producers (threads posting work to the queue) and amortized O(log N) processing for the consumer (the Looper).
Using a min-heap to order Messages also addresses a fundamental flaw in the legacy MessageQueue, where Messages were kept in a singly-linked list (rooted at the top). In the legacy implementation, removal from the head was O(1), but insertion had a worst case of O(N) - scaling poorly for overloaded queues! Conversely, insertion to and removal from the min-heap scale logarithmically, delivering competitive average performance but really excelling in tail latencies.
|                  | Legacy (locked) MessageQueue | DeliQueue |
|------------------|------------------------------|-----------|
| Insert           | O(N)                         | O(1) for calling thread, O(logN) for Looper thread |
| Remove from head | O(1)                         | O(logN)   |
In the legacy queue implementation, producers and the consumer used a lock to coordinate exclusive access to the underlying singly-linked list. In DeliQueue, the Treiber stack handles concurrent access, and the single consumer handles ordering its work queue.
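The ordering rule the consumer-owned heap enforces, deadline (`when`) first, insertion sequence as the tie-breaker, can be sketched with a plain `PriorityQueue`. This is not DeliQueue's actual heap, just an illustration of the same ordering discipline.

```java
import java.util.Comparator;
import java.util.PriorityQueue;

public class HeapOrderDemo {
    // Illustrative stand-in for a Message: a deadline plus an insert sequence.
    static final class Msg {
        final long when;
        final long insertSeq;
        final String name;
        Msg(long when, long insertSeq, String name) {
            this.when = when; this.insertSeq = insertSeq; this.name = name;
        }
    }

    // Drains the heap and returns the names in processing order.
    static String drainOrder() {
        PriorityQueue<Msg> heap = new PriorityQueue<>(
                Comparator.<Msg>comparingLong(m -> m.when)
                          .thenComparingLong(m -> m.insertSeq));
        // Pushed out of order; two messages share the same deadline.
        heap.add(new Msg(200, 1, "b"));
        heap.add(new Msg(100, 2, "a1"));
        heap.add(new Msg(100, 3, "a2"));
        StringBuilder order = new StringBuilder();
        while (!heap.isEmpty()) order.append(heap.poll().name).append(' ');
        return order.toString().trim();
    }

    public static void main(String[] args) {
        // Earlier deadline wins; equal deadlines fall back to insertion order.
        System.out.println(drainOrder());
    }
}
```

Each `add` and `poll` is O(log N) regardless of how loaded the queue is, which is where the tail-latency win over the legacy O(N) sorted-list insertion comes from.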
Removal: consistency via tombstones
DeliQueue is a hybrid data structure, joining a lock-free Treiber stack with a single-threaded min-heap. Keeping these two structures in sync without a global lock presents a unique challenge: a message might be physically present in the stack but logically removed from the queue.
To solve this, DeliQueue uses a technique called "tombstoning." Each Message tracks its position in the stack via the backwards and forwards pointers, its index in the heap's array, and a boolean flag indicating whether it has been removed. When a Message is ready to run, the Looper thread will CAS its removed flag, then remove it from the heap and stack.
When another thread needs to remove a Message, it doesn't immediately extract it from the data structure. Instead, it performs the following steps:
-
Logical removal: the thread uses a CAS to atomically set the Message's removal flag from false to true. The Message remains in the data structure as evidence of its pending removal, a so-called "tombstone". Once a Message is flagged for removal, DeliQueue treats it as if it no longer exists in the queue whenever it's found.
-
Deferred cleanup: The actual removal from the data structure is the responsibility of the Looper thread, and is deferred until later. Rather than modifying the stack or heap, the remover thread adds the Message to another lock-free freelist stack.
-
Structural removal: Only the Looper can interact with the heap or remove elements from the stack. When it wakes up, it clears the freelist and processes the Messages it contained. Each Message is then unlinked from the stack and removed from the heap.
This approach keeps all management of the heap single-threaded. It minimizes the number of concurrent operations and memory barriers required, making the critical path faster and simpler.
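The logical-removal step hinges on a single CAS on the removed flag: exactly one caller wins and owns the removal, and everyone else treats the Message as already gone. A minimal sketch of that flag (illustrative names, not DeliQueue's real fields):

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class TombstoneDemo {
    static final class Msg {
        final AtomicBoolean removed = new AtomicBoolean(false);

        // Logically remove; returns true only for the single winning caller.
        boolean tryRemove() {
            return removed.compareAndSet(false, true);
        }

        // Any traversal skips tombstoned Messages as if they were gone.
        boolean isLive() {
            return !removed.get();
        }
    }

    public static void main(String[] args) {
        Msg msg = new Msg();
        boolean firstRemover = msg.tryRemove();  // wins the CAS: false -> true
        boolean secondRemover = msg.tryRemove(); // loses: flag is already true
        // Structural removal (unlinking from stack and heap) is deferred to
        // the single Looper thread, so no lock is needed at this point.
        System.out.println(firstRemover + " " + secondRemover + " " + msg.isLive());
    }
}
```

Because the CAS decides a unique winner, a remover thread and the Looper can race on the same Message without double-frees or lost removals.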
Traversal: benign Java memory model data races
Most concurrency APIs, such as Future in the Java standard library, or Kotlin's Job and Deferred, include a mechanism to cancel work before it completes. An instance of one of these classes matches 1:1 with a unit of underlying work, and calling cancel on an object cancels the specific operations associated with them.
Today's Android devices have multi-core CPUs and concurrent, generational garbage collection. But when Android was first developed, it was too expensive to allocate one object for each unit of work. Consequently, Android's Handler supports cancellation via numerous overloads of removeMessages - rather than removing a specific Message, it removes all Messages that match the specified criteria. In practice, this requires iterating through all Messages inserted before removeMessages was called and removing the ones that match.
When iterating forward, a thread only requires one ordered atomic operation, to read the current head of the stack. After that, ordinary field reads are used to find the next Message. If the Looper thread modifies the next fields while removing Messages, the Looper's write and another thread's read are unsynchronized - this is a data race. Normally, a data race is a serious bug that can cause huge problems in your app - leaks, infinite loops, crashes, freezes, and more. However, under certain narrow conditions, data races can be benign within the Java Memory Model. Suppose we start with a stack of:
A -> B -> C -> D (A at the top of the stack)
We perform an atomic read of the head, and see A. A's next pointer points to B. At the same time as we process B, the looper might remove B and C, by updating A to point to C and then D.
Even though B and C are logically removed, B retains its next pointer to C, and C to D. The reading thread continues traversing through the detached removed nodes and eventually rejoins the live stack at D.
By designing DeliQueue to handle races between traversal and removal, we allow for safe, lock-free iteration.
Quitting: Native refcount
Looper is backed by a native allocation that must be manually freed once the Looper has quit. If some other thread is adding Messages while the Looper is quitting, it could use the native allocation after it's freed, a memory safety violation. We prevent this using a tagged refcount, where one bit of the atomic is used to indicate whether the Looper is quitting.
Before using the native allocation, a thread reads the refcount atomic. If the quitting bit is set, it returns that the Looper is quitting and the native allocation must not be used. If not, it attempts a CAS to increment the number of active threads using the native allocation. After doing what it needs to, it decrements the count. If the quitting bit was set after its increment but before the decrement, and the count is now zero, then it wakes up the Looper thread.
When the Looper thread is ready to quit, it uses CAS to set the quitting bit in the atomic. If the refcount was 0, it can proceed to free its native allocation. Otherwise, it parks itself, knowing that it will be woken up when the last user of the native allocation decrements the refcount. This approach does mean that the Looper thread waits for the progress of other threads, but only when it's quitting. That only happens once and is not performance sensitive, and it keeps the other code for using the native allocation fully lock-free.
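The tagged-refcount idea packs both pieces of state into one atomic word: one bit for "quitting", the remaining bits for the count of active users. Here is a simplified sketch of that protocol (the bit layout and method names are illustrative; the real implementation also parks and wakes the Looper thread):

```java
import java.util.concurrent.atomic.AtomicLong;

public class TaggedRefcountDemo {
    // High bit marks "quitting"; low bits count threads currently using the
    // (hypothetical) native allocation.
    static final long QUITTING_BIT = 1L << 62;

    final AtomicLong state = new AtomicLong();

    // Returns false if the Looper is quitting; otherwise takes a reference.
    boolean tryAcquire() {
        while (true) {
            long cur = state.get();
            if ((cur & QUITTING_BIT) != 0) return false;
            if (state.compareAndSet(cur, cur + 1)) return true;
        }
    }

    void release() {
        state.decrementAndGet();
        // In the real implementation, the last releaser after the quitting
        // bit is set wakes the parked Looper so it can free the allocation.
    }

    // Sets the quitting bit; returns true if it is already safe to free
    // (no thread currently holds a reference).
    boolean markQuitting() {
        while (true) {
            long cur = state.get();
            if (state.compareAndSet(cur, cur | QUITTING_BIT)) {
                return (cur & ~QUITTING_BIT) == 0;
            }
        }
    }
}
```

Reading the count and the quitting bit in one atomic load is what makes the check-then-increment in `tryAcquire` safe: a CAS can only succeed if neither changed in between.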
There are plenty of other tricks and subtleties in the implementation. You can learn more about DeliQueue by reviewing the source code.
Optimization: branchless programming
While developing and testing DeliQueue, the team ran many benchmarks and carefully profiled the new code. One issue identified using the simpleperf tool was pipeline flushes caused by the Message comparator code.
A standard comparator uses conditional jumps, with the condition for deciding which Message comes first simplified below:
static int compareMessages(@NonNull Message m1, @NonNull Message m2) {
    if (m1 == m2) {
        return 0;
    }
    // Primary queue order is by when.
    // Messages with an earlier when should come first in the queue.
    final long whenDiff = m1.when - m2.when;
    if (whenDiff > 0) return 1;
    if (whenDiff < 0) return -1;
    // Secondary queue order is by insert sequence.
    // If two messages were inserted with the same `when`, the one inserted
    // first should come first in the queue.
    final long insertSeqDiff = m1.insertSeq - m2.insertSeq;
    if (insertSeqDiff > 0) return 1;
    if (insertSeqDiff < 0) return -1;
    return 0;
}
This code compiles to conditional jumps (b.le and cbnz instructions). When the CPU encounters a conditional branch, it can't know whether the branch is taken until the condition is computed, so it doesn't know which instruction to read next, and has to guess, using a technique called branch prediction. In a case like binary search, the branch direction will be unpredictably different at each step, so it's likely that half the predictions will be wrong. Branch prediction is often ineffective in searching and sorting algorithms (such as the one used in a min-heap), because the cost of guessing wrong is larger than the improvement from guessing correctly. When the branch predictor guesses wrong, it must throw away the work it did after assuming the predicted value, and start again from the path that was actually taken - this is called a pipeline flush.
To find this issue, we profiled our benchmarks using the branch-misses performance counter, which records stack traces where the branch predictor guesses wrong. We then visualized the results with Google pprof, as shown below:
Recall that the original MessageQueue code used a singly-linked list for the ordered queue. Insertion would traverse the list in sorted order as a linear search, stopping at the first element that's past the point of insertion and linking the new Message ahead of it. Removal from the head simply required unlinking the head. Whereas DeliQueue uses a min-heap, where mutations require reordering some elements (sifting up or down) with logarithmic complexity in a balanced data structure, where any comparison has an even chance of directing the traversal to a left child or to a right child. The new algorithm is asymptotically faster, but exposes a new bottleneck as the search code stalls on branch misses half the time.
Realizing that branch misses were slowing down our heap code, we optimized the code using branch-free programming:
// Branchless Logic
static int compareMessages(@NonNull Message m1, @NonNull Message m2) {
    final long when1 = m1.when;
    final long when2 = m2.when;
    final long insertSeq1 = m1.insertSeq;
    final long insertSeq2 = m2.insertSeq;
    // signum returns the sign (-1, 0, 1) of the argument,
    // and is implemented as pure arithmetic:
    // ((num >> 63) | (-num >>> 63))
    final int whenSign = Long.signum(when1 - when2);
    final int insertSeqSign = Long.signum(insertSeq1 - insertSeq2);
    // whenSign takes precedence over insertSeqSign,
    // so the formula below is such that insertSeqSign only matters
    // as a tie-breaker if whenSign is 0.
    return whenSign * 2 + insertSeqSign;
}
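A quick way to convince yourself the two comparators agree is to check that their results always have the same sign: `whenSign * 2 + insertSeqSign` ranges over -3..3, but only its sign matters to the heap. The following standalone check uses simplified versions of both comparators over plain (when, insertSeq) pairs:

```java
public class ComparatorCheck {
    // Simplified branchy comparator, matching the original logic.
    static int branchy(long when1, long seq1, long when2, long seq2) {
        long whenDiff = when1 - when2;
        if (whenDiff > 0) return 1;
        if (whenDiff < 0) return -1;
        long seqDiff = seq1 - seq2;
        if (seqDiff > 0) return 1;
        if (seqDiff < 0) return -1;
        return 0;
    }

    // Simplified branchless comparator, matching the optimized logic.
    static int branchless(long when1, long seq1, long when2, long seq2) {
        int whenSign = Long.signum(when1 - when2);
        int seqSign = Long.signum(seq1 - seq2);
        // whenSign * 2 dominates seqSign, so seqSign only breaks ties.
        return whenSign * 2 + seqSign;
    }

    public static void main(String[] args) {
        long[] vals = {0, 1, 2, 5};
        for (long w1 : vals) for (long s1 : vals)
            for (long w2 : vals) for (long s2 : vals) {
                int a = branchy(w1, s1, w2, s2);
                int b = branchless(w1, s1, w2, s2);
                if (Integer.signum(a) != Integer.signum(b))
                    throw new AssertionError("comparators disagree");
            }
        System.out.println("comparators agree");
    }
}
```

(This keeps the comparison honest: whenSign of +1 or -1 yields a result of magnitude at least 1 regardless of the tie-breaker, so the secondary key can never override the primary one.)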
To understand the optimization, disassemble the two examples in Compiler Explorer and use LLVM-MCA, a CPU simulator that can generate an estimated timeline of CPU cycles.
The original code:

Index 01234567890123
[0,0] DeER . . . sub x0, x2, x3
[0,1] D=eER. . . cmp x0, #0
[0,2] D==eER . . cset w0, ne
[0,3] .D==eER . . cneg w0, w0, lt
[0,4] .D===eER . . cmp w0, #0
[0,5] .D====eER . . b.le #12
[0,6] . DeE---R . . mov w1, #1
[0,7] . DeE---R . . b #48
[0,8] . D==eE-R . . tbz w0, #31, #12
[0,9] . DeE--R . . mov w1, #-1
[0,10] . DeE--R . . b #36
[0,11] . D=eE-R . . sub x0, x4, x5
[0,12] . D=eER . . cmp x0, #0
[0,13] . D==eER. . cset w0, ne
[0,14] . D===eER . cneg w0, w0, lt
[0,15] . D===eER . cmp w0, #0
[0,16] . D====eER. csetm w1, lt
[0,17] . D===eE-R. cmp w0, #0
[0,18] . .D===eER. csinc w1, w1, wzr, le
[0,19] . .D====eER mov x0, x1
[0,20] . .DeE----R ret
Note the one conditional branch, b.le, which avoids comparing the insertSeq fields if the result is already known from comparing the when fields.
The branchless code:

Index 012345678
[0,0] DeER . . sub x0, x2, x3
[0,1] DeER . . sub x1, x4, x5
[0,2] D=eER. . cmp x0, #0
[0,3] .D=eER . cset w0, ne
[0,4] .D==eER . cneg w0, w0, lt
[0,5] .DeE--R . cmp x1, #0
[0,6] . DeE-R . cset w1, ne
[0,7] . D=eER . cneg w1, w1, lt
[0,8] . D==eeER add w0, w1, w0, lsl #1
[0,9] . DeE--R ret
Here, the branchless implementation takes fewer cycles and instructions than even the shortest path through the branchy code - it's better in all cases. The faster implementation plus the elimination of mispredicted branches resulted in a 5x improvement in some of our benchmarks!
However, this technique is not always applicable. Branchless approaches generally require doing work that will be thrown away, and if the branch is predictable most of the time, that wasted work can slow your code down. In addition, removing a branch often introduces a data dependency: modern CPUs execute multiple operations per cycle, but they can't execute an instruction until its inputs from previous instructions are ready. With a branch, in contrast, the CPU can speculate past it and keep working ahead, as long as the prediction is correct.
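As a generic illustration of the tradeoff (not DeliQueue code), here is a branch-free max whose subtract-and-mask chain forms exactly such a data dependency; it is only valid when a - b cannot overflow:

```java
// Branch-free max(a, b): the subtraction/mask chain is a serial data
// dependency, whereas a well-predicted branch could be speculated past.
// Assumes a - b does not overflow an int.
final class BranchFree {
    static int max(int a, int b) {
        int diff = a - b;
        int mask = diff >> 31;       // all ones if a < b, else all zeros
        return a - (diff & mask);    // a - (a - b) = b when a < b, else a
    }
}
```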
Testing and Validation
Validating the correctness of lock-free algorithms is notoriously difficult!
In addition to standard unit tests for continuous validation during development, we also wrote rigorous stress tests to verify queue invariants and to attempt to induce data races if they existed. In our test labs we could run millions of test instances on emulated devices and on real hardware.
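A heavily simplified sketch of such a stress test, using java.util.concurrent.PriorityBlockingQueue as a stand-in for DeliQueue (the real harness and invariants are not shown here): spawn many producers, release them together to maximize contention, then verify that nothing was lost and that drain order is sorted.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.PriorityBlockingQueue;
import java.util.concurrent.ThreadLocalRandom;

// Simplified stress-test sketch: concurrent producers, then verify that
// draining the queue yields elements in non-decreasing order.
final class QueueStressTest {
    static boolean run(int threads, int perThread) {
        try {
            PriorityBlockingQueue<Long> queue = new PriorityBlockingQueue<>();
            CountDownLatch start = new CountDownLatch(1);
            CountDownLatch done = new CountDownLatch(threads);
            for (int t = 0; t < threads; t++) {
                new Thread(() -> {
                    try {
                        start.await();               // release all at once
                        for (int i = 0; i < perThread; i++) {
                            queue.offer(ThreadLocalRandom.current().nextLong(1_000));
                        }
                    } catch (InterruptedException ignored) {
                    } finally {
                        done.countDown();
                    }
                }).start();
            }
            start.countDown();
            done.await();
            // Invariant 1: no insertions were lost.
            if (queue.size() != threads * perThread) return false;
            // Invariant 2: drain order is non-decreasing.
            long prev = Long.MIN_VALUE;
            Long head;
            while ((head = queue.poll()) != null) {
                if (head < prev) return false;
                prev = head;
            }
            return true;
        } catch (InterruptedException e) {
            return false;
        }
    }
}
```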
With Java ThreadSanitizer (JTSan) instrumentation, we could use the same tests to also detect some data races in our code. JTSan did not find any problematic data races in DeliQueue, but, surprisingly, it did detect two concurrency bugs in the Robolectric framework, which we promptly fixed.
To improve our debugging capabilities, we built new analysis tools. Below is an example showing an issue in Android platform code where one thread is overloading another thread with Messages, causing a large backlog, visible in Perfetto thanks to the MessageQueue instrumentation feature that we added.
To enable MessageQueue tracing in the system_server process, include the following in your Perfetto configuration:
data_sources {
config {
name: "track_event"
target_buffer: 0 # Change this per your buffers configuration
track_event_config {
enabled_categories: "mq"
}
}
}
Impact
DeliQueue improves system and app performance by eliminating locks from MessageQueue.
- Synthetic benchmarks: multi-threaded insertions into busy queues are up to 5,000x faster than with the legacy MessageQueue, thanks to improved concurrency (the Treiber stack) and faster insertions (the min-heap).
- In Perfetto traces acquired from internal beta testers, we see a 15% reduction in app main thread time spent in lock contention.
- On the same test devices, the reduced lock contention leads to significant improvements to the user experience, such as:
  - 4% fewer missed frames in apps.
  - 7.7% fewer missed frames in System UI and Launcher interactions.
  - 9.1% less time from app startup to the first frame drawn, at the 95th percentile.
Next steps
DeliQueue is rolling out to apps in Android 17. App developers should review preparing your app for the new lock-free MessageQueue on the Android Developers blog to learn how to test their apps.
References
[1] Treiber, R.K., 1986. Systems programming: Coping with parallelism. International Business Machines Incorporated, Thomas J. Watson Research Center.
[2] Goetz, B., Peierls, T., Bloch, J., Bowbeer, J., Holmes, D., & Lea, D. (2006). Java Concurrency in Practice. Addison-Wesley Professional.
17 Feb 2026 4:00pm GMT
13 Feb 2026
Android Developers Blog
Prepare your app for the resizability and orientation changes in Android 17
Posted by Miguel Montemayor, Developer Relations Engineer, Android
With the release of Android 16 in 2025, we shared our vision for a device ecosystem where apps adapt seamlessly to any screen-whether it's a phone, foldable, tablet, desktop, car display, or XR. Users expect their apps to work everywhere. Whether multitasking on a tablet, unfolding a device to read comfortably, or running apps in a desktop windowing environment, users expect the UI to fill the available display space and adapt to the device posture.
We introduced significant changes to orientation and resizability APIs to facilitate adaptive behavior, while providing a temporary opt-out to help you make the transition. We've already seen many developers successfully adapt to this transition when targeting API level 36.
Now with the release of the Android 17 Beta, we're moving to the next phase of our adaptive roadmap: Android 17 (API level 37) removes the developer opt-out for orientation and resizability restrictions on large screen devices (sw ≥ 600 dp). When you target API level 37, your app must be capable of adapting to a variety of display sizes.
The behavior changes ensure that the Android ecosystem offers a consistent, high-quality experience on all device form factors.
What's changing in Android 17
Apps targeting Android 17 must ensure their compatibility with the phase-out of manifest attributes and runtime APIs introduced in Android 16. We understand this may be a big transition for some apps, so later in this blog post we've included best practices and tools to help you avoid common issues.
No new changes have been introduced since Android 16, but the developer opt-out is no longer possible. As a reminder: when your app is running on a large screen-where large screen means that the smaller dimension of the display is greater than or equal to 600 dp-the following manifest attributes and APIs are ignored:
Note: As previously mentioned with Android 16, these changes do not apply for screens that are smaller than sw 600 dp or apps categorized as games based on the android:appCategory flag.
| Manifest attributes/API | Ignored values |
| screenOrientation | portrait, reversePortrait, sensorPortrait, userPortrait, landscape, reverseLandscape, sensorLandscape, userLandscape |
| setRequestedOrientation() | portrait, reversePortrait, sensorPortrait, userPortrait, landscape, reverseLandscape, sensorLandscape, userLandscape |
| resizeableActivity | all |
| minAspectRatio | all |
| maxAspectRatio | all |
Also, users retain control. In the aspect ratio settings, users can explicitly opt-in to using the app's requested behavior.
Prepare your app
Apps will need to support landscape and portrait layouts across the full range of display sizes and aspect ratios in which users can choose to run apps, including resizable windows, as there will no longer be a way to restrict the orientation or aspect ratio to portrait or landscape.
Test your app
Your first step is to test your app with these changes to make sure the app works well across display sizes.
Use Android 17 Beta 1 with the Pixel Tablet and Pixel Fold series emulators in Android Studio, and set the targetSdkPreview = "CinnamonBun". Alternatively, you can use the app compatibility framework by enabling the UNIVERSAL_RESIZABLE_BY_DEFAULT flag if your app does not target API level 36 yet.
We have additional tools to ensure your layouts adapt correctly. You can automatically audit your UI and get suggestions to make your UI more adaptive with Compose UI Check, and simulate specific display characteristics in your tests using DeviceConfigurationOverride.
For apps that have historically restricted orientation and aspect ratio, we commonly see issues with skewed or misoriented camera previews, stretched layouts, inaccessible buttons, or loss of user state when handling configuration changes.
Let's take a look at some strategies for addressing these common issues.
Ensure camera compatibility
A common problem, on landscape foldables and in scenarios where the window's aspect ratio varies (multi-window, desktop windowing, or connected displays), is a camera preview that appears stretched, rotated, or cropped.
Ensure your camera preview isn't stretched or rotated.
This issue often happens on large screen and foldable devices because apps assume fixed relationships between camera features (like aspect ratio and sensor orientation) and device features (like device orientation and natural orientation).
To ensure your camera preview adapts correctly to any window size or orientation, consider these four solutions:
Solution 1: Jetpack CameraX (preferred)
The simplest and most robust solution is to use the Jetpack CameraX library. Its PreviewView UI element is designed to handle all preview complexities automatically:
- PreviewView correctly adjusts for sensor orientation, device rotation, and scaling
- PreviewView maintains the aspect ratio of the camera image, typically by centering and cropping (FILL_CENTER)
- You can set the scale type to FIT_CENTER to letterbox the preview if needed
For more information, see Implement a preview in the CameraX documentation.
Solution 2: CameraViewfinder
If you are using an existing Camera2 codebase, the CameraViewfinder library (backward compatible to API level 21) is another modern solution. It simplifies displaying the camera feed by using a TextureView or SurfaceView and applying all the necessary transformations (aspect ratio, scale, and rotation) for you.
For more information, see the Introducing Camera Viewfinder blog post and Camera preview developer guide.
Solution 3: Manual Camera2 implementation
If you can't use CameraX or CameraViewfinder, you must manually calculate the orientation and aspect ratio and ensure the calculations are updated on each configuration change:
- Get the camera sensor orientation (for example, 0, 90, 180, 270 degrees) from CameraCharacteristics
- Get the device's current display rotation (for example, 0, 90, 180, 270 degrees)
- Use the camera sensor orientation and display rotation values to determine the necessary transformations for your SurfaceView or TextureView
- Ensure the aspect ratio of your output Surface matches the aspect ratio of the camera preview to prevent distortion
Important: Note the camera app might be running in a portion of the screen, either in multi-window or desktop windowing mode or on a connected display. For this reason, screen size should not be used to determine the dimensions of the camera viewfinder; use window metrics instead. Otherwise you risk a stretched camera preview.
For more information, see the Camera preview developer guide and Your Camera app on different form factors video.
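The sensor orientation and display rotation combine into a single relative rotation for the preview transform. A plain-Java sketch of that calculation (the sign convention for front-facing lenses follows the Camera preview developer guide; the helper name and inputs are illustrative):

```java
// Combine camera sensor orientation with display rotation into the rotation
// to apply to the preview transform. Front-facing lenses are mirrored, so
// the display rotation enters with the opposite sign.
final class PreviewRotation {
    static int relativeRotation(int sensorOrientationDegrees,
                                int displayRotationDegrees,
                                boolean frontFacing) {
        int sign = frontFacing ? 1 : -1;
        return (sensorOrientationDegrees - displayRotationDegrees * sign + 360) % 360;
    }
}
```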
Solution 4: Perform basic camera actions using an Intent
If you don't need many camera features, a simple and straightforward solution is to perform basic camera actions like capturing a photo or video using the device's default camera application. In this case, you can simply use an Intent instead of integrating with a camera library, for easier maintenance and adaptability.
For more information, see Camera intents.
Avoid stretched UI or inaccessible buttons
If your app assumes a specific device orientation or display aspect ratio, the app may run into issues when it's now used across various orientations or window sizes.
Ensure buttons, textfields, and other elements aren't stretched on large screens.
You may have set buttons, text fields, and cards to fillMaxWidth or match_parent. On a phone, this looks great. However, on a tablet or foldable in landscape, UI elements stretch across the entire large screen. In Jetpack Compose, you can use the widthIn modifier to set a maximum width for components to avoid stretched content:
Box(
contentAlignment = Alignment.Center,
modifier = Modifier.fillMaxSize()
) {
Column(
modifier = Modifier
.widthIn(max = 300.dp) // Prevents stretching beyond 300dp
.fillMaxWidth() // Fills width up to 300dp
.padding(16.dp)
) {
// Your content
}
}
If a user opens your app in landscape orientation on a foldable or tablet, action buttons like Save or Login at the bottom of the screen may be rendered offscreen. If the container is not scrollable, the user can be blocked from proceeding. In Jetpack Compose, you can add a verticalScroll modifier to your component:
Column(
modifier = Modifier
.fillMaxSize()
.verticalScroll(rememberScrollState())
.padding(16.dp)
)
By combining max-width constraints with vertical scrolling, you ensure your app remains functional and usable, regardless of how wide or short the app window size becomes.
See our guide on building adaptive layouts.
Preserve state with configuration changes
Removing orientation and aspect ratio restrictions means your app's window size will change much more frequently. Users may rotate their device, fold/unfold it, or resize your app dynamically in split-screen or desktop windowing modes.
By default, these configuration changes destroy and recreate your activity. If your app does not properly manage this lifecycle event, users will have a frustrating experience: scroll positions are reset to the top, half-filled forms are wiped clean, and navigation history is lost. To ensure a seamless adaptive experience, it's critical your app preserves state through these configuration changes. With Jetpack Compose, you can opt-out of recreation, and instead allow window size changes to recompose your UI to reflect the new amount of space available.
See our guide on saving UI state.
Targeting API level 37 by August 2027
If your app previously opted out of these changes when targeting API level 36, your app will only be impacted by the Android 17 opt-out removal after your app targets API level 37. To help you plan ahead and make the necessary adjustments to your app, here's the timeline when these changes will take effect:
- Android 17: Changes described above will be the baseline experience for large screen devices (smallest screen width ≥ 600 dp) for apps that target API level 37. Developers will not have an option to opt out.
The deadlines for targeting a specific API level are app-store specific. For Google Play, new apps and updates will be required to target API level 37, making this behavior mandatory for distribution in August 2027.
Preparing for Android 17
Refer to the Android 17 changes page for all changes impacting apps in Android 17. To test your app, download Android 17 Beta 1 and update to targetSdkPreview = "CinnamonBun" or use the app compatibility framework to enable specific changes.
The future of Android is adaptive, and we're here to help you get there. As you prepare for Android 17, we encourage you to review our guides for building adaptive layouts and our large screen quality guidelines. These resources are designed to help you handle multiple form factors and window sizes with confidence.
Don't wait. Start getting ready for Android 17 today!
13 Feb 2026 7:34pm GMT
The First Beta of Android 17
Posted by Matthew McCullough, VP of Product Management, Android Developer
Today we're releasing the first beta of Android 17, continuing our work to build a platform that prioritizes privacy, security, and refined performance. This build continues our work for more adaptable Android apps, introduces significant enhancements to camera and media capabilities, new tools for optimizing connectivity, and expanded profiles for companion devices. This release also highlights a fundamental shift in the way we're bringing new releases to the developer community, from the traditional Developer Preview model to the Android Canary program
Beyond the Developer Preview
Android has replaced the traditional "Developer Preview" with a continuous Canary channel. This new "always-on" model offers three main benefits:
- Faster Access: Features and APIs land in Canary as soon as they pass internal testing, rather than waiting for a quarterly release.
- Better Stability: Early "battle-testing" in Canary results in a more polished Beta experience with new APIs and behavior changes that are closer to being final.
- Easier Testing: Canary supports OTA updates (no more manual flashing) and, as a separate update channel, more easily integrates with CI workflows and gives you the earliest window to give immediate feedback on upcoming potential changes.
The Android 17 schedule
With the release of the Android 17 Beta, we're moving to the next phase of our adaptive roadmap: Android 17 (API level 37) removes the developer opt-out for orientation and resizability restrictions on large screen devices (sw ≥ 600 dp).
When your app targets SDK 37, it must be ready to adapt. Users expect their apps to work everywhere-whether multitasking on a tablet, unfolding a device, or using a desktop windowing environment-and they expect the UI to fill the space and respect their device posture.
Key Changes for SDK 37
Apps targeting Android 17 must ensure compatibility with the phase-out of manifest attributes and runtime APIs introduced in Android 16. When running on a large screen (smaller dimension ≥ 600 dp), the following attributes and APIs will be ignored:
| Manifest attributes/API | Ignored values |
| screenOrientation | portrait, reversePortrait, sensorPortrait, userPortrait, landscape, reverseLandscape, sensorLandscape, userLandscape |
| setRequestedOrientation() | portrait, reversePortrait, sensorPortrait, userPortrait, landscape, reverseLandscape, sensorLandscape, userLandscape |
| resizeableActivity | all |
| minAspectRatio | all |
| maxAspectRatio | all |
These changes are specific to large screens; they do not apply to screens smaller than sw600dp (including traditional slate form factor phones). Additionally, apps categorized as games (based on the android:appCategory flag) are exempt from these restrictions.
It is also important to note that users remain in control. They can explicitly opt in or out of an app's requested behavior via the system's aspect ratio settings.
Updates to configuration changes
Performance
Lock-free MessageQueue
In Android 17, apps targeting SDK 37 or higher will receive a new implementation of android.os.MessageQueue where the implementation is lock-free. The new implementation improves performance and reduces missed frames, but may break clients that reflect on MessageQueue private fields and methods.
Generational garbage collection
Android 17 introduces generational garbage collection to ART's Concurrent Mark-Compact collector. This optimization introduces more frequent, less resource-intensive young-generation collections alongside full-heap collections, aiming to reduce overall garbage collection CPU cost and duration. ART improvements are also available to over a billion devices running Android 12 (API level 31) and higher through Google Play System updates.
Static final fields now truly final
Starting in Android 17, apps targeting Android 17 or later won't be able to modify "static final" fields, allowing the runtime to apply performance optimizations more aggressively. An attempt to do so via reflection (including deep reflection) will always throw IllegalAccessException. Modifying them via JNI's SetStatic<Type>Field family of methods will immediately crash the application.
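Plain reflective writes to static final fields are already rejected on a standard JVM; the Android 17 change closes the remaining deep-reflection loopholes so the exception is guaranteed. A minimal illustration with a toy Config class (not platform code):

```java
import java.lang.reflect.Field;

// Demonstrates the IllegalAccessException thrown when code attempts a
// reflective write to a static final field.
final class FinalFieldDemo {
    static final class Config {
        // Initialized via a method call so the value is not a compile-time
        // constant (constants may additionally be inlined at call sites).
        static final long TIMEOUT_MS = defaultTimeout();
        static long defaultTimeout() { return 5_000L; }
    }

    static boolean writeRejected() {
        try {
            Field f = Config.class.getDeclaredField("TIMEOUT_MS");
            f.setAccessible(true);
            f.setLong(null, 1L);     // attempt the illegal write
            return false;            // unexpectedly succeeded
        } catch (IllegalAccessException e) {
            return true;             // the write was rejected, as expected
        } catch (NoSuchFieldException e) {
            return false;
        }
    }
}
```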
Custom Notification View Restrictions
To reduce memory usage we are restricting the size of custom notification views. This update closes a loophole that allows apps to bypass existing limits using URIs. This behavior is gated by the target SDK version and takes effect for apps targeting API 37 and higher.
New performance debugging ProfilingManager triggers
We've introduced several new system triggers to ProfilingManager to help you collect in-depth data to debug performance issues. These triggers are TRIGGER_TYPE_COLD_START, TRIGGER_TYPE_OOM, and TRIGGER_TYPE_KILL_EXCESSIVE_CPU_USAGE.
To understand how to set up the new system triggers, check out the trigger-based profiling and retrieve and analyze profiling data documentation.
fun updateCameraSession(session: CameraCaptureSession, newOutputConfigs: List<OutputConfiguration>) {
    // Dynamically update the session without closing and reopening
    try {
        // Update the output configurations
        session.updateOutputConfigurations(newOutputConfigs)
    } catch (e: CameraAccessException) {
        // Handle error
    }
}
Logical multi-camera device metadata
When working with logical cameras that combine multiple physical camera sensors, you can now request additional metadata from all active physical cameras involved in a capture, not just the primary one. Previously, you had to implement workarounds, sometimes allocating unnecessary physical streams, to obtain metadata from secondary active cameras (e.g., during a lens switch for zoom where a follower camera is active). This feature introduces a new key, LOGICAL_MULTI_CAMERA_ADDITIONAL_RESULTS, in CaptureRequest and CaptureResult. By setting this key to ON in your CaptureRequest, the TotalCaptureResult will include metadata from these additional active physical cameras. You can access this comprehensive metadata using TotalCaptureResult.getPhysicalCameraTotalResults() to get more detailed information that may enable you to optimize resource usage in your camera applications.
Versatile Video Coding (VVC) Support
Android 17 adds support for the Versatile Video Coding (VVC) standard. This includes defining the video/vvc MIME type in MediaFormat, adding new VVC profiles in MediaCodecInfo, and integrating support into MediaExtractor. This feature will be coming to devices with hardware decode support and capable drivers.
Constant Quality for Video Recording
We have added setVideoEncodingQuality() to MediaRecorder. This allows you to configure a constant quality (CQ) mode for video encoders, giving you finer control over video quality beyond simple bitrate settings.
Background Audio Hardening
Starting in Android 17, the audio framework will enforce restrictions on background audio interactions including audio playback, audio focus requests, and volume change APIs to ensure that these changes are started intentionally by the user.
If an app calls audio APIs while it is not in a valid lifecycle state, the audio playback and volume change APIs will fail silently, with no exception thrown or failure message provided. The audio focus API will fail with the result code AUDIOFOCUS_REQUEST_FAILED.
Privacy and Security
Deprecation of Cleartext Traffic Attribute
The android:usesCleartextTraffic attribute is now deprecated. If your app targets Android 17 (API level 37) or higher and relies on usesCleartextTraffic="true" without a corresponding Network Security Configuration, it will default to disallowing cleartext traffic. You are encouraged to migrate to Network Security Configuration files for granular control.
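A minimal Network Security Configuration sketch of that migration: disallow cleartext globally while temporarily allowing it for a single legacy host (legacy.example.com is a placeholder). The file is referenced from the manifest via android:networkSecurityConfig.

```xml
<?xml version="1.0" encoding="utf-8"?>
<network-security-config>
    <!-- Default for all connections: no cleartext (HTTP) traffic -->
    <base-config cleartextTrafficPermitted="false" />
    <!-- Temporary, scoped exception for one legacy endpoint -->
    <domain-config cleartextTrafficPermitted="true">
        <domain includeSubdomains="true">legacy.example.com</domain>
    </domain-config>
</network-security-config>
```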
We are introducing a public Service Provider Interface (SPI) for an implementation of HPKE hybrid cryptography, enabling secure communication using a combination of public key and symmetric encryption (AEAD).
Connectivity and Telecom
Enhanced VoIP Call History
We are introducing user preference management for app VoIP call history integration. This includes support for caller and participant avatar URIs in the system dialer, enabling granular user control over call log privacy and enriching the visual display of integrated VoIP call logs.
Wi-Fi Ranging and Proximity
Wi-Fi Ranging has been enhanced with new Proximity Detection capabilities, supporting continuous ranging and secure peer-to-peer discovery. Updates to Wi-Fi Aware ranging include new APIs for peer handles and PMKID caching for 11az secure ranging.
Developer Productivity and Tools
Updates for companion device apps
We have introduced two new profiles to the CompanionDeviceManager to improve device distinction and permission handling:
- Medical Devices: This profile allows medical device mobile applications to request all necessary permissions with a single tap, simplifying the setup process.
- Fitness Trackers: The DEVICE_PROFILE_FITNESS_TRACKER profile allows companion apps to explicitly indicate they are managing a fitness tracker. This ensures accurate user experiences with distinct icons while reusing existing watch role permissions.
Also, the CompanionDeviceManager now offers a unified dialog for device association and Nearby permission requests. You can leverage the new setExtraPermissions method in AssociationRequest.Builder to bundle nearby permission prompts within the existing association flow, reducing the number of dialogs presented to the user.
Get started with Android 17
You can enroll any supported Pixel device to get this and future Android Beta updates over-the-air. If you don't have a Pixel device, you can use the 64-bit system images with the Android Emulator in Android Studio.
If you are currently in the Android Beta program, you will be offered an over-the-air update to Beta 1.
If you have Android 26Q1 Beta and would like to take the final stable release of 26Q1 and exit Beta, you need to ignore the over-the-air update to 26Q2 Beta 1 and wait for the release of 26Q1.
We're looking for your feedback so please report issues and submit feature requests on the feedback page. The earlier we get your feedback, the more we can include in our work on the final release.
For the best development experience with Android 17, we recommend that you use the latest preview of Android Studio (Panda). Once you're set up, here are some of the things you should do:
- Compile against the new SDK, test in CI environments, and report any issues in our tracker on the feedback page.
- Test your current app for compatibility, learn whether your app is affected by changes in Android 17, and install your app onto a device or emulator running Android 17 and extensively test it.
We'll update the preview/beta system images and SDK regularly throughout the Android 17 release cycle. Once you've installed a beta build, you'll automatically get future updates over-the-air for all later previews and Betas.
For complete information, visit the Android 17 developer site.
Join the conversation
As we move toward Platform Stability and the final stable release of Android 17 later this year, your feedback remains our most valuable asset. Whether you're an early adopter on the Canary channel or an app developer testing on Beta 1, consider joining our communities and filing feedback. We're listening.
13 Feb 2026 7:23pm GMT
29 Jan 2026
Accelerating your insights with faster, smarter monetization data and recommendations
Posted by Phalene Gowling, Product Manager, Google Play
To build a thriving business on Google Play, you need more than just data - you need a clear path to action. Today, we're announcing a suite of upgrades to the Google Play Console and beyond, giving you greater visibility into your financial performance and specific, data-backed steps to improve it.
From new, actionable recommendations to more granular sales reporting, here's how we're helping you maximize your ROI.
New: Monetization insights and recommendations
Launch Status: Rolling out today
The Monetize with Play overview page is designed to be your ultimate command center. Today, we are upgrading it with a new dynamic insights section that gives you a clearer view of your revenue drivers.
- Optimize conversion: Track your new Cart Conversion Rate.
- Reduce churn: Track cancelled subscriptions over time.
- Optimize pricing: Monitor your Average Revenue Per Paying User (ARPPU).
- Increase buyer reach: Analyze how much of your engaged audience converts to buyers.
We recently rolled out new Sales Channel data in your financial reporting. This allows you to attribute revenue to specific surfaces - including your app, the Play Store, and platforms like Google Play Games on PC.
For native-PC game developers and media & entertainment subscription businesses alike, this granularity allows you to calculate the precise ROI of your cross-platform investments and understand exactly which channels are driving your growth. Learn more.
The Orders API provides programmatic access to one-time and recurring order transaction details. If you haven't integrated it yet, this API allows you to ingest real-time data directly into your internal dashboards for faster reconciliation and improved customer support.
Level Infinite (Tencent) says the API "works so well that we want every app to use it."
Continuous improvements towards objective-led reporting
You've told us that the biggest challenge isn't just accessing data, but connecting the dots across different metrics to see the full picture. We're enhancing reporting that goes beyond data dumps to provide straightforward, actionable insights that help you reach business objectives faster.
Our goal is to create a more cohesive product experience centered around your objectives. By shifting from static reporting to dynamic, goal-oriented tools, we're making it easier to track and optimize for revenue, conversion rates, and churn. These updates are just the beginning of a transformation designed to help you turn data into measurable growth.
29 Jan 2026 5:00pm GMT
28 Jan 2026
How Automated Prompt Optimization Unlocks Quality Gains for ML Kit’s GenAI Prompt API
Posted by Chetan Tekur, PM at AI Innovation and Research, Chao Zhao, SWE at AI Innovation and Research, Paul Zhou, Prompt Quality Lead at GCP Cloud AI and Industry Solutions, and Caren Chang, Developer Relations Engineer at Android
Automated Prompt Optimization (APO)
To further help bring your ML Kit Prompt API use cases to production, we are excited to announce Automated Prompt Optimization (APO) targeting On-Device models on Vertex AI. Automated Prompt Optimization is a tool that helps you automatically find the optimal prompt for your use cases.
The era of On-Device AI is no longer a promise-it is a production reality. With the release of Gemini Nano v3, we are placing unprecedented language understanding and multimodal capabilities directly into the palms of users. Through the Gemini Nano family of models, we have wide coverage of supported devices across the Android Ecosystem. But for developers building the next generation of intelligent apps, access to a powerful model is only step one. The real challenge lies in customization: How do you tailor a foundation model to expert-level performance for your specific use case without breaking the constraints of mobile hardware?
In the server-side world, the larger LLMs tend to be highly capable and require less domain adaptation. Even when needed, more advanced options such as LoRA (Low-Rank Adaptation) fine-tuning can be feasible options. However, the unique architecture of Android AICore prioritizes a shared, memory-efficient system model. This means that deploying custom LoRA adapters for every individual app comes with challenges on these shared system services.
But there is an alternate path that can be equally impactful. By leveraging Automated Prompt Optimization (APO) on Vertex AI, developers can achieve quality approaching fine-tuning, all while working seamlessly within the native Android execution environment. By focusing on superior system instruction, APO enables developers to tailor model behavior with greater robustness and scalability than traditional fine-tuning solutions.
Note: Gemini Nano v3 is a quality-optimized version of the highly acclaimed Gemma 3n model. Any prompt optimizations made on the open-source Gemma 3n model apply to Gemini Nano v3 as well. On supported devices, ML Kit GenAI APIs leverage the nano-v3 model to maximize quality for Android developers.
APO treats the prompt not as static text, but as a programmable surface that can be optimized. It leverages server-side models (such as Gemini Pro and Flash) to propose prompts, evaluate variations, and find the optimal one for your specific task. This process employs three technical mechanisms to maximize performance:
- Automated Error Analysis: APO analyzes error patterns from training data to automatically identify specific weaknesses in the initial prompt.
- Semantic Instruction Distillation: It analyzes large numbers of training examples to distill the "true intent" of a task, creating instructions that more accurately reflect the real data distribution.
- Parallel Candidate Testing: Instead of testing one idea at a time, APO generates and tests numerous prompt candidates in parallel to identify the global maximum for quality.
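The mechanisms above can be sketched as a simple search loop: score each candidate system instruction against a small labeled dev set, evaluate candidates in parallel, and keep the best. This is a toy illustration, not the actual APO implementation; the model call is a stub, and the candidate prompts and dev set are made up:

```python
from concurrent.futures import ThreadPoolExecutor

# Tiny labeled dev set: (input text, expected label).
DEV_SET = [
    ("Stocks fell sharply", "finance"),
    ("The striker scored twice", "sports"),
    ("Bond yields are rising", "finance"),
]

def stub_model(instruction: str, text: str) -> str:
    # Stand-in for an on-device model call. In this toy, a more specific
    # instruction (naming both labels) "unlocks" correct behavior.
    if "finance" in instruction and "sports" in instruction:
        return "finance" if any(w in text.lower() for w in ("stock", "bond")) else "sports"
    return "sports"  # weak instruction: the model always guesses one label

def score(instruction: str) -> float:
    # Accuracy of one candidate prompt on the dev set.
    hits = sum(stub_model(instruction, x) == y for x, y in DEV_SET)
    return hits / len(DEV_SET)

candidates = [
    "Classify the text.",
    "Classify the text as finance or sports. Answer with one word.",
]

# Parallel candidate testing: evaluate all prompts concurrently, keep the argmax.
with ThreadPoolExecutor() as pool:
    scores = list(pool.map(score, candidates))
best = candidates[max(range(len(candidates)), key=scores.__getitem__)]
print(best, max(scores))
```

A real optimizer would also feed the per-example errors back into a proposer model to generate the next round of candidates, rather than using a fixed list.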
Why APO Can Approach Fine-Tuning Quality
It is a common misconception that fine-tuning always yields better quality than prompting. For modern foundation models like Gemini Nano v3, prompt engineering can be impactful on its own:
- Preserving General Capabilities: Fine-tuning (PEFT/LoRA) forces a model's weights to over-index on a specific distribution of data. This often leads to "catastrophic forgetting," where the model gets better at your specific syntax but worse at general logic and safety. APO leaves the weights untouched, preserving the capabilities of the base model.
- Instruction Following & Strategy Discovery: Gemini Nano v3 has been rigorously trained to follow complex system instructions. APO exploits this by finding the exact instruction structure that unlocks the model's latent capabilities, often discovering strategies that would be hard for human engineers to find.
To validate this approach, we evaluated APO across diverse production workloads. Our validation showed consistent 5-8% accuracy gains across use cases; across multiple deployed on-device features, APO provided significant quality lifts:
| Use Case | Task Type | Task Description | Metric | APO Improvement |
| --- | --- | --- | --- | --- |
| Topic classification | Text classification | Classify a news article into topics such as finance, sports, etc. | Accuracy | +5% |
| Intent classification | Text classification | Classify a customer service query into intents | Accuracy | +8.0% |
| Webpage translation | Text translation | Translate a webpage from English to a local language | BLEU | +8.57% |
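The accuracy deltas in the table are straightforward to reproduce for your own evaluation set. A minimal sketch, with made-up labels and predictions purely for illustration:

```python
def accuracy(preds, labels):
    # Fraction of predictions that match the gold labels.
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

labels          = ["finance", "sports", "finance", "sports", "finance"]
baseline_preds  = ["finance", "sports", "sports",  "sports", "sports"]
optimized_preds = ["finance", "sports", "finance", "sports", "finance"]

# Lift = accuracy with the optimized prompt minus accuracy with the baseline prompt.
lift = accuracy(optimized_preds, labels) - accuracy(baseline_preds, labels)
print(f"APO lift: {lift:+.0%}")  # prints "APO lift: +40%" on this toy set
```

For translation tasks, you would swap the accuracy metric for a BLEU implementation (e.g. from an evaluation library) but keep the same before/after comparison.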
A Seamless, End-to-End Developer Workflow
Conclusion
The release of Automated Prompt Optimization (APO) marks a turning point for on-device generative AI. By bridging the gap between foundation models and expert-level performance, we are giving developers the tools to build more robust mobile applications. Whether you are just starting with Zero-Shot Optimization or scaling to production with Data-Driven refinement, the path to high-quality on-device intelligence is now clearer. Launch your on-device use cases to production today with ML Kit's Prompt API and Vertex AI's Automated Prompt Optimization.
Relevant links:
28 Jan 2026 5:00pm GMT