15 Mar 2026
TalkAndroid
Leaving WhatsApp groups just got invisible: No more public exit alerts for a drama-free goodbye
Tired of tiptoeing out of WhatsApp group chats, worried about triggering the infamous "X left the group" moment?…
15 Mar 2026 7:30am GMT
Bone Lake: The dark thriller everyone is binge-watching now tops the Netflix charts
Feeling those post-Stranger Things blues? Netflix has some fresh thrills to keep you company in early 2026! As…
15 Mar 2026 7:00am GMT
14 Mar 2026
TalkAndroid
Why Everyone’s Calling This New Thriller Series “The Best of the Decade” — The Netflix Hit You Can’t Miss
If you're looking for the one series everyone will be buzzing about heading into the new year, look…
14 Mar 2026 4:30pm GMT
Google Gemini set to revolutionize Android with AI-generated music—coming soon?
Could your next musical hit be generated by artificial intelligence? Google seems ready to take its AI game…
14 Mar 2026 4:00pm GMT
Still using your old Android phone? Experts warn a billion devices are now dangerously exposed
Are you still using that trusty old Android phone? It might feel like it's running just fine, but…
14 Mar 2026 7:30am GMT
Why You Can’t Miss These 5 Must-See Korean Dramas Coming to Netflix in 2026?
Get ready, K-drama fans: 2026 is shaping up to be another year ruled by Korean series on Netflix!…
14 Mar 2026 7:00am GMT
Boba Story Lid Recipes – 2026
Look no further for all the latest Boba Story Lid Recipes. They are all right here!
14 Mar 2026 3:01am GMT
Dice Dreams Free Rolls – Updated Daily
Get the latest Dice Dreams free rolls links, updated daily! Complete with a guide on how to redeem the links.
14 Mar 2026 3:01am GMT
13 Mar 2026
Android Developers Blog
Room 3.0 - Modernizing the Room
The first alpha of Room 3.0 has been released! Room 3.0 is a major breaking version of the library that focuses on Kotlin Multiplatform (KMP) and adds support for JavaScript and WebAssembly (WASM) on top of the existing Android, iOS and JVM desktop support.
In this blog we outline the breaking changes, the reasoning behind Room 3.0, and the various things you can do to migrate from Room 2.0.
Breaking changes
Room 3.0 includes the following breaking API changes:
- Dropping SupportSQLite APIs: Room 3.0 is fully backed by the androidx.sqlite driver APIs. The SQLiteDriver APIs are KMP-compatible and removing Room's dependency on Android's API simplifies the API surface for Android since it avoids having two possible backends.
- No more Java code generation: Room 3.0 exclusively generates Kotlin code. This aligns with the evolving Kotlin-first paradigm but also simplifies the codebase and development process, enabling faster iterations.
- Focus on KSP: We are also dropping support for Java Annotation Processing (AP) and KAPT. Room 3.0 is solely a KSP (Kotlin Symbol Processing) processor, allowing for better processing of Kotlin codebases without being limited by the Java language.
- Coroutines first: Room 3.0 embraces Kotlin coroutines, making its APIs coroutine-first. Coroutines are the KMP-compatible asynchronous framework, and making Room asynchronous by nature is a critical requirement for supporting web platforms.
A new package
To prevent compatibility issues with existing Room 2.x implementations and with libraries that depend transitively on Room (for example, WorkManager), Room 3.0 resides in a new package, which means it also has new Maven group and artifact IDs. For example, androidx.room:room-runtime has become androidx.room3:room3-runtime, and classes such as androidx.room.RoomDatabase are now located at androidx.room3.RoomDatabase.
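The package move also changes Gradle coordinates. As a hedged sketch of what that looks like in a Kotlin DSL build file (the compiler artifact name and the alpha version string are assumptions extrapolated from the renaming pattern described above, not confirmed coordinates):

```kotlin
// build.gradle.kts
dependencies {
    // Room 2.x coordinates (before):
    // implementation("androidx.room:room-runtime:2.8.0")
    // ksp("androidx.room:room-compiler:2.8.0")

    // Room 3.0 coordinates (after); version string is a placeholder
    implementation("androidx.room3:room3-runtime:3.0.0-alpha01")
    ksp("androidx.room3:room3-compiler:3.0.0-alpha01")
}
```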
Kotlin and Coroutines First
With no more Java code generation, Room 3.0 also requires KSP and the Kotlin compiler even if the codebase interacting with Room is in Java. It is recommended to have a multi-module project where Room usage is concentrated and the Kotlin Gradle Plugin and KSP can be applied without affecting the rest of the codebase.
Room 3.0 also requires coroutines; specifically, DAO functions have to be suspending unless they return a reactive type, such as a Flow. Room 3.0 disallows blocking DAO functions. See the Coroutines on Android documentation to get started with integrating coroutines into your application.
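As an illustrative sketch of the coroutine-first rule (the entity and DAO names are hypothetical, and the androidx.room3 annotation package is inferred from the renaming described above): one-shot queries become suspend functions, while reactive queries return a Flow.

```kotlin
import androidx.room3.Dao        // was androidx.room in Room 2.x
import androidx.room3.Entity
import androidx.room3.Insert
import androidx.room3.PrimaryKey
import androidx.room3.Query
import kotlinx.coroutines.flow.Flow

@Entity
data class User(@PrimaryKey val id: Long, val name: String)

@Dao
interface UserDao {
    // One-shot read: must be a suspend function in Room 3.0.
    @Query("SELECT * FROM User WHERE id = :id")
    suspend fun getById(id: Long): User?

    // One-shot write: also suspending.
    @Insert
    suspend fun insert(user: User)

    // Reactive read: a Flow return type is allowed without suspend.
    @Query("SELECT * FROM User")
    fun observeAll(): Flow<List<User>>

    // Not allowed in Room 3.0: a blocking, non-suspending one-shot query.
    // fun getByIdBlocking(id: Long): User?
}
```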
Migration to SQLiteDriver APIs
With the shift away from SupportSQLite, apps will need to migrate to the SQLiteDriver APIs. This migration is essential to leveraging the full benefits of Room 3.0, including allowing the use of the bundled SQLite library via the BundledSQLiteDriver. You can start migrating to the driver APIs today with Room 2.7.0+. We strongly encourage you to avoid any further usage of SupportSQLite. If you migrate your Room integrations to SQLiteDriver APIs, then the transition to Room 3.0 is easier since the package change mostly involves updating symbol references (imports) and might require minimal changes to call-sites.
For a brief overview of the SQLiteDriver APIs, check out the SQLiteDriver APIs documentation.
For more details on how to migrate Room to use SQLiteDriver APIs, check out the official documentation to migrate from SupportSQLite.
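On Room 2.7+, installing a driver can look like the following sketch (the database class and file name are illustrative; the builder calls shown are part of the Room 2.7+ driver-era API):

```kotlin
import android.content.Context
import androidx.room.Room
import androidx.sqlite.driver.bundled.BundledSQLiteDriver
import kotlinx.coroutines.Dispatchers

fun buildDatabase(context: Context): AppDatabase =
    Room.databaseBuilder(context, AppDatabase::class.java, "app.db")
        // Installing a SQLiteDriver moves Room off the legacy SupportSQLite
        // path and onto the bundled, up-to-date SQLite library.
        .setDriver(BundledSQLiteDriver())
        // Driver-backed Room is coroutine-based; queries run on this context.
        .setQueryCoroutineContext(Dispatchers.IO)
        .build()
```

Once this is in place, the later move to Room 3.0 is mostly the package rename described above.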
Room SupportSQLite wrapper
We understand completely removing SupportSQLite might not be immediately feasible for all projects. To ease this transition, Room 2.8.0, the latest version of the Room 2.0 series, introduced a new artifact called androidx.room:room-sqlite-wrapper. This artifact offers a compatibility API that allows you to convert a RoomDatabase into a SupportSQLiteDatabase, even if the SupportSQLite APIs in the database have been disabled due to a SQLiteDriver being installed. This provides a temporary bridge for developers who need more time to fully migrate their codebase. This artifact continues to exist in Room 3.0 as androidx.room3:room3-sqlite-wrapper to enable the migration to Room 3.0 while still supporting critical SupportSQLite usage.
For example, invocations of roomDatabase.openHelper.writableDatabase can be replaced by roomDatabase.getSupportWrapper(), and a wrapper will be provided even if setDriver() is called on Room's builder.
For more details check out the room-sqlite-wrapper documentation.
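Based on the wrapper behavior described above, the bridge can be sketched like this (assuming a getSupportWrapper() extension from the room-sqlite-wrapper artifact; the database class name is hypothetical):

```kotlin
import androidx.sqlite.db.SupportSQLiteDatabase

fun runLegacySql(roomDatabase: AppDatabase) {
    // Before: reaching the support database through the open helper.
    // val db = roomDatabase.openHelper.writableDatabase  // unavailable once a driver is set

    // After: the wrapper provides a SupportSQLiteDatabase even when
    // setDriver() has been called on Room's builder.
    val db: SupportSQLiteDatabase = roomDatabase.getSupportWrapper()
    db.execSQL("PRAGMA wal_checkpoint(FULL)")
}
```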
Room and SQLite Web Support
Support for the Kotlin Multiplatform targets JS and WasmJS brings some of the most significant API changes. Specifically, many APIs in Room 3.0 are suspend functions, since proper support for web storage is asynchronous. The SQLiteDriver APIs have also been updated to support the web, and a new asynchronous web driver is available in androidx.sqlite:sqlite-web. It is a Web Worker-based driver that enables persisting the database in the Origin Private File System (OPFS).
Room 3.0 introduces the ability to add custom integrations to Room, similar to RxJava and Paging. Through a new annotation API called @DaoReturnTypeConverter, you can create your own integration so that Room's generated code becomes accessible at runtime. This enables @Dao functions to have custom return types without waiting for the Room team to add support. Existing integrations have been migrated to use this functionality, so those who rely on them will now need to add the converters to their @Database or @Dao definitions.
For example, the Paging converter is located in the androidx.room3:room3-paging artifact and is called PagingSourceDaoReturnTypeConverter, while the LiveData converter is in androidx.room3:room3-livedata and is called LiveDataReturnTypeConverter.
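The post names the annotation and converter classes but not the exact registration syntax, so the following is a loudly hypothetical sketch of what attaching a converter to a database definition might look like; the placement of @DaoReturnTypeConverter on the @Database class is an assumption:

```kotlin
// Hypothetical sketch: only @DaoReturnTypeConverter and
// PagingSourceDaoReturnTypeConverter are named by the post; how they are
// attached to the @Database definition is assumed here for illustration.
@Database(entities = [User::class], version = 1)
@DaoReturnTypeConverter(PagingSourceDaoReturnTypeConverter::class)
abstract class AppDatabase : RoomDatabase() {
    abstract fun userDao(): UserDao
}
```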
Since the development of Room will be focused on Room 3, the current Room 2.x version enters maintenance mode. This means that no major features will be developed but patch releases (2.8.1, 2.8.2, etc.) will still occur with bug fixes and dependency updates. The team is committed to this work until Room 3 becomes stable.
We are incredibly excited about the potential of Room 3.0 and the opportunities it unlocks for the Kotlin ecosystem. Stay tuned for more updates as we continue this journey!
13 Mar 2026 5:00pm GMT
TalkAndroid
Netflix’s Most-Watched Thriller of 2025: Why Is Everyone Obsessed With This Mini-Series?
If your nights on Netflix are starting to feel a bit repetitive, you're not alone. There's a new…
13 Mar 2026 4:30pm GMT
Waze’s long-awaited update finally delivers smarter, safer driving—here’s what’s new
After what felt like ages in the slow lane, Waze is finally rolling out a suite of headline-grabbing…
13 Mar 2026 4:00pm GMT
Play Plinko on Android for Real Money
Let's see how to play the Classic Casino Drop Game with Crypto: Plinko has every appearance of a…
13 Mar 2026 2:39pm GMT
Android Developers Blog
TikTok reduces code size by 58% and improves app performance for new features with Jetpack Compose
Posted by Ajesh R Pai, Developer Relations Engineer & Ben Trengrove, Developer Relations Engineer
TikTok is a global short-video platform known for its massive user base and innovative features. The team is constantly releasing updates, experiments, and new features for their users. Faced with the challenge of maintaining velocity while managing technical debt, the TikTok Android team turned to Jetpack Compose.
The team wanted to enable faster, higher-quality iteration of product requirements. By leveraging Compose, the team sought to improve engineering efficiency by writing less code and reducing cognitive load, while also achieving better performance and stability.
TikTok pages are often more complex than they appear, containing numerous layered conditional requirements. This complexity often resulted in difficult-to-maintain, sub-optimally structured View hierarchies and excessive View nesting, which caused performance degradation due to an increased number of measure passes.
Compose offered a direct solution to this structural problem.
Furthermore, Compose's measurement strategy helps reduce double taxation, making measure performance easier to optimize.
To improve developer productivity, TikTok's central Design System team provides a component library for teams working on different app features. The team observed that development in Compose is simple: leveraging small composables is highly effective, while incorporating large UI blocks with conditional logic is both straightforward and carries minimal overhead.
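As a generic illustration of why conditional UI is simpler in Compose (this is not TikTok's actual code; the composable name is invented), showing or hiding an element is a plain if expression rather than View-visibility management in a nested hierarchy:

```kotlin
import androidx.compose.material3.Text
import androidx.compose.runtime.Composable
import androidx.compose.ui.Modifier

@Composable
fun UnreadBadge(count: Int, modifier: Modifier = Modifier) {
    // In the View system this would be visibility toggling on a nested View;
    // in Compose the branch simply emits no UI when the condition is false.
    if (count > 0) {
        Text(text = if (count > 99) "99+" else count.toString(), modifier = modifier)
    }
}
```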
Building a path forward through strategic migration
By strategically adopting Jetpack Compose, TikTok was able to stay on top of technical debt while continuing to focus on creating great experiences for their users. Compose's ability to handle conditional logic cleanly and streamline composition allowed the team to achieve up to a 78% reduction in page loading time on new or fully rewritten pages: the improvement was 20-30% in smaller cases and 70-80% for full rewrites and new features. They were also able to reduce their code size by 58% compared to the same feature built in Views.
TikTok team's overall strategy was to incrementally migrate specific user journeys. This gave them an opportunity to migrate, confirm measurable benefits, then scale to more screens. They started with using Compose to simplify the overall structure in the QR code feature and saw the improvements. The team later expanded the migration to the Login and Sign-up experiences.
The team shared some additional learnings:
While checking performance during migration, the TikTok team found that using many small ComposeViews to replace elements inside a single ViewHolder caused composition overhead. They achieved better results by expanding the migration to use one single ComposeView for the entire ViewHolder.
When migrating a Fragment inside a ViewPager that had custom height logic and conditional logic to hide and show UI based on experiments, performance wasn't impacted. In this case, migrating the ViewPager itself to a composable performed better than migrating only the Fragment.
TikTok's Jun Shen appreciates that Compose "reduces the amount of code required for feature development, improves testability, and accelerates delivery". The team plans to steadily increase Compose adoption, making it their preferred framework in the long term. Jetpack Compose proved to be a powerful solution for improving both their developer experience and production metrics at scale.
Get Started with Jetpack Compose
Learn more about how Jetpack Compose can help your team.
13 Mar 2026 1:00pm GMT
TalkAndroid
This crime miniseries just shattered records—Season 2 is officially on the way
When a new crime miniseries sweeps up nearly 93 million viewers on Netflix in 2025, you know something…
13 Mar 2026 7:30am GMT
Gemini Live finally gets the upgrade users have been demanding with a powerful new feature on Android
If you've been eagerly awaiting a smarter, more convenient way to interact with Gemini Live on your Android…
13 Mar 2026 7:00am GMT
12 Mar 2026
TalkAndroid
“It’s Official: The Monster of Florence Becomes Netflix’s No. 1 Global Sensation in Just 48 Hours”
Barely two days after its launch on Netflix, "The Monster of Florence" has already clawed its way to…
12 Mar 2026 4:00pm GMT
If You Loved Stranger Things, You Need to Watch This ’80s Classic That Inspired It
Stranger Things is over, but no need to panic! Why not take a trip down memory lane and dive into…
12 Mar 2026 7:30am GMT
Android Auto 16.0 is here: Long-awaited media redesign finally reaches all drivers
After months of quiet behind-the-scenes testing, Android Auto 16.0 is finally rolling out to the masses, and what…
12 Mar 2026 7:00am GMT
11 Mar 2026
Android Developers Blog
Level Up: Test Sidekick and prepare for upcoming program milestones
Last September, we shared our vision for the future of Google Play Games grounded in a core belief: the best way to drive your game's success is to deliver a world-class player experience. We launched the Google Play Games Level Up program to recognize and reward great gaming experiences, while providing you with a powerful toolkit and new promotional opportunities to grow your games.
The momentum since our announcement has been incredibly positive, with more than 600 million gamers now using Play Games Services every month. Developers are also finding success, with one-third of all game installs on the Play Store now coming from editorially-driven organic discovery. In fact, in 2025, Level Up features have driven over 2.5 billion incremental acquisitions for featured games, in addition to an average uplift of 25% in installs during the featuring windows.
Today, we're inviting you to start testing Play Games Sidekick to keep your players in the action, sharing new Play Console updates to optimize your reach, and helping you prepare for our upcoming program milestones.
- Pre-reg device breakdowns: To aid launch decisions, you can now analyze the device distribution of your pre-registered audience by key device attributes including Android version, RAM and SoC. This enables you to optimize game performance, minimum specs, and marketing spend for the players already waiting for your game.
- Real-time feedback: With Level Up+, our tier for high-performing games, qualifying titles can unlock promotional content featuring and tools like deep-links and audience targeting. While submissions must meet Play's quality guidelines, you no longer have to wait 24 hours to learn about issues. You can now get immediate feedback on quality whenever possible.
To hit your first program milestone, we recommend that you:
- Integrate Play Games Sidekick to offer a quick and easy entry point to access rewards, offers, and achievements through an in-game overlay.
- Implement achievements with Play Games Services to support authentication with the modern Gamer Profile and keep players engaged across the lifespan of your game.
- Implement cloud save to enable progress sync across devices.
Last week, we announced that we're working on an expanded Level Up program that builds on our successful foundation to further improve gaming experiences. The update will introduce new requirements that will unlock additional benefits like lower service fees. Engaging with the program now ensures your work is strategically aligned with these future updates. We'll share more details in the coming months.
In the meantime, the path to your first program milestone begins today. By prioritizing these user experience guidelines now, you're investing in the long-term value of your game and ensuring it's built to thrive for every player. Head over to Play Console to start testing Sidekick and take the next step in your Level Up journey.
11 Mar 2026 8:02pm GMT
Expanding our stage for PC and paid titles
Posted by Aurash Mahbod, VP and GM, Games on Google Play
Google Play is proud to be the home of over 200,000 games, many of which defined the mobile-first era. But as cross-platform becomes the standard for players, we are evolving our ecosystem to match the scale of your ambitions. In recent years, we focused on elevating Android gaming quality while significantly deepening our support for native PC titles.
We know that maximizing your game's reach across different platforms is complex. The Level Up program serves as your strategic roadmap, helping you prioritize optimizations that drive great experiences on Android. Building on this foundation, we're doubling down on our investment to make Play the most accessible home for every category of play. We're adding new tools for paid games and making the PC journey from discovery to purchase seamless. Keep reading to learn more about how we're creating a bigger stage for your games.
Scale your discovery across mobile and PC platforms
Building a bigger stage starts with making your games easier to find, and easier to buy, no matter which device your players prefer. We're expanding your reach by bringing cross-platform discovery directly to the mobile storefront.
- With the new PC section in the Games tab, your PC titles gain high-visibility placement among our most active mobile players.
- The PC badge ensures your cross-platform investment is recognized. This creates more opportunities to acquire players on mobile and transition them seamlessly to your high-fidelity PC experience.
- With 'buy once play anywhere' pricing, we're making it easier to sell your games across different devices. If you choose to opt in your mobile game for Google Play Games on PC, you can now offer a single price that covers both mobile and PC versions. We're rolling out this feature in EAP with select games including Brotato: Premium.
- For PC-only games, players can now complete the full purchase journey on Google Play Games on PC with the same trusted security and privacy standards they expect from Google Play.
Lower the purchase barrier with Game Trials
To help you convert high-intent buyers with less friction, we're introducing Game Trials, a feature that enables players to experience your game for a limited time before making a purchase on mobile. Accessible directly from your game's store listing, Game Trials provides a fast-track for players to start exploring your world with a single tap. Game Trials are now in testing with select titles, and we'll roll them out to more titles soon.
- To ensure this is low maintenance for you, Game Trials is added directly into your Android App Bundle. This enables you to offer a high-quality trial without the burden of a separate codebase or a demo version of your app.
- Play ensures trials are secure and seamless. Game Trials are limited to once per user and protect your game while the trial is active. When a trial ends, players can purchase your game and keep their progress.
- We're also working on tools that will give you more control, such as specifying a custom time limit or an in-game event to conclude the trial.
Diversify your revenue with a dedicated player community on Play Pass
Play Pass is another way to diversify revenue and grow your player audience. It has been a strong launchpad for indie hits such as Isle of Arrows, Slay the Spire, and Dead Cells. With Play Pass, you can reach highly dedicated players seeking a more curated gaming experience, free of ads and in-app purchases. To help you deepen engagement, paid titles on Play Pass can now opt in to Google Play Games on PC - making it easy for players to find and play your games on a larger screen. Later this year, you can nominate your game through a streamlined opt-in process directly in Play Console.
Drive long term sales with Wishlists and Discounts
Wishlists and Discounts are among the most effective ways to capture player intent and drive long-term sales. To support players at every stage of their purchase journey, we're integrating them directly into Play. Players can save titles to their wishlist and manage them from library settings. To keep your game top-of-mind, players will receive automated notifications for your latest discounts, starting with mobile and expanding soon to PC games.
How leading studios are finding a new path to success on Play
We're thrilled to welcome Sledding Game, 9 Kings, Potion Craft, Moonlight Peaks, and Low Budget Repairs to Play [1]. It marks an exciting expansion of our catalog and a step forward in our mission to build a bigger gaming ecosystem for all developers. This growth is fueled by our developer community, whose feedback continues to shape our roadmap and help us better support your success.
That mission brings us to GDC and the Independent Games Festival (IGF) Awards [2], where the next generation of games awaits! This year, we're inviting you to come along for the ride as we go backstage to chat with the finalists and winners, sharing the moments of triumph and the creative stories behind their development. Not joining us at GDC? You can take the next step in your journey to launch your game on Google Play today.
1. Sledding Game, 9 Kings, Potion Craft, and Moonlight Peaks are coming to Google Play in 2026. Low Budget Repairs is scheduled for release in 2027. [Back]
2. Independent Games Festival (IGF) Awards is hosted by Game Developers Conference (GDC) and requires a valid GDC pass for entry. [Back]
11 Mar 2026 8:02pm GMT
10 Mar 2026
Android Developers Blog
Boosting Android Performance: Introducing AutoFDO for the Kernel
We are the Android LLVM toolchain team. One of our top priorities is to improve Android performance through optimization techniques in the LLVM ecosystem. We are constantly searching for ways to make Android faster, smoother, and more efficient. While much of our optimization work happens in userspace, the kernel remains the heart of the system. Today, we're excited to share how we are bringing Automatic Feedback-Directed Optimization (AutoFDO) to the Android kernel to deliver significant performance wins for users.
What is AutoFDO?
During a standard software build, the compiler makes thousands of small decisions, such as whether to inline a function and which branch of a conditional is likely to be taken, based on static code hints. While these heuristics are useful, they don't always accurately predict code execution during real-world phone usage.
AutoFDO changes this by using real-world execution patterns to guide the compiler. These patterns represent the most common instruction execution paths the code takes during actual use, captured by recording the CPU's branching history. While this data can be collected from fleet devices, for the kernel we synthesize it in a lab environment using representative workloads, such as running the top 100 most popular apps. We use a sampling profiler to capture this data, identifying which parts of the code are 'hot' (frequently used) and which are 'cold'. When we rebuild the kernel with these profiles, the compiler can make much smarter optimization decisions tailored to actual Android workloads.
To understand the impact of this optimization, consider these key facts:
- On Android, the kernel accounts for about 40% of CPU time.
- We are already using AutoFDO to optimize native executables and libraries in the userspace, achieving about 4% cold app launch improvement and a 1% boot time reduction.
Real-World Performance Wins
We have seen impressive improvements across key Android metrics by leveraging profiles from controlled lab environments. These profiles were collected using app crawling and launching, and measured on Pixel devices across the 6.1, 6.6, and 6.12 kernels.
The most noticeable improvements are listed below. Details on the AutoFDO profiles for these kernel versions can be found in the respective Android kernel repositories for android16-6.12 and android15-6.6 kernels.
How It Works: The Pipeline
Our deployment strategy involves a sophisticated pipeline to ensure profiles stay relevant and performance remains stable.
Step 1: Profile Collection
While we rely on our internal test fleet to profile userspace binaries, we shifted to a controlled lab environment for the Generic Kernel Image (GKI). Decoupling profiling from the device release cycle allows for flexible, immediate updates independent of deployed kernel versions. Crucially, tests confirm that this lab-based data delivers performance gains comparable to those from real-world fleets.
- Tools & Environment: We flash test devices with the latest kernel image and use simpleperf to capture instruction execution streams. This process relies on hardware capabilities to record branching history, specifically utilizing the ARM Embedded Trace Extension (ETE) and ARM Trace Buffer Extension (TRBE) on Pixel devices.
- Workloads: We construct a representative workload using the top 100 most popular apps from the Android App Compatibility Test Suite (C-Suite). To capture the most accurate data, we focus on:
  - App Launching: Optimizing for the most visible user delays
  - AI-Driven App Crawling: Simulating continuous, evolving user interactions
  - System-Wide Monitoring: Capturing not only foreground app activities, but also critical background workloads and inter-process communications
- Validation: This synthesized workload shows an 85% similarity to execution patterns collected from our internal fleet.
- Targeted Data: By repeating these tests sufficiently, we capture high-fidelity execution patterns that accurately represent real-world user interaction with the most popular applications. Furthermore, this extensible framework allows us to seamlessly integrate additional workloads and benchmarks to broaden our coverage.
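As a rough sketch of what a collection step like this can look like with simpleperf (the flags, event name, and file names are illustrative; the exact invocation depends on the device, kernel, and simpleperf version):

```shell
# Record system-wide ETM/TRBE branch history for 30 seconds (illustrative flags).
simpleperf record -e cs-etm -a --duration 30 -o perf.data

# Convert the raw trace into an AutoFDO profile for the kernel binary.
simpleperf inject -i perf.data --output autofdo --binary vmlinux -o kernel.afdo
```

The resulting profile is then fed back into the kernel build so the compiler can optimize hot paths.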
Step 2: Profile Processing
We post-process the raw trace data to ensure it is clean, effective, and ready for the compiler.
- Aggregation: We consolidate data from multiple test runs and devices into a single system view.
- Conversion: We convert raw traces into the AutoFDO profile format, filtering out unwanted symbols as needed.
- Profile Trimming: We trim profiles to remove data for "cold" functions, allowing them to use standard optimization. This prevents regressions in rarely used code and avoids unnecessary increases in binary size.
Step 3: Profile Testing
Before deployment, profiles undergo rigorous verification to ensure they deliver consistent performance wins without stability risks.
- Profile & Binary Analysis: We strictly compare the new profile's content (including hot functions, sample counts, and profile size) against previous versions. We also use the profile to build a new kernel image, analyzing binaries to ensure that changes to the text section are consistent with expectations.
- Performance Verification: We run targeted benchmarks on the new kernel image. This confirms that it maintains the performance improvements established by previous baselines.
Continuous Updates
Code naturally "drifts" over time, so a static profile would eventually lose its effectiveness. To maintain peak performance, we run the pipeline continuously to drive regular updates:
- Regular Refresh: We refresh profiles in Android kernel LTS branches ahead of each GKI release, ensuring every build includes the latest profile data.
- Future Expansion: We are currently delivering these updates to the android16-6.12 and android15-6.6 branches and will expand support to newer GKI versions, such as the upcoming android17-6.18.
Ensuring Stability
To further guarantee consistent behavior, we apply a "conservative by default" strategy. Functions not captured in our high-fidelity profiles are optimized using standard compiler methods. This ensures that the "cold" or rarely executed parts of the kernel behave exactly as they would in a standard build, preventing performance regressions or unexpected behaviors in corner cases.
Looking Ahead
We are currently deploying AutoFDO across the android16-6.12 and android15-6.6 branches. Beyond this initial rollout, we see several promising avenues to further enhance the technology:
- Expanded Reach: We look forward to deploying AutoFDO profiles to newer GKI kernel versions and additional build targets beyond the current aarch64 support.
- GKI Module Optimization: Currently, our optimization is focused on the main kernel binary (vmlinux). Expanding AutoFDO to GKI modules could bring performance benefits to a larger portion of the kernel subsystem.
- Vendor Module Support: We are also interested in supporting AutoFDO for vendor modules built using the Driver Development Kit (DDK). With support already available in our build system (Kleaf) and profiling tools (simpleperf), vendors can apply these same optimization techniques to their specific hardware drivers.
- Broader Profile Coverage: There is potential to collect profiles from a wider range of Critical User Journeys (CUJs) to optimize them.
By bringing AutoFDO to the Android kernel, we're ensuring that the very foundation of the OS is optimized for the way you use your device every day.
10 Mar 2026 11:00pm GMT
05 Mar 2026
Android Developers Blog
Instagram and Facebook deliver instant playback and boost user engagement with Media3 PreloadManager
In the dynamic world of social media, user attention is won or lost quickly. Meta apps (Facebook and Instagram) are among the world's largest social platforms and serve billions of users globally. For Meta, delivering videos seamlessly isn't just a feature; it's the core of their user experience. Short-form videos, particularly on Facebook Newsfeed and Instagram Reels, have become a primary driver of engagement. They enable creative expression and rapid content consumption, connecting and entertaining people around the world.
This blog post takes you through the journey of how Meta transformed video playback for billions by delivering true instant playback.
Short-form videos lead to fast-paced interactions as users quickly scroll through their feeds. Delivering a seamless transition between videos in an ever-changing feed introduces unique hurdles for instantaneous playback, calling for solutions that go beyond traditional disk caching and standard reactive playback strategies.
To address the shift in consumption habits driven by the rise of short-form content, as well as the limitations of traditional long-form playback architecture, Jetpack Media3 introduced PreloadManager. This component allows developers to move beyond disk caching, offering granular control and customization to keep media ready in memory before the user hits play. Read the Media3 blog series to understand the technical details of media playback with PreloadManager.
Previously, Meta used a combination of warmup (to get players ready) and prefetch (to cache content on disk) for video delivery. While these methods helped improve network efficiency, they introduced significant challenges. Warmup required instantiating multiple player instances sequentially, which consumed significant memory and limited preloading to only a few videos. This high resource demand meant that a more scalable, robust solution was needed to deliver the instant playback expected on modern, fast-scrolling social feeds.
Integrating Media3 PreloadManager
Optimization and Performance Tuning
The team then performed extensive testing and iterations to optimize performance across Meta's diverse global device ecosystem. Initial aggressive preloading sometimes caused issues, including increased memory usage and scroll performance slowdowns. To solve this, they fine-tuned the implementation by using careful memory measurements, considering device fragmentation, and tailoring the system to specific UI patterns.
Meta applied different preloading strategies and tailored the behavior to match the specific UI patterns of each app:
-
Facebook Newsfeed: The UI prioritizes the video currently coming into view. The manager preloads only the current video to ensure it starts the moment the user pauses their scroll. This "current-only" focus minimizes data and memory footprints in an environment where users may see many static posts between videos. While the system is presently designed to preload just the video in view, it can be adjusted to also preload upcoming (future) videos.
-
Instagram Reels: This is a pure video environment where users swipe vertically. For this UI, the team implemented an "adjacent preload" strategy. The PreloadManager keeps the videos immediately after the current Reel ready in memory. This bi-directional approach ensures that whether a user swipes up or down, the transition remains instant and smooth. The result was a dramatic improvement in the Quality of Experience (QoE) including improvements in Playback Start and Time to First Frame for the user.
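The two strategies above can be sketched as a pure function that computes the preload window from the current feed position. Note that `preloadWindow` is a hypothetical helper for illustration, not Meta's or Media3's actual code:

```kotlin
// Hypothetical sketch: choose which feed indices to keep preloaded.
// "Current-only" suits mixed feeds like Facebook Newsfeed; "adjacent"
// suits pure vertical video feeds like Instagram Reels, where a user
// may swipe either up or down next.
fun preloadWindow(current: Int, itemCount: Int, adjacent: Boolean): List<Int> {
    val candidates =
        if (adjacent) listOf(current - 1, current, current + 1)
        else listOf(current)
    return candidates.filter { it in 0 until itemCount }
}
```

A real integration would feed this window to the preload machinery, adding and removing items as the user scrolls, rather than recomputing it from scratch.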
Scaling for a diverse global device ecosystem
Scaling a high-performance video stack across billions of devices requires more than just aggressive preloading; it requires intelligence. Meta faced initial challenges with memory pressure and scroll lag, particularly on mid-to-low-end hardware. To solve this, they built a Device Stress Detection system around the Media3 implementation. The apps now monitor I/O and CPU signals in real-time. If a device is under heavy load, preloading is paused to prioritize UI responsiveness.
This device-aware optimization ensures that the benefit of instant playback doesn't come at the cost of system stability, allowing even users on older hardware to experience a smoother, uninterrupted feed.
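The gating decision itself can be expressed independently of how the load signals are collected. The type, names, and thresholds below are illustrative assumptions, not Meta's actual implementation:

```kotlin
// Hypothetical device-stress gate: skip preloading when the device is
// under load so UI responsiveness takes priority. Thresholds are made up.
data class DeviceSignals(
    val cpuLoad: Double,        // 0.0..1.0: fraction of CPU in use
    val memoryPressure: Double, // 0.0..1.0: fraction of memory budget used
    val ioWait: Double          // 0.0..1.0: fraction of time blocked on I/O
)

fun shouldPreload(
    signals: DeviceSignals,
    cpuMax: Double = 0.80,
    memMax: Double = 0.85,
    ioMax: Double = 0.50
): Boolean =
    signals.cpuLoad < cpuMax &&
        signals.memoryPressure < memMax &&
        signals.ioWait < ioMax
```

On a low-end device under load the gate returns false and preloading pauses; once the signals recover, preloading resumes without user-visible impact.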
Architectural wins and code health
Beyond the user-facing metrics, the migration to Media3 PreloadManager offered long-term architectural benefits. While the integration and tuning process required multiple iterations to balance performance, the resulting codebase is more maintainable. The team found that the PreloadManager API integrated cleanly with the existing Media3 ecosystem, allowing for better resource sharing. For Meta, the adoption of Media3 PreloadManager was a strategic investment in the future of video consumption.
By adopting preloading and adding device-intelligent gates, they successfully increased total watch time on their apps and improved the overall engagement of their global community.
The proactive architecture delivered immediate and measurable improvements across both platforms.
-
Facebook experienced faster playback starts, decreased playback stall rates, and a reduction in bad sessions (rebuffering, delayed start times, lower quality, etc.), which overall resulted in higher watch time.
-
Instagram saw faster playback starts and an increase in total watch time. Eliminating join latency (the interval from the user's action to the first frame displayed) directly increased engagement metrics, and fewer buffering interruptions meant users watched more content.

As media consumption habits evolve, the demand for instant experiences will continue to grow. Implementing proactive memory management and optimizing for scale and device diversity ensures your application can meet these expectations efficiently.
-
Prioritize intelligent preloading
Focus on delivering a reliable experience by minimizing stutters and loading times through preloading. Rather than simple disk caching, leveraging memory-level preloading ensures that content is ready the moment a user interacts with it.
-
Align your implementation with UI patterns
Customize preloading behavior to match your app's UI. For example, use a "current-only" focus for mixed feeds like Facebook Newsfeed to save memory, and an "adjacent preload" strategy for vertical video environments like Instagram Reels.
-
Leverage Media3 for long-term code health
Integrating with Media3 APIs rather than a custom caching solution allows for better resource sharing between the player and the PreloadManager, enabling you to manage multiple videos with a single player instance. This results in a future-proof codebase that is easier for engineering teams to maintain and optimize over time, and that benefits from the latest feature updates.
-
Implement device aware optimizations
Broaden your market reach by testing on various devices, including mid-to-low-end models. Use real-time signals like CPU, memory, and I/O to adapt features and resource usage dynamically.
To get started and learn more:
-
Explore the Media3 PreloadManager documentation.
-
Read the blog series for advanced technical and implementation details.
-
Check out the sample app to see preloading in action.
Now you know the secrets for instant playback. Go try them out!
05 Mar 2026 6:03pm GMT
Elevating AI-assisted Android development and improving LLMs with Android Bench

Posted by Matthew McCullough, VP of Product Management, Android Developer
We want to make it faster and easier for you to build high-quality Android apps, and one way we're helping you be more productive is by putting AI at your fingertips. We know you want AI that truly understands the nuances of the Android platform, which is why we've been measuring how LLMs perform Android development tasks. Today we released the first version of Android Bench, our official leaderboard of LLMs for Android development.
Our goal is to provide model creators with a benchmark to evaluate LLM capabilities for Android development. By establishing a clear, reliable baseline for what high-quality Android development looks like, we're helping model creators identify gaps and accelerate improvements. That gives developers a wider range of helpful models to choose from for AI assistance, which will ultimately lead to higher-quality apps across the Android ecosystem.
Designed with real-world Android development tasks
We created the benchmark by curating a task set against a range of common Android development areas. It is composed of real challenges of varying difficulty, sourced from public GitHub Android repositories. Scenarios include resolving breaking changes across Android releases, domain-specific tasks like networking on wearables, and migrating to the latest version of Jetpack Compose, to name a few.
Each evaluation attempts to have an LLM fix the issue reported in the task, which we then verify using unit or instrumentation tests. This model-agnostic approach allows us to measure a model's ability to navigate complex codebases, understand dependencies, and solve the kind of problems you encounter every day.
We validated this methodology with several LLM makers, including JetBrains.
"Measuring AI's impact on Android is a massive challenge, so it's great to see a framework that's this sound and realistic. While we're active in benchmarking ourselves, Android Bench is a unique and welcome addition. This methodology is exactly the kind of rigorous evaluation Android developers need right now."
- Kirill Smelov, Head of AI Integrations at JetBrains.
The first Android Bench results
For this initial release, we wanted to purely measure model performance and not focus on agentic or tool use. The models were able to successfully complete 16-72% of the tasks. This wide range demonstrates that some LLMs already have a strong baseline of Android knowledge, while others have more room for improvement. Regardless of where the models stand today, we anticipate continued improvement as we encourage LLM makers to enhance their models for Android development.
The LLM with the highest average score for this first release is Gemini 3.1 Pro, followed closely by Claude Opus 4.6. You can try all of the models we evaluated for AI assistance for your Android projects by using API keys in the latest stable version of Android Studio.
Providing developers and LLM makers with transparency
We value an open and transparent approach, so we made our methodology, dataset, and test harness publicly available on GitHub.
One challenge for any public benchmark is the risk of data contamination, where models may have seen evaluation tasks during their training process. We have taken measures to ensure our results reflect genuine reasoning rather than memorization or guessing, including a thorough manual review of agent trajectories and the integration of a canary string to discourage training on the dataset.
Looking ahead, we will continue to evolve our methodology to preserve the integrity of the dataset, while also making improvements for future releases of the benchmark, for example by growing the quantity and complexity of tasks.
We're looking forward to how Android Bench can improve AI assistance long-term. Our vision is to close the gap between concept and quality code. We're building the foundation for a future where no matter what you imagine, you can build it on Android.
05 Mar 2026 2:03pm GMT
Battery Technical Quality Enforcement is Here: How to Optimize Common Wake Lock Use Cases
In recognition that excessive battery drain is top of mind for Android users, Google has been taking significant steps to help developers build more power-efficient apps. On March 1st, 2026, the Google Play Store began rolling out wake lock technical quality treatments to address excessive battery drain. The treatment will roll out gradually to impacted apps over the following weeks. Apps that consistently exceed the "Excessive Partial Wake Lock" threshold in Android vitals may see tangible impacts on their store presence, including warnings on their store listing and exclusion from discovery surfaces such as recommendations.
Users may see a warning on your store listing if your app exceeds the bad behavior threshold.
This initiative elevates battery efficiency to a core vitals metric alongside stability metrics like crashes and ANRs. The "bad behavior threshold" is defined as holding a non-exempted partial wake lock for at least two hours cumulatively while the screen is off, in more than 5% of user sessions over the past 28 days. A wake lock is exempted if it is a system-held wake lock that offers clear user benefits that cannot be further optimized, such as audio playback, location access, or user-initiated data transfer. You can view the full definition of excessive wake locks in our Android vitals documentation.
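As a simplified, hypothetical model of the threshold just described (names and structure are illustrative only, not the actual Android vitals implementation):

```kotlin
// A session is "bad" if non-exempt partial wake locks were held for at
// least 2 hours while the screen was off; an app exceeds the threshold
// when more than 5% of sessions in the 28-day window are bad.
val badSessionSeconds = 2 * 60 * 60  // 2 hours
val maxBadSessionRate = 0.05         // 5% of user sessions

fun isBadSession(screenOffWakeLockSeconds: Int): Boolean =
    screenOffWakeLockSeconds >= badSessionSeconds

fun exceedsThreshold(sessionWakeLockSeconds: List<Int>): Boolean {
    if (sessionWakeLockSeconds.isEmpty()) return false
    val badRate = sessionWakeLockSeconds.count { isBadSession(it) }
        .toDouble() / sessionWakeLockSeconds.size
    return badRate > maxBadSessionRate
}
```

For instance, one session of 2+ hours of screen-off wake lock time out of ten sessions yields a 10% bad-session rate, which exceeds the 5% threshold.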
As part of our ongoing initiative to improve battery life across the Android ecosystem, we have analyzed thousands of apps and how they use partial wake locks. While wake locks are sometimes necessary, we often see apps holding them inefficiently or unnecessarily, when more efficient solutions exist. This blog will go over the most common scenarios where excessive wake locks occur and our recommendations for optimizing wake locks. We have already seen measurable success from partners like WHOOP, who leveraged these recommendations to optimize their background behavior.
Using a foreground service vs partial wake locks
We've often seen developers struggle to understand the difference between two concepts when doing background execution: foreground service and partial wake locks.
A foreground service is a lifecycle API that signals to the system that an app is performing user-perceptible work and should not be killed to reclaim memory, but it does not automatically prevent the CPU from sleeping when the screen turns off. In contrast, a partial wake lock is a mechanism specifically designed to keep the CPU running even while the screen is off.
While a foreground service is often necessary to continue a user action, manually acquiring a partial wake lock is only necessary in conjunction with a foreground service, and only for the duration of the CPU activity. In addition, you don't need a wake lock if you're already using an API that keeps the device awake.
Refer to the flow chart in Choose the right API to keep the device awake to ensure you have a strong understanding of what tool to use to avoid acquiring a wake lock in scenarios where it's not necessary.
Third party libraries acquiring wake locks
It is common for an app to discover that it is flagged for excessive wake locks held by a third-party SDK or system API acting on its behalf. To identify and resolve these wake locks, we recommend the following steps:
-
Check Android vitals: Find the exact name of the offending wake lock in the excessive partial wake locks dashboard. Cross-reference this name with the Identify wake locks created by other APIs guidance to see if it was created by a known system API or Jetpack library. If it is, you may need to optimize your usage of the API and can refer to the recommended guidance.
-
Capture a System Trace: If the wake lock cannot be easily identified, reproduce the wake lock issue locally using a system trace and inspect it with the Perfetto UI. You can learn more about how to do this in the Debugging other types of excessive wake locks section of this blog post.
-
Evaluate Alternatives: If an inefficient third-party library is responsible and cannot be configured to respect battery life, consider communicating the issue with the SDK's owners, finding an alternative SDK or building the functionality in-house.
Below is a breakdown of some of the specific use cases we have reviewed, along with the recommended path to optimize your wake lock implementation.
User-Initiated Upload or Download
Example use cases:
-
Video streaming apps where the user triggers a download of a large file for offline access.
-
Media backup apps where the user triggers uploading their recent photos via a notification prompt.
How to reduce wake locks:
-
Do not acquire a manual wake lock. Instead, use the User-Initiated Data Transfer (UIDT) API. This is the designated path for long running data transfer tasks initiated by the user, and it is exempted from excessive wake lock calculations.
One-Time or Periodic Background Syncs
Example use cases:
-
An app performs periodic background syncs to fetch data for offline access.
-
Pedometer apps that fetch step count periodically.
How to reduce wake locks:
-
Do not acquire a manual wake lock. Use WorkManager configured for one-time or periodic work. WorkManager respects system health by batching tasks and has a minimum periodic interval (15 minutes), which is generally sufficient for background updates.
- If you identify wake locks created by WorkManager or JobScheduler with high wake lock usage, it may be because you've misconfigured your worker to not complete in certain scenarios. Consider analyzing the worker stop reasons, particularly if you're seeing high occurrences of STOP_REASON_TIMEOUT.
// Observe the worker's state and log why it stopped (e.g. STOP_REASON_TIMEOUT)
workManager.getWorkInfoByIdFlow(syncWorker.id)
    .collect { workInfo ->
        if (workInfo != null) {
            val stopReason = workInfo.stopReason
            logStopReason(syncWorker.id, stopReason)
        }
    }
-
In addition to logging worker stop reasons, refer to our documentation on debugging your workers. Also, consider collecting and analyzing system traces to understand when wake locks are acquired and released.
- Finally, check out our case study with WHOOP, where they were able to discover an issue with configuration of their workers and reduce their wake lock impact significantly.
Bluetooth Communication
Example use cases:
-
Companion device app prompts the user to pair their Bluetooth external device.
-
Companion device app listens for hardware events on an external device and surfaces a user-visible change in a notification.
-
Companion device app's user initiates a file transfer between the phone and the Bluetooth device.
-
Companion device app performs occasional firmware updates to an external device via Bluetooth.
How to reduce wake locks:
-
Use companion device pairing to pair Bluetooth devices to avoid acquiring a manual wake lock during Bluetooth pairing.
-
Consult the Communicate in the background guidance to understand how to do background Bluetooth communication.
-
Using WorkManager is often sufficient if there is no user impact to a delayed communication. If a manual wake lock is deemed necessary, only hold the wake lock for the duration of Bluetooth activity or processing of the activity data.
Location Tracking
Example use cases:
-
Fitness apps that cache location data for later upload, such as plotting running routes.
-
Food delivery apps that pull location data at a high frequency to update progress of delivery in a notification or widget UI.
How to reduce wake locks:
-
Consult our guidance to Optimize location usage. Consider implementing timeouts, leveraging location request batching, or utilizing passive location updates to ensure battery efficiency.
-
When requesting location updates using the FusedLocationProvider or LocationManager APIs, the system automatically triggers a device wake-up during the location event callback. This brief, system-managed wake lock is exempted from excessive partial wake lock calculations.
- Avoid acquiring a separate, continuous wake lock for caching location data, as this is redundant. Instead, persist location events in memory or local storage and leverage WorkManager to process them at periodic intervals.
override fun onCreate(savedInstanceState: Bundle?) {
    locationCallback = object : LocationCallback() {
        override fun onLocationResult(locationResult: LocationResult?) {
            locationResult ?: return
            // The system wakes the CPU for a short duration for this callback
            for (location in locationResult.locations) {
                // Store data in memory to process at another time
            }
        }
    }
}
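The in-memory buffering half of this pattern has no Android dependencies and can be sketched directly. Note that `LocationBuffer` is a hypothetical illustration, not a platform API:

```kotlin
// Hypothetical in-memory buffer: accumulate location fixes during the
// brief system-managed wake-up, then flush the batch from a periodic
// worker instead of holding a wake lock of your own.
class LocationBuffer<T>(private val capacity: Int = 500) {
    private val pending = ArrayDeque<T>()

    fun add(fix: T) {
        if (pending.size == capacity) pending.removeFirst() // drop the oldest fix
        pending.addLast(fix)
    }

    fun flush(): List<T> {
        val batch = pending.toList()
        pending.clear()
        return batch
    }
}
```

A worker scheduled with WorkManager can call `flush()` on its normal cadence, so location data is persisted in batches without any additional wake lock.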
High Frequency Sensor Monitoring
Example use cases:
-
Pedometer apps that passively collect steps, or distance traveled.
-
Safety apps that monitor the device sensors for rapid changes in real time, to provide features such as crash detection or fall detection.
How to reduce wake locks:
-
If using SensorManager, reduce usage to periodic intervals and only when the user has explicitly granted access through a UI interaction. High frequency sensor monitoring can drain the battery heavily due to the number of CPU wake-ups and processing that occurs.
-
If you're tracking step counts or distance traveled, rather than using SensorManager, leverage Recording API or consider utilizing Health Connect to access historical and aggregated device step counts to capture data in a battery-efficient manner.
-
If you're registering a sensor with SensorManager, specify a maxReportLatencyUs of 30 seconds or more to leverage sensor batching to minimize the frequency of CPU interrupts. When the device is subsequently woken by another trigger such as a user interaction, location retrieval, or a scheduled job, the system will immediately dispatch the cached sensor data.
val accelerometer = sensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER)
sensorManager.registerListener(
    this,
    accelerometer,
    samplingPeriodUs,   // How often to sample data
    maxReportLatencyUs  // Key for sensor batching
)
-
If your app requires both location and sensor data, synchronize their event retrieval and processing. By piggybacking sensor readings onto the brief wake lock the system holds for location updates, you avoid needing a wake lock to keep the CPU awake. Use a worker or a short-duration wake lock to handle the upload and processing of this combined data.
Remote Messaging
Example use cases:
-
Video or sound monitoring companion apps that need to monitor events that occur on an external device connected using a local network.
-
Messaging apps that maintain a network socket connection with the desktop variant.
How to reduce wake locks:
-
If the network events can be processed on the server side, use FCM to receive information on the client. You may choose to schedule an expedited worker if additional processing of FCM data is required.
-
If events must be processed on the client side via a socket connection, a wake lock is not needed to listen for event interrupts. When data packets arrive at the Wi-Fi or Cellular radio, the radio hardware triggers a hardware interrupt in the form of a kernel wake lock. You may then choose to schedule a worker or acquire a wake lock to process the data.
- For example, if you're using ktor-network to listen for data packets on a network socket, you should only acquire a wake lock when packets have been delivered to the client and need to be processed.
val readChannel = socket.openReadChannel()
while (!readChannel.isClosedForRead) {
    // The CPU can safely sleep here while waiting for the next packet
    val packet = readChannel.readRemaining(1024)
    if (!packet.isEmpty) {
        // Data arrived: the system woke the CPU; keep it awake via a manual
        // wake lock (urgent) or by scheduling a worker (non-urgent)
        performWorkWithWakeLock {
            val data = packet.readBytes()
            // Additional logic to process data packets
        }
    }
}
Summary
By adopting these recommended solutions for common use cases like background syncs, location tracking, sensor monitoring, and network communication, developers can reduce unnecessary wake lock usage. To continue learning, read our other technical blog post or watch our technical video on how to discover and debug wake locks: Optimize your app battery using Android vitals wake lock metric. Also consult our updated wake lock documentation. To help us continue improving our technical resources, please share any additional feedback on our guidance in our documentation feedback survey.
05 Mar 2026 12:00am GMT
04 Mar 2026
Android Developers Blog
How WHOOP decreased excessive partial wake lock sessions by over 90%
Posted by Breana Tate, Developer Relations Engineer, Mayank Saini, Senior Android Engineer, Sarthak Jagetia, Senior Android Engineer and Manmeet Tuteja, Android Engineer II
Building an Android app for a wearable means the real work starts when the screen turns off. WHOOP helps members understand how their body responds to training, recovery, sleep, and stress, and for the many WHOOP members on Android, reliable background syncing and connectivity are what make those insights possible.
Earlier this year, Google Play released a new metric in Android vitals: Excessive partial wake locks. This metric measures the percentage of user sessions where cumulative, non-exempt wake lock usage exceeds 2 hours in a 24-hour period. The aim of this metric is to help you identify and address possible sources of battery drain, which is crucial for delivering a great user experience.
Beginning March 1, 2026, apps that continue to miss the quality threshold may be excluded from Google Play discovery surfaces. A warning may also be placed on the app's Google Play Store listing, indicating that it might use more battery than expected.
According to Mayank Saini, Senior Android Engineer at WHOOP, this "presented the team with an opportunity to raise the bar on Android efficiency" after Android vitals flagged the app's excessive partial wake lock rate at 15%, well above the recommended 5% threshold.
The team viewed the Android vitals metric as a clear signal that their background work was holding the CPU awake longer than necessary. Resolving this would allow them to continue to deliver a great user experience while simultaneously decreasing wasted background time and maintaining reliable and timely Bluetooth connectivity and syncing.
Identifying the issue
To figure out where to get started, the team first turned to Android vitals for more insight into which wake locks were affecting the metric. By consulting the Android vitals excessive partial wake locks dashboard, they were able to identify the biggest contributor to excessive partial wake locks as one of their WorkManager workers (identified in the dashboard as androidx.work.impl.background.systemjob.SystemJobService). To support the WHOOP "always-on experience", the app uses WorkManager for background tasks like periodic syncing and delivering recurring updates to the wearable.
While the team was aware that WorkManager acquires a wake lock while executing tasks in the background, they previously did not have visibility into how all of their background work (beyond just WorkManager) was distributed until the introduction of the excessive partial wake locks metric in Android vitals.
With the dashboard identifying WorkManager as the main contributor, the team was then able to focus their efforts on identifying which of their workers was contributing the most and work towards resolving the issue.
Making use of internal metrics and data to better narrow down the cause
WHOOP already had internal infrastructure set up to monitor WorkManager metrics. They periodically monitor:
-
Average Runtime: For how long does the worker run?
-
Timeouts: How often is the worker timing out instead of completing?
-
Retries: How often does the worker retry if the work timed out or failed?
-
Cancellations: How often was the work cancelled?
Tracking more than just worker successes and failures gives the team visibility into their work's efficiency.
The internal metrics flagged high average runtime for a select few workers, enabling them to narrow the investigation down even further.
In addition to their internal metrics, the team also used Android Studio's Background Task Inspector to inspect and debug the workers of interest, with a specific focus on associated wake locks, to align with the metric flagged in Android vitals.
Investigation: Distinguishing between worker variants
WHOOP uses both one-time and periodic scheduling for some workers. This allows the app to reuse the same Worker logic for identical tasks with the same success criteria, differing only in timing.
Using their internal metrics made it possible to narrow their search to a specific worker, but they couldn't tell if the bug occurred when the worker was one-time, periodic, or both. So, they rolled out an update to use WorkManager's setTraceTag method to distinguish between the one-time and periodic variants of the same Worker.
This extra detail would allow them to definitively identify which Worker variant (periodic or one-time) was contributing the most to sessions with excessive partial wake locks. However, the team was surprised when the data revealed that neither variant appeared to be contributing more than the other.
Manmeet Tuteja, Android Engineer II at WHOOP said "that split also helped us confirm the issue was happening in both variants, which pointed away from scheduling configuration and toward a shared business logic problem inside the worker implementation."
Diving deeper on worker behavior and fixing the root cause
With the knowledge that they needed to take a look at logic within the worker, the team re-examined worker behavior for the workers that had been flagged during their investigation. Specifically, they were looking for instances in which work may have been getting stuck and not completing.
All of this culminated in finding the root cause of the excessive wake locks:
A CoroutineWorker that was designed to wait for a connection to the WHOOP sensor before proceeding.
If the work started with no sensor connected, whoopSensorFlow (which indicates whether the sensor is connected) was null. The SensorWorker didn't treat this as an early-exit condition and kept running, effectively waiting indefinitely for a connection. As a result, WorkManager held a partial wake lock until the work timed out, leading to high background wake lock usage and frequent, unwanted rescheduling of the SensorWorker.
To address this, the WHOOP team updated the worker logic to check the connection status before attempting to execute the core business logic.
If the sensor isn't available, the worker exits, avoiding a timeout scenario and releasing the wake lock. The following code snippet shows the solution:
class SensorWorker(appContext: Context, params: WorkerParameters) :
    CoroutineWorker(appContext, params) {

    override suspend fun doWork(): Result {
        ...
        // Check the sensor state and perform work or return failure
        return whoopSensorFlow.replayCache
            .firstOrNull()
            ?.let { cachedData ->
                processSensorData(cachedData)
                Result.success()
            }
            ?: run {
                Result.failure()
            }
    }
}
Achieving a 90% decrease in sessions with excessive partial wake locks
After rolling out the fix, the team continued to monitor the Android vitals dashboard to confirm the impact of the changes.
Ultimately, WHOOP saw their excessive partial wake lock percentage drop from 15% to less than 1% just 30 days after implementing the changes to their Worker.

As a result of the changes, the team has seen fewer instances of work timing out without completing, resulting in lower average runtimes.
The WHOOP team's advice to other developers who want to improve their background work's efficiency:
Get Started
If you're interested in trying to reduce your app's excessive partial wake locks or trying to improve worker efficiency, view your app's excessive partial wake locks metric in Android vitals, and review the wake locks documentation for more best practices and debugging strategies.
04 Mar 2026 6:00pm GMT
A new era for choice and openness
Expanded billing choice on Google Play for users and developers
Google Play is giving developers even more billing choice and freedom in how they handle transactions. Mobile developers will have the option to use their own billing systems in their app alongside Google Play's billing, or they can guide users outside of their app to their own websites for purchases. Our goal is to offer this flexibility in a way that maximizes choice and safety for users.
Leading the way in store choice
We're introducing a program that makes sideloading qualified app stores even easier. Our new Registered App Stores program will provide a more streamlined installation flow for Android app stores that meet certain quality and safety benchmarks.
Once this change has rolled out, app stores that choose to participate in this optional program will be registered with us, and users who sideload them will see a simplified installation flow (see graphic below). If a store chooses not to participate, nothing changes: it retains the same experience as any other sideloaded app on Android.
This gives app stores more ways to reach users and gives users more ways to easily and safely access the apps and games they love.
This Registered App Store program will begin outside of the US first, and we intend to bring it to the US as well, subject to court approval.
Lower pricing and new programs to support developers
Google Play's fees are already the lowest among major app stores, and today we are taking this even further by introducing a new business model that decouples fees for using our billing system and introduces new, lower service fees. Once this rolls out:
-
Billing: For those developers who choose to use Google Play's billing system, they will be charged a market-specific rate separate from the service fee. In the European Economic Area (EEA), UK, and US that rate will be 5%.
-
Service Fees:
-
For new installs (first-time installs from users after the new fees are launched in a region), we are reducing the in-app purchase (IAP) service fee to 20%.
-
We are launching an Apps Experience Program and revamping our Google Play Games Level Up program to incentivize building great software experiences across Android form factors, with clear quality benchmarks and enhanced user benefits. Developers who choose to participate in these programs will have even lower rates: participating IAP developers will have a 20% service fee on transactions from existing installs and a 15% fee on transactions from new app installs.
-
Our service fee for recurring subscriptions will be 10%.
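To make the fee arithmetic concrete, here is a hypothetical sketch based only on the percentages stated above (EEA/UK/US rates); the function names are made up for illustration:

```kotlin
// Hypothetical illustration of the new fee arithmetic. Only the rates
// come from the announcement: a 5% billing fee (charged separately, and
// only when using Google Play's billing system) plus a case-dependent
// service fee (e.g. 20% new-install IAP, 15% in a quality program,
// 10% recurring subscriptions).
val playBillingRate = 0.05

fun totalFee(amount: Double, serviceRate: Double, usesPlayBilling: Boolean): Double {
    val serviceFee = amount * serviceRate
    val billingFee = if (usesPlayBilling) amount * playBillingRate else 0.0
    return serviceFee + billingFee
}
```

For example, a 100-unit IAP from a new install using Play billing would carry a 20% service fee plus the 5% billing fee, 25 units in total, while a subscription processed through the developer's own billing would carry only the 10% service fee.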
Rollout timelines
This is a significant evolution, and we plan to share additional details in the coming months. To make sure we have enough time to build the necessary technical infrastructure, enable a seamless transition for developers, and ensure alignment with local regulations, these updated fees will roll out on the following staggered schedule:
- By June 30: EEA, the United Kingdom, and the US
- By September 30: Australia
- By December 31: Korea and Japan
- By September 30, 2027: the rest of the world
We will also launch the updated Google Play Games Level Up program and the new Apps Experience Program by September 30 for the EEA, UK, US, and Australia; they will then roll out in line with the rest of the schedule above.
We plan to launch Registered App Stores with a version of a major Android release by the end of the year.
Resolving disputes with Epic Games
With these updates, we have also resolved our disputes worldwide with Epic Games.
We believe these changes will make for a stronger Android ecosystem with even more successful developers and higher-quality apps and games available across more form factors for everyone. We look forward to our continued work with the developer community to build the next generation of digital experiences.
04 Mar 2026 2:40pm GMT
03 Mar 2026
Android Developers Blog
Android devices extend seamlessly to connected displays

We are excited to announce a major milestone in bringing mobile and desktop computing closer together on Android: connected display support has reached general availability with the Android 16 QPR3 release!
As shown at Google I/O 2025, connected displays allow users to connect their Android devices to an external monitor and instantly access a desktop windowing environment. Apps can be used in free-form or maximized windows and users can multitask just like they would on a desktop OS.
Google and Samsung have collaborated to bring a seamless and powerful desktop windowing experience to devices across the Android ecosystem running Android 16 while connected to an external display.
This is now generally available on supported devices*: users can connect their supported Pixel and Samsung phones to external monitors, enabling new opportunities for building more engaging and more productive app experiences that adapt across form factors.
How does it work?
When a supported Android phone or foldable is connected to an external display, a new desktop session starts on the connected display.
The experience on the connected display is similar to the experience on a desktop, including a taskbar that shows active apps and lets users pin apps for quick access. Users are able to run multiple apps side by side simultaneously in freely resizable windows on the connected display.
Phone connected to an external display with a desktop session on the display while the phone maintains its own state.
Why does it matter?
In the Android 16 QPR3 release, we finalized the windowing behaviors, taskbar interactions, and input compatibility (mouse and keyboard) that define the connected display experience. We also included compatibility treatments to scale windows and avoid app restarts when switching displays.
If your app is built with adaptive design principles, it will automatically have the desktop look and feel, and users will feel right at home. If the app is locked to portrait or assumes a touch-only interface, now is the time to modernize.
In particular, pay attention to these key best practices for optimal app experiences on connected displays:
-
Don't assume a constant Display object: The Display object associated with your app's context can change when an app window is moved to an external display or if the display configuration changes. Your app should gracefully handle configuration change events and query display metrics dynamically rather than caching them.
-
Account for density configuration changes: External displays can have vastly different pixel densities than the primary device screen. Ensure your layouts and resources adapt correctly to these changes to maintain UI clarity and usability. Use density-independent pixels (dp) for layouts, provide density-specific resources, and ensure your UI scales appropriately.
-
Correctly support external peripherals: When users connect to an external monitor, they often create a more desktop-like environment. This frequently involves using external keyboards, mice, trackpads, webcams, microphones, and speakers. Improve the support for keyboard and mouse interactions.
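To make the density point concrete, here is a plain-Kotlin sketch (a hypothetical helper, not an Android API) showing how the same dp value maps to different pixel counts at different densities:

```kotlin
// Hypothetical helper: convert density-independent pixels (dp) to raw pixels.
// On Android, read the density from the window's current Configuration
// (it can change when a window moves to another display) instead of caching it.
fun dpToPx(dp: Float, density: Float): Int = (dp * density).toInt()

fun main() {
    // A 48dp touch target at a typical phone density (2.625)
    // versus an external monitor at density 1.0:
    println(dpToPx(48f, 2.625f)) // 126
    println(dpToPx(48f, 1.0f))   // 48
}
```

Because the pixel count differs so much between displays, hard-coding pixel sizes leads to unusable UI when the window moves to the external monitor.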
Building for the desktop future with modern tools
We provide several tools to help you build the desktop experience. Let's recap the latest updates to our core adaptive libraries!
The biggest update in Jetpack WindowManager 1.5.0 is the addition of two new width window size classes: Large and Extra-large.
Window size classes are our official, opinionated set of viewport breakpoints that help you design and develop adaptive layouts. With 1.5.0, we're extending this guidance for screens that go beyond the size of typical tablets.
Here are the new width breakpoints:
- Large: For widths between 1200dp and 1600dp
- Extra-large: For widths ≥1600dp
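For illustration, the full set of width breakpoints can be sketched as a plain-Kotlin classifier (a hypothetical helper mirroring the documented thresholds, not the actual WindowSizeClass API):

```kotlin
// Hypothetical classifier mirroring the documented width breakpoints.
// The real API is WindowSizeClass.BREAKPOINTS_V2.computeWindowSizeClass(...).
fun widthSizeClass(widthDp: Int): String = when {
    widthDp >= 1600 -> "EXTRA_LARGE" // new in WindowManager 1.5.0
    widthDp >= 1200 -> "LARGE"       // new in WindowManager 1.5.0
    widthDp >= 840  -> "EXPANDED"
    widthDp >= 600  -> "MEDIUM"
    else            -> "COMPACT"
}

fun main() {
    println(widthSizeClass(1440)) // LARGE: e.g. a window on a desktop monitor
    println(widthSizeClass(1920)) // EXTRA_LARGE
}
```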
On very large surfaces, simply scaling up a tablet's Expanded layout isn't always the best user experience. An email client, for example, might comfortably show two panes (a mailbox and a message) in the Expanded window size class. But on an Extra-large desktop monitor, the email client could elegantly display three or even four panes, perhaps a mailbox, a message list, the full message content, and a calendar/tasks panel, all at once.
To include the new window size classes in your project, compute the size class from the WindowSizeClass.BREAKPOINTS_V2 set instead of WindowSizeClass.BREAKPOINTS_V1:
val currentWindowMetrics = WindowMetricsCalculator.getOrCreate()
    .computeCurrentWindowMetrics(LocalContext.current)
val sizeClass = WindowSizeClass.BREAKPOINTS_V2
    .computeWindowSizeClass(currentWindowMetrics)

if (sizeClass.isWidthAtLeastBreakpoint(
        WindowSizeClass.WIDTH_DP_LARGE_LOWER_BOUND)) {
    ... // Window is at least 1200 dp wide.
}
Navigation 3 is the latest addition to the Jetpack collection. Navigation 3, which just reached its first stable release, is a powerful navigation library designed to work with Compose.
Navigation 3 is also a great tool for building adaptive layouts by allowing multiple destinations to be displayed at the same time and allowing seamless switching between those layouts.
This system for managing your app's UI flow is based on Scenes. A Scene is a layout that displays one or more destinations at the same time. A SceneStrategy determines whether it can create a Scene. Chaining SceneStrategy instances together allows you to create and display different scenes for different screen sizes and device configurations.
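The chaining idea can be illustrated with a simplified plain-Kotlin miniature (hypothetical types, not the Navigation 3 API): each strategy either produces a scene for the current entries or returns null to defer to the next strategy in the chain.

```kotlin
// Simplified miniature of SceneStrategy chaining (hypothetical, not the real API).
fun interface MiniSceneStrategy {
    fun calculateScene(entries: List<String>): String?
}

// Try this strategy first; if it returns null, fall back to `next`.
infix fun MiniSceneStrategy.then(next: MiniSceneStrategy) =
    MiniSceneStrategy { entries -> calculateScene(entries) ?: next.calculateScene(entries) }

val threePane = MiniSceneStrategy { e -> if (e.size >= 3) "ThreePaneScene" else null }
val twoPane = MiniSceneStrategy { e -> if (e.size >= 2) "TwoPaneScene" else null }
val chained = threePane then twoPane

fun main() {
    println(chained.calculateScene(listOf("a", "b", "c"))) // ThreePaneScene
    println(chained.calculateScene(listOf("a", "b")))      // TwoPaneScene
    println(chained.calculateScene(listOf("a")))           // null: single-pane fallback
}
```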
For out-of-the-box canonical layouts, like list-detail and supporting pane, you can use the Scenes from the Compose Material 3 Adaptive library (available in version 1.3 and above).
It's also easy to build your own custom Scenes by modifying the Scene recipes or starting from scratch. For example, let's consider a Scene that displays three panes side by side:
class ThreePaneScene<T : Any>(
    override val key: Any,
    override val previousEntries: List<NavEntry<T>>,
    val firstEntry: NavEntry<T>,
    val secondEntry: NavEntry<T>,
    val thirdEntry: NavEntry<T>
) : Scene<T> {
    override val entries: List<NavEntry<T>> =
        listOf(firstEntry, secondEntry, thirdEntry)
    override val content: @Composable (() -> Unit) = {
        Row(modifier = Modifier.fillMaxSize()) {
            Column(modifier = Modifier.weight(1f)) { firstEntry.Content() }
            Column(modifier = Modifier.weight(1f)) { secondEntry.Content() }
            Column(modifier = Modifier.weight(1f)) { thirdEntry.Content() }
        }
    }
}
class ThreePaneSceneStrategy<T : Any>(val windowSizeClass: WindowSizeClass) : SceneStrategy<T> {
    override fun SceneStrategyScope<T>.calculateScene(entries: List<NavEntry<T>>): Scene<T>? {
        if (windowSizeClass.isWidthAtLeastBreakpoint(WIDTH_DP_LARGE_LOWER_BOUND)) {
            val lastThree = entries.takeLast(3)
            if (lastThree.size == 3 && lastThree.all { it.metadata.containsKey(MULTI_PANE_KEY) }) {
                val firstEntry = lastThree[0]
                val secondEntry = lastThree[1]
                val thirdEntry = lastThree[2]
                return ThreePaneScene(
                    key = Triple(firstEntry.contentKey, secondEntry.contentKey, thirdEntry.contentKey),
                    previousEntries = entries.dropLast(3),
                    firstEntry = firstEntry,
                    secondEntry = secondEntry,
                    thirdEntry = thirdEntry
                )
            }
        }
        return null
    }
}
val strategy = ThreePaneSceneStrategy(windowSizeClass) then TwoPaneSceneStrategy(windowSizeClass)

NavDisplay(
    ...,
    sceneStrategy = strategy,
    entryProvider = entryProvider {
        entry<MyScreen>(metadata = mapOf(MULTI_PANE_KEY to true)) { ... }
        ... other entries ...
    }
)
If there isn't enough space to display three (or two) panes, both of our custom scene strategies return null. In this case, NavDisplay falls back to displaying the last entry in the back stack in a single pane using SinglePaneScene.
By using scenes and strategies, you can add one-, two-, and three-pane layouts to your app!
Check out the documentation to learn more about how to create custom layouts using Scenes in Navigation 3.
Standalone adaptive layouts
If you need a standalone layout, the Compose Material 3 Adaptive library helps you create adaptive UIs like list-detail and supporting pane layouts that adapt themselves to window configurations automatically based on window size classes or device postures.
The good news is that the library is already up to date with the new breakpoints! Starting from version 1.2, the default pane scaffold directive functions support Large and Extra-large width window size classes.
You only need to opt in to the new breakpoints when retrieving the window adaptive info:
currentWindowAdaptiveInfo(supportLargeAndXLargeWidth = true)
Getting started
Explore the connected display feature in the latest Android release. Get Android 16 QPR3 on a supported device, then connect it to an external monitor to start testing your app today!
Dive into the updated documentation on multi-display support and window management to learn more about implementing these best practices.
Feedback
Your feedback is crucial as we continue to refine the connected display desktop experience. Share your thoughts and report any issues through our official feedback channels.
We're committed to making Android a versatile platform that adapts to the many ways users want to interact with their apps and devices. The improvements to connected display support are another step in that direction, and we think your users will love the desktop experiences you'll build!
*Note: At the time of writing, connected displays are supported on the Pixel 8, 9, and 10 series and on a wide array of Samsung devices, including the S26, Fold7, Flip7, and Tab S11.
03 Mar 2026 6:00pm GMT
Go from prompt to working prototype with Android Studio Panda 2
Android Studio Panda 2 is now stable and ready for you to use in production. This release brings new agentic capabilities to Android Studio, enabling the agent to create an entire working application from scratch with the AI-powered New Project flow, and allowing the agent to automate the manual work of dependency updates.
Whether you're building your first prototype or maintaining a large, established codebase, these updates bring new efficiency to your workflow by enabling Gemini in Android Studio to help more than ever.
Here's a deep dive into what's new:
Create New Projects with AI
Say goodbye to boilerplate starter templates that just get you to the start line. With the AI-powered New Project flow, you can now build a working app prototype with just a single prompt.
The agent reduces the time you spend setting up dependencies, writing boilerplate code, and creating basic navigation, allowing you to focus on the creative aspects of app development. The AI-powered New Project flow allows you to describe exactly what you want to build - you can even upload images for style inspiration. The agent then creates a detailed project plan for your review.
When you're ready, the agent turns your plan into a first draft of your app using Android best practices, including Kotlin, Compose, and the latest stable libraries. Under your direction, it creates an autonomous generation loop: it generates the necessary code, builds the project, analyzes any build errors, and attempts to self-correct the code, looping until your project builds successfully. It then deploys your app to an Android Emulator and walks through each screen, verifying that the implementation works correctly and is true to your original request. Whether you need a simple single-screen layout, a multi-page app with navigation, or even an application integrated with Gemini APIs, the AI-powered New Project flow can handle it.
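The loop described above can be sketched in plain Kotlin (hypothetical function names; the real agent is built into Android Studio and also deploys and verifies the app):

```kotlin
// Hypothetical sketch of the agent's generate-build-fix loop.
// `build` returns the list of build errors (empty on success);
// `fix` produces a corrected version of the code from those errors.
fun generateUntilBuildSucceeds(
    initialCode: String,
    maxAttempts: Int,
    build: (String) -> List<String>,
    fix: (String, List<String>) -> String
): Pair<String, Int> {
    var code = initialCode
    for (attempt in 1..maxAttempts) {
        val errors = build(code)
        if (errors.isEmpty()) return code to attempt // build succeeded
        code = fix(code, errors)                     // self-correct and loop
    }
    error("Build still failing after $maxAttempts attempts")
}

fun main() {
    // Simulated build that succeeds after two rounds of fixes.
    var fixes = 0
    val (_, attempts) = generateUntilBuildSucceeds(
        initialCode = "draft",
        maxAttempts = 5,
        build = { if (fixes >= 2) emptyList() else listOf("unresolved reference") },
        fix = { c, _ -> fixes++; "$c+fix$fixes" }
    )
    println(attempts) // 3
}
```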
Getting Started
To use the agent to set up a project, do the following:
1. Start Android Studio.
2. Select New Project on the Welcome to Android Studio screen (or File > New > New Project from within a project).
3. Select Create with AI.
4. Type your prompt into the text entry field and click Next. For best results, we recommend using a paid Gemini API key or a third-party remote model.
5. Name your app and click Finish to start the generation process.
6. Validate the finished app using the project plan and by running your app in the Android Emulator or on an Android device.
For more details on the New Project flow, check out the official documentation.
Share What You Build
We want to hear from you and see the apps you're able to build using the New Project flow. Share your apps with us by using #AndroidStudio in your social posts. We'll be amplifying some of your submissions on our social channels.
Unlock more with your Gemini API key
While the agent works out-of-the-box using Android Studio's default no-cost model, providing your own Google AI Studio API key unlocks the full potential of the assistant. By connecting a paid Gemini API key, you get access to the fastest and latest models from Google. It also allows the New Project flow to access Nano Banana, our best model for image generation, in order to ideate on UI design - allowing the agent to create richer, higher fidelity application designs.
In the AI-powered New Project flow, this increased capability means larger context windows for more tailored generation, as well as superior code quality. Furthermore, because the Agent uses Nano Banana behind the scenes for enhanced design generation, your prototype doesn't just work well; it features visually appealing, modern UI layouts and looks professional from the get-go.
Version Upgrade Assistant
Keeping your project dependencies up to date is time-consuming and often causes cascading build errors. You fix one issue by updating a dependency, only to introduce a new issue somewhere else.
The Version Upgrade Assistant in Android Studio just made that a problem of the past. You can now let AI do the heavy lifting of managing dependencies and boilerplate so you can focus on creating unique experiences for your users.
To use this feature, simply right-click in your version catalog, select AI, and then Update Dependencies.
You can also access the Version Upgrade Assistant from the Refactor menu: just choose Update all libraries with AI.
The agent runs multiple automated rounds, attempting builds, reading error messages, and adjusting versions, until the build succeeds. Instead of manually fighting through dependency conflicts, you can let the agent handle the iterative process of finding a stable configuration for you. Read the documentation for more information on the Version Upgrade Assistant.
Gemini 3.1 Pro is available in Android Studio
We released the Gemini 3.1 Pro preview, which improves on Gemini 3 Pro for reasoning and intelligence. You can access it in Android Studio by plugging in your Gemini API key. Put the new model to work on your toughest bugs, code completion, and UI logic. Let us know what you think of the new model.
Get started
Dive in and accelerate your development. Download Android Studio Panda 2 and start exploring these powerful new agentic features today.
03 Mar 2026 2:00pm GMT
02 Mar 2026
Android Developers Blog
Supercharge your Android development with 6 expert tips for Gemini in Android Studio

In January we announced Android Studio Otter 3 Feature Drop in stable, including Agent Mode enhancements and many other updates to provide more control and flexibility over using AI to help you build high-quality Android apps. To help you get the most out of Gemini in Android Studio and all the new capabilities, we sat down with Google engineers and Google Developer Experts to gather their best practices for working with the latest features, including Agent Mode and the New Project Assistant. Here are some useful insights to help you get the best out of your development:
1. Build apps from scratch with the New Project Assistant
The New Project Assistant-now available in the latest Canary builds-integrates Gemini with Android Studio's New Project wizard. By simply providing prompts and (optionally) design mockups, you can generate entire applications from scratch, including scaffolding, architecture, and Jetpack Compose layouts.
Integrated with the Android Emulator, it can deploy your build and "walk through" the app, making sure it's functioning correctly and that the rendered screens actually match your vision. Additionally, you can use Agent Mode to then continue to work on the app and iterate, leveraging Gemini to refine your app to fit your vision.
Also, while this feature works with the default (no-cost) model, we highly recommend using it with an AI Studio API key to access the latest models, like Gemini 3.1 Pro or 3.0 Flash, which excel at agentic workflows. Additionally, adding your API key allows the New Project Assistant to use Nano Banana behind the scenes to help with ideating on UI design, improving the visual fidelity of the generated application! - Trevor Johns, Developer Relations Engineer.
Dialog for setting up a new project.
2. Ask the Agent to refine your code by providing it with 'intentional' contexts
When using Gemini Agents, the quality of the output is directly tied to the boundaries you set. Don't just ask it to "fix this code": be very intentional with the context you provide, and be specific about what you want (and what you don't). Improve the output by providing recent blogs or docs so the model can make accurate suggestions based on them.
Ask the Agent to simplify complex logic, to flag any fundamental problems it sees, or to scan for security risks in areas where you feel uncertain. Being firm with your instructions, even telling the model "please do not invent things" when you are using very new or experimental APIs, helps keep the AI focused on the outputs you are trying to achieve. - Alejandra Stamato, Android Google Developer Expert and Android Engineer at HubSpot.
3. Use documentation with Agent mode to provide context for new libraries
To prevent the model from hallucinating code for niche or brand-new libraries, leverage Android Studio's Agent tools for accessing documentation: Search Android Docs and Fetch Android Docs. You can direct Gemini to search the Android Knowledge Base or specific documentation articles. The model can choose to use these tools if it thinks it's missing information, which is especially helpful when you use niche or less common APIs.
If you are certain you want the model to consult the documentation and want to make sure those tools are triggered, a good trick is to add something like 'search the official documentation' or 'check the docs' to your prompts. And for documentation on libraries that aren't Android-specific, install an MCP server that provides documentation access, such as Context7. - Jose Alcérreca, Android Developer Relations Engineer, Google.
4. Use AI to help build Agents.md files for using custom frameworks, libraries and design systems
To make sure the Agent uses your custom frameworks, libraries, and design systems, you have two options: 1) in settings, Android Studio allows you to specify rules to be followed when Gemini performs these actions for you; or 2) create AGENTS.md files in your application that describe how things should be done and act as guidance when the AI performs a task: which frameworks, design systems, or conventions to use (such as the exact architecture, and things to do or avoid), written as standard bullet points to give the AI clear instructions.
Manage AGENTS.md files as context.
You can also place an AGENTS.md file at the root of the project, and you can have them in different modules (or even subdirectories) of your project as well! The more context and guidance you make available, the more the AI has to draw on as you work. If you get stuck creating these AGENTS.md files, you can use AI to help build them, or to give you foundations based on your existing projects that you then edit, so you don't have to start from scratch. - Joe Birch, Android Google Developer Expert and Staff Engineer at Buffer.
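For example, a minimal AGENTS.md might look like this (a hypothetical set of rules; DesignSystemTheme is an invented name, so adapt everything to your own project):

```
# Agent guidance for this module

- Use Jetpack Compose for all new UI; do not add XML layouts.
- Follow our MVVM architecture: ViewModels expose StateFlow, never LiveData.
- Wrap screens in our in-house DesignSystemTheme instead of raw Material themes.
- Do not add new third-party dependencies without flagging them for review.
```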
5. Offload the tedious tasks to Agent and save yourself time
You can get the Gemini in Android Studio agent to help you complete tasks such as writing and reviewing faster. For example, it can help write commit messages, giving you a good summary that you can then review, saving you time. Additionally, get it to write tests; under your direction, the Agent can look at the other tests in your project and, just from those, write a good test for you to run that follows best practices. Another good example of a tedious task is writing a new parser for a certain JSON format: just give Gemini a few examples and it will get you started very quickly. - Diego Perez, Android Software Engineer, Google
6. Control what you are sharing with AI using simple opt-outs or commands, alongside paid models.
If you want to control what is shared with AI while on the no-cost plans, you can opt some or all of your code out of model training by adding an AI exclusions file ('.aiexclude') to your project. This file uses glob pattern matching similar to a .gitignore file, specifying sensitive directories or files that should be hidden from the AI. You can place .aiexclude files anywhere within the project and its VCS roots to control which files AI features are allowed to access.
An example of an `.aiexclude` file in Android Studio.
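For instance, a hypothetical `.aiexclude` file that hides credentials and a sensitive module might look like:

```
# Keep secrets and the payments module away from AI features
apikeys.properties
*.pem
secrets/
payments/
```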
Alternatively, in Android Studio settings, you can also opt out of context sharing either on a per project or per user basis (although this method limits the functionality of a number of features because the AI won't see your code).
Remember, paid plans never use your code for model training. This includes both users with an AI Studio API key and businesses subscribed to Gemini Code Assist. - Trevor Johns, Developer Relations Engineer.
Hear more from the Android team and Google Developer Experts about Gemini in Android Studio in our recent fireside chat and download Android Studio to get started.
02 Mar 2026 2:00pm GMT
26 Feb 2026
Android Developers Blog
The Second Beta of Android 17

Today we're releasing the second beta of Android 17, continuing our work to build a platform that prioritizes privacy, security, and refined performance. This update delivers a range of new capabilities, including the EyeDropper API and a privacy-preserving Contacts Picker. We're also adding advanced ranging, cross-device handoff APIs, and more.
This release continues the shift in our release cadence: this annual major SDK release in Q2 will be followed by a minor SDK update later in the year.
User Experience & System UI
Bubbles
Bubbles is a windowing mode feature that offers a new floating UI experience separate from the messaging bubbles API. Users can create an app bubble on their phone, foldable, or tablet by long-pressing an app icon on the launcher. On large screens, there is a bubble bar as part of the taskbar where users can organize, move between, and move bubbles to and from anchored points on the screen.
You should follow the guidelines for supporting multi-window mode to ensure your apps work correctly as bubbles.
Bubbles aren't yet fully enabled in Beta 2. Look for them in a future build of Android 17.
EyeDropper API
A new system-level EyeDropper API allows your app to request a color from any pixel on the display without requiring sensitive screen capture permissions.

val eyeDropperLauncher = registerForActivityResult(
    ActivityResultContracts.StartActivityForResult()
) { result ->
    if (result.resultCode == Activity.RESULT_OK) {
        val color = result.data?.getIntExtra(Intent.EXTRA_COLOR, Color.BLACK)
        // Use the picked color in your app
    }
}

fun launchColorPicker() {
    val intent = Intent(Intent.ACTION_OPEN_EYE_DROPPER)
    eyeDropperLauncher.launch(intent)
}
Contacts Picker
A new system-level contacts picker via ACTION_PICK_CONTACTS grants temporary, session-based read access to only the specific data fields requested by the user, reducing the need for the broad READ_CONTACTS permission. It also allows for selections from the device's personal or work profiles.

val contactPicker = rememberLauncherForActivityResult(StartActivityForResult()) {
    if (it.resultCode == RESULT_OK) {
        val uri = it.data?.data ?: return@rememberLauncherForActivityResult
        // Handle result logic
        processContactPickerResults(uri)
    }
}

val dataFields = arrayListOf(Email.CONTENT_ITEM_TYPE, Phone.CONTENT_ITEM_TYPE)
val intent = Intent(ACTION_PICK_CONTACTS).apply {
    putStringArrayListExtra(EXTRA_PICK_CONTACTS_REQUESTED_DATA_FIELDS, dataFields)
    putExtra(EXTRA_ALLOW_MULTIPLE, true)
    putExtra(EXTRA_PICK_CONTACTS_SELECTION_LIMIT, 5)
}
contactPicker.launch(intent)

Easier pointer capture compatibility with touchpads
Previously, touchpads reported events in a very different way from mice when an app had captured the pointer, reporting the locations of fingers on the pad rather than the relative movements that would be reported by a mouse. This made it quite difficult to support touchpads properly in first-person games. Now, by default the system will recognize pointer movement and scrolling gestures when the touchpad is captured, and report them just like mouse events. You can still request the old, detailed finger location data by explicitly requesting capture in the new "absolute" mode.
// To request the new default relative mode (mouse-like events).
// This is the same as requesting with View.POINTER_CAPTURE_MODE_RELATIVE
view.requestPointerCapture()

// To request the legacy absolute mode (raw touch coordinates)
view.requestPointerCapture(View.POINTER_CAPTURE_MODE_ABSOLUTE)
Connectivity & Cross-Device
Cross-device app handoff
A new Handoff API allows you to specify application state to be resumed on another device, such as an Android tablet. When opted in, the system synchronizes state via CompanionDeviceManager and displays a handoff suggestion in the launcher of the user's nearby devices. This feature is designed to offer seamless task continuity, enabling users to pick up exactly where they left off in their workflow across their Android ecosystem. Critically, Handoff supports both native app-to-app transitions and app-to-web fallback, providing maximum flexibility and ensuring a complete experience even if the native app is not installed on the receiving device.
Advanced ranging APIs
We are adding support for two new ranging technologies:
- UWB DL-TDoA, which enables apps to use UWB for indoor navigation. This API surface is compliant with the FiRa ("Fine Ranging") Consortium 4.0 DL-TDoA spec and enables privacy-preserving indoor navigation (avoiding tracking of the device by the anchor).
- Proximity Detection, which enables apps to use the new ranging specification being adopted by the WFA (Wi-Fi Alliance). This technology provides improved reliability and accuracy compared to the existing Wi-Fi Aware-based ranging specification.
Data plan enhancements
To optimize media quality, your app can now retrieve carrier-allocated maximum data rates for streaming applications using getStreamingAppMaxDownlinkKbps and getStreamingAppMaxUplinkKbps.

Core Functionality, Privacy & Performance
Local Network Access
Android 17 introduces the ACCESS_LOCAL_NETWORK runtime permission to protect users from unauthorized local network access. Because this falls under the existing NEARBY_DEVICES permission group, users who have already granted other NEARBY_DEVICES permissions will not be prompted again. By declaring and requesting this permission, your app can discover and connect to devices on the local area network (LAN), such as smart home devices or casting receivers. This prevents malicious apps from exploiting unrestricted local network access for covert user tracking and fingerprinting. Apps targeting Android 17 or higher will now have two paths to maintain communication with LAN devices: adopt system-mediated, privacy-preserving device pickers to skip the permission prompt, or explicitly request this new permission at runtime to maintain local network communication.
Time zone offset change broadcast
Android now provides a reliable broadcast intent, ACTION_TIMEZONE_OFFSET_CHANGED, triggered when the system's time zone offset changes, such as during Daylight Saving Time transitions. This complements the existing broadcast intents ACTION_TIME_CHANGED and ACTION_TIMEZONE_CHANGED, which are triggered when the Unix timestamp changes and when the time zone ID changes, respectively.
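The situation this new broadcast covers can be illustrated with plain java.time code (runs on any JVM; the broadcast itself is an Android platform API): during a DST transition the offset changes while the zone ID stays the same, so only the offset-change broadcast applies.

```kotlin
import java.time.ZoneId
import java.time.ZonedDateTime

fun main() {
    // US DST starts on 8 March 2026: the offset for America/Los_Angeles
    // changes from -08:00 to -07:00 while the zone ID stays the same.
    val zone = ZoneId.of("America/Los_Angeles")
    val beforeTransition = ZonedDateTime.of(2026, 3, 8, 1, 0, 0, 0, zone)
    val afterTransition = ZonedDateTime.of(2026, 3, 8, 3, 0, 0, 0, zone)
    println(beforeTransition.offset) // -08:00
    println(afterTransition.offset)  // -07:00
}
```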
NPU Management and Prioritization
Apps targeting Android 17 that need to directly access the NPU must declare FEATURE_NEURAL_PROCESSING_UNIT in their manifest to avoid being blocked from accessing the NPU. This includes apps that use the LiteRT NPU delegate, vendor-specific SDKs, as well as the deprecated NNAPI.
ICU 78
Core internationalization libraries have been updated to ICU 78, expanding support for new scripts, characters, and emoji blocks, and enabling direct formatting of time objects.
SMS OTP protection
Android is expanding its SMS OTP protections by automatically delaying access to SMS messages containing one-time passcodes (OTPs). Previously, the protection focused primarily on the SMS Retriever format: delivery of messages containing an SMS Retriever hash is delayed for most apps for three hours, while certain apps, such as the default SMS app and the app corresponding to the hash, are exempt from this delay. This update extends the protection to all SMS messages containing an OTP. For most apps, such messages will only be accessible after a delay of three hours to help prevent OTP hijacking: the SMS_RECEIVED_ACTION broadcast will be withheld, and SMS provider database queries will be filtered. The messages become available to these apps after the delay.
Delayed access to WebOTP format SMS messages
If the app has the permission to read SMS messages but is not the intended recipient of the OTP (as determined by domain verification), the WebOTP format SMS message will only be accessible after three hours have elapsed. This change is designed to improve user security by ensuring that only apps associated with the domain mentioned in the message can programmatically read the verification code. This change applies to all apps regardless of their target API level.
Delayed access to standard SMS messages with OTP
Certain apps, such as the default SMS app, the assistant app, and connected-device companion apps, will be exempt from this delay.
All apps that rely on reading SMS messages for OTP extraction should transition to using SMS Retriever or SMS User Consent APIs to ensure continued functionality.
The Android 17 schedule
We're going to be moving quickly from this Beta to our Platform Stability milestone, targeted for March. At that milestone, we'll deliver the final SDK/NDK APIs. From that point forward, your app can target SDK 37 and be published to Google Play, helping you complete your testing and collect user feedback in the months before the general availability of Android 17.

A year of releases
We plan for Android 17 to continue to get updates in a series of quarterly releases. The upcoming release in Q2 is the only one where we introduce planned app-breaking behavior changes. We plan to have a minor SDK release in Q4 with additional APIs and features.
Get started with Android 17
You can enroll any supported Pixel device to get this and future Android Beta updates over-the-air. If you don't have a Pixel device, you can use the 64-bit system images with the Android Emulator in Android Studio.
If you are currently in the Android Beta program, you will be offered an over-the-air update to Beta 2.
If you are on the Android 26Q1 Beta and would like to move to the final stable 26Q1 release and exit the Beta program, ignore the over-the-air update to 26Q2 Beta 2 and wait for the stable release of 26Q1.
We're looking for your feedback so please report issues and submit feature requests on the feedback page. The earlier we get your feedback, the more we can include in our work on the final release.
For the best development experience with Android 17, we recommend that you use the latest preview of Android Studio (Panda). Once you're set up, here are some of the things you should do:
- Compile against the new SDK, test in CI environments, and report any issues in our tracker on the feedback page.
- Test your current app for compatibility: learn whether your app is affected by changes in Android 17, then install your app onto a device or emulator running Android 17 and extensively test it. Enrolled devices will receive updates over-the-air for all later previews and Betas.
For complete information, visit the Android 17 developer site.
Join the conversation
As we move toward Platform Stability and the general availability of Android 17 later this year, your feedback remains our most valuable asset. Whether you're an early adopter on the Canary channel or an app developer testing on Beta 2, consider joining our communities and filing feedback. We're listening.
26 Feb 2026 9:08pm GMT
25 Feb 2026
Android Developers Blog
The Intelligent OS: Making AI agents more helpful for Android apps

Posted by Matthew McCullough, VP of Product Management, Android Development
User expectations for AI on their devices are fundamentally shifting how they interact with their apps. Instead of opening apps to do tasks step-by-step, they're asking AI to do the heavy lifting for them. In this new interaction model, success is shifting from getting users to open your app, to successfully fulfilling their tasks and helping them get more done faster.
To help you evolve your apps for this agentic future, we're introducing early stage developer capabilities that bridge the gap between your apps and agentic apps and personalized assistants, such as Google Gemini. While we are in the early, beta stages of this journey, we're designing these features with privacy and security at their core as our first step in exploring this paradigm shift as an app ecosystem.
Empowering apps with AppFunctions
Android AppFunctions allows apps to expose data and functionality directly to AI agents and assistants. With the AppFunctions Jetpack library and platform APIs, developers can create self-describing functions that agentic apps can discover and execute via natural language. Mirroring how backend capabilities are declared via MCP cloud servers, AppFunctions provides an on-device solution for Android apps. Much like WebMCP, it executes these functions locally on the device rather than on a server.
The Samsung Gallery integration with Gemini on the Galaxy S26 series showcases AppFunctions in action. Instead of manually scrolling through photo albums, you can now simply ask Gemini to "Show me pictures of my cat from Samsung Gallery." Gemini takes the user query, intelligently identifies and triggers the right function, and presents the returned photos from Samsung Gallery directly in the Gemini app, so users never need to leave. This experience is multimodal and can be done via voice or text. Users can even use the returned photos in follow-up conversations, like sending them to friends in a text message.
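The discover-and-execute pattern behind this flow is easy to picture in miniature. The sketch below is not the AppFunctions Jetpack API; it is an illustrative, self-contained model of the idea: an app registers functions with natural-language descriptions, and an agent finds and invokes one by matching the user's request against those descriptions (a real agent would use a model rather than the naive keyword match here, and all names are made up):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.Optional;
import java.util.function.Function;

// Illustrative model of self-describing, agent-discoverable functions.
public class FunctionRegistry {
    record AppFunction(String name, String description,
                       Function<Map<String, String>, String> impl) {}

    private final List<AppFunction> functions = new ArrayList<>();

    void register(String name, String description,
                  Function<Map<String, String>, String> impl) {
        functions.add(new AppFunction(name, description, impl));
    }

    // Naive stand-in for intent matching: pick the first function whose
    // description shares a significant word with the user's query.
    Optional<AppFunction> discover(String query) {
        String q = query.toLowerCase();
        return functions.stream()
                .filter(f -> Arrays.stream(f.description().toLowerCase().split("\\W+"))
                        .filter(w -> w.length() > 3)
                        .anyMatch(q::contains))
                .findFirst();
    }

    public static void main(String[] args) {
        FunctionRegistry registry = new FunctionRegistry();
        registry.register("searchPhotos",
                "Search the photo gallery for pictures matching a subject",
                params -> "3 photos of " + params.get("subject"));

        String result = registry.discover("show me pictures of my cat")
                .map(f -> f.impl().apply(Map.of("subject", "cat")))
                .orElse("no matching function");
        System.out.println(result); // prints "3 photos of cat"
    }
}
```

In the real system, the function's declared schema (parameters, return shape) is what lets the agent pass structured arguments and render the results in its own UI, as in the Gemini and Samsung Gallery example above.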
Enabling agentic apps with intelligent UI automation
While AppFunctions provides a structured framework and more control for apps to communicate with AI agents and assistants, we know that not every interaction has a dedicated integration yet. We're also developing a UI automation framework for AI agents and assistants to intelligently execute generic tasks on users' installed apps, with user transparency and control built in. Here the platform does the heavy lifting, giving developers agentic reach with zero code and no major engineering lift.
To get feedback as we refine this framework, we're starting with an early preview on the Galaxy S26 series and select Pixel 10 devices, where users will be able to delegate multi-step tasks to Gemini with just a long press of the power button. Launching as a beta feature in the Gemini app, this will support a curated selection of apps in the food delivery, grocery, and rideshare categories in the US and Korea to start. Whether users need to place a complex pizza order for their family members with particular tastes, coordinate a multi-stop rideshare with co-workers, or reorder their last grocery purchase, Gemini can help complete tasks using the context already available from your apps, without any developer work needed.
Users are in control while a task is being actioned in the background through UI automation. For any automation action, users have the option to monitor a task's progress via notifications or "live view" and can switch to manual control at any point to take over the experience. Gemini is also designed to alert users before completing sensitive tasks, such as making a purchase.
Looking ahead
In Android 17, we're looking to broaden these capabilities to reach even more users, developers, and device manufacturers.
We are currently building experiences with a small set of app developers, focusing on high-quality user experiences as the ecosystem evolves. We plan to share more details later this year on how you can use AppFunctions and UI automation to enable agentic integrations for your app. Stay tuned for updates.
25 Feb 2026 11:47pm GMT
17 Feb 2026
Android Developers Blog
Get ready for Google I/O May 19-20
Posted by The Google I/O Team
Google I/O returns May 19-20
Google I/O is back! Join us online as we share our latest AI breakthroughs and updates in products across the company, from Gemini to Android, Chrome, Cloud, and more.
Tune in to learn about agentic coding and the latest Gemini model updates. The event will feature keynote addresses from Google leaders, forward-looking panel discussions, and product demos designed to showcase the next frontier of technology.
Register now and tune in live
Visit io.google and register to receive updates about Google I/O. Kicking off May 19 at 10am PT, this year we'll be livestreaming keynotes, demos, and more sessions across two days. We'll also be bringing back the popular Dialogues sessions featuring big thinkers and bold leaders discussing how AI is shaping our future.
17 Feb 2026 8:00pm GMT