15 Jun 2025
TalkAndroid
Board Kings Free Rolls – Updated Every Day!
Run out of rolls for Board Kings? Find links for free rolls right here, updated daily!
15 Jun 2025 3:48pm GMT
Coin Tales Free Spins – Updated Every Day!
Tired of running out of Coin Tales Free Spins? We update our links daily, so you won't have that problem again!
15 Jun 2025 3:47pm GMT
Avatar World Codes – June 2025 – Updated Daily
Find all the latest Avatar World Codes right here in this article! Read on for more!
15 Jun 2025 3:46pm GMT
Monopoly Go – Free Dice Links Today (Updated Daily)
If you keep on running out of dice, we have just the solution! Find all the latest Monopoly Go free dice links right here!
15 Jun 2025 3:45pm GMT
Family Island Free Energy Links (Updated Daily)
Tired of running out of energy on Family Island? We have all the latest Family Island Free Energy links right here, and we update these daily!
15 Jun 2025 3:44pm GMT
Crazy Fox Free Spins & Coins (Updated Daily)
If you need free coins and spins in Crazy Fox, look no further! We update our links daily to bring you the newest working links!
15 Jun 2025 3:42pm GMT
Match Masters Free Gifts, Coins, And Boosters (Updated Daily)
Tired of running out of boosters for Match Masters? Find new Match Masters free gifts, coins, and booster links right here! Updated Daily!
15 Jun 2025 3:40pm GMT
Solitaire Grand Harvest – Free Coins (Updated Daily)
Get Solitaire Grand Harvest free coins now, new links added daily. Only tested and working links, complete with a guide on how to redeem the links.
15 Jun 2025 3:38pm GMT
Dice Dreams Free Rolls – Updated Daily
Get the latest Dice Dreams free rolls links, updated daily! Complete with a guide on how to redeem the links.
15 Jun 2025 3:37pm GMT
Monopoly Go Events Schedule Today – Updated Daily
Currently active events are Main Event - Quantum Coaster, Tournament - Supernova Smash, and Special Event - Rebel Racers.
15 Jun 2025 3:35pm GMT
Is ChatGPT about to kill the smartphone? OpenAI’s secret project revealed
The smartphone revolution could soon face its most formidable challenger yet: artificial intelligence. OpenAI's ChatGPT, already transforming how…
15 Jun 2025 3:30pm GMT
Anker Recalls One Million Power Banks Over Fire Risk
If your PowerCore 10000 power bank reads "Model: A1263", return it to Anker before it explodes.
15 Jun 2025 2:59pm GMT
5 Alternative Netflix Streamers To Replace Your Nerfed Fire Stick
Netflix leaving old Fire Sticks is the first sign to future-proof your streamer. Here are the top substitutes with Netflix support.
15 Jun 2025 1:19pm GMT
12 new releases on Netflix starting today… including a cult-favorite series rated 4/5
The highly anticipated third season of Ginny & Georgia makes its triumphant return to Netflix on June 5,…
15 Jun 2025 6:30am GMT
14 Jun 2025
TalkAndroid
Google Will Nerf Your Pixel 6a Battery to Stop It From Blowing Up
Seven years of software updates don't mean much on a Pixel because your phone battery becomes problematic after two.
14 Jun 2025 11:14pm GMT
Your phone might be vulnerable right now – here’s what Android just revealed
Google recently confirmed that cybercriminals and even police forces are exploiting critical Android device vulnerabilities. Android users face…
14 Jun 2025 3:30pm GMT
12 Jun 2025
Android Developers Blog
Upcoming changes to Wear OS watch faces
Posted by François Deschênes Product Manager - Wear OS
Today, we are announcing important changes to Wear OS watch face development that will affect how developers publish and update watch faces on Google Play. As part of our ongoing effort to enhance Wear OS app quality, we are moving towards supporting only the Watch Face Format and removing support for AndroidX / Wearable Support Library (WSL) watch faces.
We introduced Watch Face Format at Google I/O in 2023 to make it easier to create watch faces that are customizable and power-efficient. The Watch Face Format is a declarative XML format, so there is no executable code involved in creating a watch face, and there is no code embedded in the watch face APK.
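For illustration, here's a minimal sketch of what a Watch Face Format file can look like. The element names follow the published Watch Face Format schema, but treat the exact attributes as illustrative rather than a complete, validated watch face:

<WatchFace width="450" height="450">
  <Scene>
    <!-- A digital clock described declaratively; no executable code involved. -->
    <DigitalClock x="0" y="175" width="450" height="100">
      <TimeText format="hh:mm" align="CENTER" x="0" y="0" width="450" height="100">
        <Font family="SYNC_TO_DEVICE" size="96" color="#ffffffff" />
      </TimeText>
    </DigitalClock>
  </Scene>
</WatchFace>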
What's changing?
Developers will need to migrate published watch faces to the Watch Face Format by January 14, 2026. Developers using Watch Face Studio to build watch faces will need to resubmit their watch faces to the Play Store using Watch Face Studio version 1.8.7 or above - see below for more details.
When are these changes coming?
Starting January 27, 2025 (already in effect):
- No new AndroidX or Wearable Support Library (WSL) watch faces (legacy watch faces) can be published on the Play Store. Developers can still publish updates to existing watch faces.
Starting January 14, 2026:
- Availability: Users will not be able to install legacy watch faces on any Wear OS devices from the Play Store. Legacy watch faces already installed on a Wear OS device will continue to work.
- Updates: Developers will not be able to publish updates for legacy watch faces to the Play Store.
- Monetization: The following won't be possible for legacy watch faces: one-off watch face purchases, in-app purchases, and subscriptions. Existing purchases and subscriptions will continue to work, but they will not renew, including auto-renewals.
What should developers do next?
To prepare for these changes and to continue publishing watch faces to the Play Store, developers using AndroidX or WSL to build watch faces must migrate their watch faces to the Watch Face Format and resubmit to the Play Store by January 14, 2026.
Developers using Watch Face Studio to build watch faces will need to resubmit their watch faces to the Play Store using Watch Face Studio version 1.8.7 or above:
- Be sure to republish for all Play tracks, including all testing tracks as well as production.
- Remove any bundles from these tracks that were created using Watch Face Studio versions prior to 1.8.7.
Benefits of the Watch Face Format
Watch Face Format was developed to make it easier to build customizable, power-efficient watch faces. The format provides numerous advantages to both developers and end users:
- Simplified development: Streamlined workflows and visual design tools make building watch faces easier.
- Enhanced performance: Optimized for battery efficiency and smooth interactions.
- Increased security: Robust security features protect user data and privacy.
- Forward-compatible: Access to the latest features and capabilities of Wear OS.
Resources to help with migration
To get started migrating your watch faces to the Watch Face Format, check out the Watch Face Format developer guidance.
We encourage developers to begin the migration process as soon as possible to ensure a seamless transition and continued availability of your watch faces on Google Play.
We understand that this change requires effort. If you have further questions, please refer to the Wear OS community announcement. Please report any issues using the issue tracker.
12 Jun 2025 4:00pm GMT
11 Jun 2025
Android Developers Blog
Smoother app reviews with Play Policy Insights beta in Android Studio
Posted by Naheed Vora - Senior Product Manager, Android App Safety
Making it easier for you to build safer apps from the start
We understand you want clear Play policy guidance early in your development, so you can focus on building amazing experiences and prevent unexpected delays from disrupting launch plans. That's why we're making it easier to have smoother app publishing experiences, from the moment you start coding.
With Play Policy Insights beta in Android Studio, you'll get richer, in-context guidance on policies that may impact your app through lint warnings. You'll see policy summaries, dos and don'ts to avoid common pitfalls, and direct links to details.
We hope you caught an early demo at I/O. And now, you can check out Play Policy Insights beta in the Android Studio Narwhal Feature Drop Canary release.

How to use Play Policy Insights beta in Android Studio
Lint warnings will pop up as you code, such as when you add a permission. For example, if you call an Android API that uses photos and requires the READ_MEDIA_IMAGES permission, the Photos & Video Insights lint warning will appear under the corresponding API call in Android Studio.
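As a hypothetical illustration (the exact call sites that surface the insight may vary), the following Kotlin reads the shared image collection, which requires READ_MEDIA_IMAGES on Android 13+ and is therefore the kind of line the Photos & Video insight would attach to:

import android.content.Context
import android.provider.MediaStore

fun loadImageIds(context: Context): List<Long> {
    val ids = mutableListOf<Long>()
    // Querying the shared image collection needs READ_MEDIA_IMAGES on API 33+,
    // so a Photos & Video policy insight would surface at this call site.
    context.contentResolver.query(
        MediaStore.Images.Media.EXTERNAL_CONTENT_URI,
        arrayOf(MediaStore.Images.Media._ID),
        null, null, null
    )?.use { cursor ->
        val idColumn = cursor.getColumnIndexOrThrow(MediaStore.Images.Media._ID)
        while (cursor.moveToNext()) ids += cursor.getLong(idColumn)
    }
    return ids
}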
You can also get these insights by going to Code > Inspect for Play Policy Insights and selecting the project scope to analyze. The scope can be set to the whole project, the current module or file, or a custom scope.

In addition to seeing these insights in Android Studio, you can also generate them as part of your Continuous Integration process by adding the following dependency to your project.
Kotlin
lintChecks("com.google.play.policy.insights:insights-lint:<version>")
Groovy
lintChecks 'com.google.play.policy.insights:insights-lint:<version>'
Share your feedback on Play Policy Insights beta
We're actively working on this feature and want your feedback to refine it before releasing it in the Stable channel of Android Studio later this year. Try it out, report issues, and stop by the Google Play Developer Help Community to share your questions and thoughts directly with our team.
Join us on June 16 when we answer your questions. We'd love to hear about:
- How will this change your current Android app development and Google Play Store submission workflow?
- Which was more helpful in addressing issues: lint warnings in the IDE or lint warnings from CI build?
- What was most helpful in the policy guidance, and what could be improved?
Developers have told us they like:
- Catching potential Google Play policy issues early, right in their code, so they can build more efficiently.
- Seeing potential Google Play policy issues and guidance all in one place, reducing the need to dig through policy announcements and issue emails.
- Easily discussing potential issues with their team, now that everyone has shared information.
- Continuously checking for potential policy issues as they add new features, gaining confidence in a smoother launch.
For more, see our Google Play Help Center article or Android Studio preview release notes.
We hope features like this will help give you a better policy experience and more streamlined development.
11 Jun 2025 4:00pm GMT
10 Jun 2025
Android Developers Blog
Developer preview: Enhanced Android desktop experiences with connected displays
Posted by Francesco Romano - Developer Relations Engineer on Android, and Fahd Imtiaz - Product Manager, Android Developer
Today, Android is launching a few updates across the platform! This includes the start of Android 16's rollout, with details for both developers and users, a Developer Preview for enhanced Android desktop experiences with connected displays, and updates for Android users across Google apps and more, plus the June Pixel Drop. We're also recapping all the Google I/O updates for Android developers focused on building excellent, adaptive Android apps.
Android has continued to evolve to enable users to be more productive on large screens.
Today, we're excited to share that connected displays support on compatible Android devices is now in developer preview with the Android 16 QPR1 Beta 2 release. As shown at Google I/O 2025, connected displays enable users to attach an external display to their Android device and transform a small screen device into a powerful tool with a large screen. This evolution gives users the ability to move apps beyond a single screen to unlock Android's full productivity potential on external displays.
The connected display update builds on our desktop windowing experience, a capability we previewed last year. Desktop windowing is set to launch later this year for users on compatible tablets running Android 16. Desktop windowing enables users to run multiple apps simultaneously and resize windows for optimal multitasking. This new windowing capability works seamlessly with split screen and other multitasking features users already love on Android and doesn't require switching to a special mode.
Google and Samsung have collaborated to bring a more seamless and powerful desktop windowing experience to large screen devices and phones with connected displays in Android 16 across the Android ecosystem. These advancements will enhance Samsung DeX, and also extend to other Android devices.
For developers, connected displays and desktop windowing present new opportunities for building more engaging and more productive app experiences that seamlessly adapt across form factors. You can try out these features today on your connected display with the Android 16 QPR1 Beta 2 on select Pixel devices.
What's new in connected displays support?
When a supported Android phone or foldable is connected to an external display through a DisplayPort connection, a new desktop session starts on the connected display. The phone and the external display operate independently, and apps are specific to the display on which they're running.
The experience on the connected display is similar to the experience on a desktop, including a taskbar that shows running apps and lets users pin apps for quick access. Users can run multiple apps side by side in freely resizable windows on the connected display.

When a desktop windowing enabled device (like a tablet) is connected to an external display, the desktop session is extended across both displays, unlocking an even more expansive workspace. The two displays then function as one continuous system, allowing app windows, content, and the cursor to move freely between the displays.

A cornerstone of this effort is the evolution of desktop windowing, which is stable in Android 16 and is packed with improvements and new capabilities.
Desktop windowing stable release
We've made substantial improvements in the stability and performance of desktop windowing in Android 16. This means users will encounter a smoother, more reliable experience when managing app windows on connected displays. Beyond general stability improvements, we're introducing several new features:
- Flexible window tiling: Multitasking gets a boost with more intuitive window tiling options. Users can more easily arrange multiple app windows side by side or in various configurations, making it simpler to work across different applications simultaneously on a large screen.
- Multiple desktops: Users can set up multiple desktop sessions to match their distinct productivity requirements and switch between the desktops using keyboard shortcuts, trackpad gestures, and Overview.
- Enhanced app compatibility treatments: New compatibility treatments ensure that even legacy apps behave more predictably and look better on external displays by default. This reduces the burden on developers while providing a better out-of-the-box experience for users.
- Multi-instance management: Users can manage multiple instances of supporting applications (for example, Chrome or Keep) through the app header button or taskbar context menu. This allows for quick switching between different instances of the same app.
- Desktop persistence: Android can now better maintain window sizes, positions, and states across different desktops. This means users can set up their preferred workspace and have it restored across sessions, offering a more consistent and efficient workflow.
Best practices for optimal app experiences on connected displays
With the introduction of connected display support in Android, it's important to ensure your apps take full advantage of the new display capabilities. To help you build apps that shine in this enhanced environment, here are some key development practices to follow:
Build apps optimized for desktop
- Design for any window size: With phones now connecting to external displays, your mobile app can run in a window of almost any size and aspect ratio. This means the app window can be as big as the screen of the connected display but also flex to fit a smaller window. In desktop windowing, the minimum window size is 386 x 352 dp, which is smaller than most phones. This fundamentally changes how you need to think about UI. With orientation and resizability changes in Android 16, it becomes even more critical for you to update your apps to support resizability and portrait and landscape orientations for an optimal experience with desktop windowing and connected displays. Make sure your app supports any window size by following the best practices on adaptive development.
- Implement features for top productivity: You now have all the tools necessary to build mobile apps that match desktop, so start adding features to boost user productivity! Allow users to open multiple instances of the same app, which is invaluable for tasks like comparing documents, managing different conversations, or viewing multiple files simultaneously. Support data sharing with drag and drop, and maintain user flow across configuration changes by implementing a robust state management system.
Handle dynamic display changes
- Don't assume a constant Display object: The Display object associated with your app's context can change when an app window is moved to an external display or when the display configuration changes. Your app should gracefully handle configuration change events and query display metrics dynamically rather than caching them; see the sketch after this list.
- Account for density configuration changes: External displays can have vastly different pixel densities than the primary device screen. Ensure your layouts and resources adapt correctly to these changes to maintain UI clarity and usability. Use density-independent pixels (dp) for layouts, provide density-specific resources, and ensure your UI scales appropriately.
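Here's a minimal sketch of the first point, assuming an Activity that declares it handles these configuration changes in its manifest; the onWindowResized helper is hypothetical:

import android.app.Activity
import android.content.res.Configuration

class CanvasActivity : Activity() {
    override fun onConfigurationChanged(newConfig: Configuration) {
        super.onConfigurationChanged(newConfig)
        // Re-query instead of caching: both window bounds and density can
        // change when the window moves to a connected display.
        val bounds = windowManager.currentWindowMetrics.bounds
        val density = resources.displayMetrics.density
        onWindowResized(bounds.width() / density, bounds.height() / density)
    }

    // Hypothetical hook where the app lays out its UI for the new size in dp.
    private fun onWindowResized(widthDp: Float, heightDp: Float) { /* ... */ }
}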
Go beyond just the screen
- Correctly support external peripherals: When users connect to an external monitor, they often create a more desktop-like environment. This frequently involves using external keyboards, mice, trackpads, webcams, microphones, and speakers. If your app uses camera or microphone input, the app should be able to detect and utilize peripherals connected through the external display or a docking station.
- Handle keyboard actions: Desktop users rely heavily on keyboard shortcuts for efficiency. Implement standard shortcuts (for example, Ctrl+C, Ctrl+V, Ctrl+Z) and consider app-specific shortcuts that make sense in a windowed environment; a sketch follows this list. Make sure your app supports keyboard navigation.
- Support mouse interactions: Beyond simple clicks, ensure your app responds correctly to mouse hover events (for example, for tooltips or visual feedback), right-clicks (for contextual menus), and precise scrolling. Consider implementing custom pointers to indicate different actions.
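As a small sketch of the keyboard point, an Activity can route Ctrl+Z to an undo action; the undo() helper is hypothetical:

import android.app.Activity
import android.view.KeyEvent

class EditorActivity : Activity() {
    override fun onKeyShortcut(keyCode: Int, event: KeyEvent): Boolean {
        // Handle Ctrl+Z; defer anything else to the default handling.
        return if (keyCode == KeyEvent.KEYCODE_Z && event.isCtrlPressed) {
            undo()
            true
        } else {
            super.onKeyShortcut(keyCode, event)
        }
    }

    private fun undo() { /* hypothetical: revert the last edit */ }
}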
Getting started
Explore the connected displays and enhanced desktop windowing features in the latest Android Beta. Get Android 16 QPR1 Beta 2 on a supported Pixel device (Pixel 8 and Pixel 9 series) to start testing your app today. Then enable desktop experience features in the developer settings.
Support for connected displays in the Android Emulator is coming soon, so stay tuned for updates!
Dive into the updated documentation on multi-display support and window management to learn more about implementing these best practices.
Feedback
Your feedback is crucial as we continue to refine these experiences. Please share your thoughts and report any issues through our official feedback channels.
We're committed to making Android a versatile platform that adapts to the many ways users want to interact with their apps and devices. The improvements to connected display support are another step in that direction, and we can't wait to see the amazing experiences you'll build!
10 Jun 2025 6:02pm GMT
Top 3 updates for building excellent, adaptive apps at Google I/O ‘25
Posted by Mozart Louis - Developer Relations Engineer
Google I/O 2025 brought exciting advancements to Android, equipping you with essential knowledge and powerful tools you need to build outstanding, user-friendly applications that stand out.
If you missed any of the key #GoogleIO25 updates and just saw the release of Android 16 or you're ready to dive into building excellent adaptive apps, our playlist is for you. Learn how to craft engaging experiences with Live Updates in Android 16, capture video effortlessly with CameraX, process it efficiently using Media3's editing tools, and engage users across diverse platforms like XR, Android for Cars, Android TV, and Desktop.
Check out the Google I/O playlist for all the session details.
Here are three key announcements directly influencing how you can craft deeply engaging experiences and truly connect with your users:
#1: Build adaptively to unlock 500 million devices
In today's diverse device ecosystem, users expect their favorite applications to function seamlessly across various form factors, including phones, tablets, Chromebooks, automobiles, and emerging XR glasses and headsets. Our recommended approach for developing applications that excel on each of these surfaces is to create a single, adaptive application. This strategy avoids the need to rebuild the application for every screen size, shape, or input method, ensuring a consistent and high-quality user experience across all devices.
The talk emphasizes that you don't need to rebuild apps for each form factor. Instead, small, iterative changes can unlock an app's potential.
Here are some resources we encourage you to use in your apps:
New feature support in Jetpack Compose Adaptive Libraries
- We're continuing to make it as easy as possible to build adaptively with the Jetpack Compose Adaptive Libraries, with new features in 1.1 like pane expansion and predictive back. By utilizing canonical layout patterns such as List Detail or Supporting Pane layouts and integrating your app code, your application will automatically adjust and reflow when resized.
Navigation 3
- The alpha release of the Navigation 3 library now supports displaying multiple panes. This eliminates the need to alter your navigation destination setup for separate list and detail views. Instead, you can adjust the setup to concurrently render multiple destinations when sufficient screen space is available.
Updates to Window Manager Library
- androidx.window 1.5 introduces two new window size classes for expanded widths, facilitating better layout adaptation for large tablets and desktops. A width of 1600dp or more is now categorized as "extra large," while widths between 1200dp and 1600dp are classified as "large." These subdivisions offer more granularity for developers to optimize their applications for a wider range of window sizes; the sketch below shows the resulting breakpoints.
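Purely to illustrate the breakpoints described above (the real types live in androidx.window; this hand-rolled enum is an assumption, with 600dp and 840dp as the long-standing compact/medium/expanded boundaries):

// Illustrative width buckets only; use WindowSizeClass from androidx.window in real code.
enum class WidthBucket { COMPACT, MEDIUM, EXPANDED, LARGE, EXTRA_LARGE }

fun widthBucketOf(widthDp: Int): WidthBucket = when {
    widthDp >= 1600 -> WidthBucket.EXTRA_LARGE // new in androidx.window 1.5
    widthDp >= 1200 -> WidthBucket.LARGE       // new in androidx.window 1.5
    widthDp >= 840 -> WidthBucket.EXPANDED
    widthDp >= 600 -> WidthBucket.MEDIUM
    else -> WidthBucket.COMPACT
}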
Support all orientations and be resizable
- In Android 16, important changes are coming that affect orientation, aspect ratio, and resizability. Apps targeting SDK 36 will need to support all orientations and be resizable.
Extend to Android XR
- We are making it easier for you to build for XR with the Android XR SDK in developer preview 2, which features new Material XR components, a fully integrated emulator within Android Studio, and spatial video support for your Play Store listings.
Upgrade your Wear OS apps to Material 3 Design
- Wear OS 6 features Material 3 Expressive, a new UI design with personalized visuals and motion for user creativity, coming to Wear, Android, and Google apps later this year. You can upgrade your app and tiles to Material 3 Expressive by utilizing the new Jetpack libraries: Wear Compose Material 3, which provides components for apps, and Wear ProtoLayout Material 3, which provides components and layouts for tiles.
You should build a single, adaptive mobile app that brings the best experiences to all Android surfaces. By building adaptive apps, you meet users where they are today and in the future, enhancing user engagement and app discoverability. This approach represents a strategic business decision that optimizes an app's long-term success.
#2: Enhance your app's performance optimization
Get ready to take your app's performance to the next level! Google I/O 2025 brought an inside look at cutting-edge tools and techniques to boost user satisfaction, enhance technical performance metrics, and drive those all-important key performance indicators. Imagine an end-to-end workflow that streamlines performance optimization.
Redesigned UiAutomator API
- To make benchmarking reliable and reproducible, there's the brand new UiAutomator API. Write robust test code and run it on your local devices or in Firebase Test Lab, ensuring consistent results every time.
Macrobenchmarks
- Once your tests are in place, it's time to measure and understand. Macrobenchmarks give you the hard data, while App Startup Insights provide actionable recommendations for improvement. Plus, you can get a quick snapshot of your app's health with the App Performance Score via DAC. These tools combined give you a comprehensive view of your app's performance and where to focus your efforts.
R8: more than code shrinking and obfuscation
- You might know R8 as a code shrinking tool, but it's capable of so much more! The talk dives into R8's capabilities using the "Androidify" sample app. You'll see how to apply R8, troubleshoot any issues (like crashes!), and configure it for optimal performance. It also shows how library developers can include consumer keep rules so that their important code is not touched when used in an application.
#3: Build Richer Image and Video Experiences
In today's digital landscape, users increasingly expect seamless content creation capabilities within their apps. To meet this demand, developers require robust tools for building excellent camera and media experiences.
Media3Effects in CameraX Preview
- At Google I/O, this session dives into practical strategies for capturing high-quality video using CameraX while simultaneously leveraging Media3Effects on the preview.
Google Low-Light Boost
- Google Low Light Boost in Google Play services enables real-time dynamic camera brightness adjustment in low light, even without device support for Low Light Boost AE Mode.
New Camera & Media Samples!
- For Google I/O 2025, the Camera & Media team created new samples and demos for building excellent media and camera experiences on Android. They emphasize future-proofing apps using Jetpack libraries like Media3 Transformer for advanced video editing and Compose for adaptive UIs, including XR. Get more information about incrementally adding premium features with CameraX, utilizing Media3 for AI-powered functionalities such as video summarization and HDR thumbnails, and employing specialized APIs like Oboe for efficient audio playback. We have also updated the CameraX samples to fully use Compose instead of the View-based system.
Learn more about how CameraX & Media3 can accelerate your development of camera and media related features.
Learn how to build adaptive apps
Want to learn more about building excellent, adaptive apps? Watch this playlist to learn more about all the session details.
10 Jun 2025 6:01pm GMT
A product manager's guide to adapting Android apps across devices
Posted by Fahd Imtiaz, Product Manager, Android Developer Experience
With new form factors emerging continually, the Android ecosystem is more dynamic than ever.
From phones and foldables to tablets, Chromebooks, TVs, cars, Wear and XR, Android users expect their apps to run seamlessly across an increasingly diverse range of form factors. Yet, many Android apps fall short of these expectations as they are built with UI constraints such as being locked to a single orientation or restricted in resizability.
With this in mind, Android 16 introduced API changes for apps targeting SDK level 36 to ignore orientation and resizability restrictions starting with large screen devices, shifting toward a unified model where adaptive apps are the norm. This is the moment to move ahead. Adaptive apps aren't just the future of Android, they're the expectation for your app to stand out across Android form factors.
Why you should prioritize adaptive now

Prioritizing optimizations to make your app adaptive isn't just about keeping up with the orientation and resizability API changes in Android 16 for apps targeting SDK 36. Adaptive apps unlock tangible benefits across user experience, development efficiency, and market reach.
- Mobile apps can now reach users on over 500 million active large screen devices: Mobile apps run on foldables, tablets, Chromebooks, and even compatible cars, with minimal changes. Android 16 will introduce significant advancements in desktop windowing for a true desktop-like experience on large screens, including connected displays. And Android XR opens a new dimension, allowing your existing apps to be available in immersive environments. The user expectation is clear: a consistent, high-quality experience that intelligently adapts to any screen - be it a foldable, a tablet with a keyboard, or a movable, resizable window on a Chromebook.
- "The new baseline" with orientation and resizability API changes in Android 16: We believe mobile apps are undergoing a shift to have UI adapt responsively to any screen size, just like websites. Android 16 will ignore app-defined restrictions like fixed orientation (portrait-only) and non-resizable windows, beginning with large screens (smallest width of the device is >= 600dp) including tablets and inner displays on foldables. For most apps, it's key to helping them stretch to any screen size. In some cases if your app isn't adaptive, it could deliver a broken user experience on these screens. This moves adaptive design from a nice-to-have to a foundational requirement.

- Increase user reach and app discoverability in Play: Adaptive apps are better positioned to be ranked higher in Play, and featured in editorial articles across form factors, reaching a wider audience across Play search and homepages. Additionally, Google Play Store surfaces ratings and reviews across all form factors. If your app is not optimized, a potential user's first impression might be tainted by a 1-star review complaining about a stretched UI on a device they don't even own yet. Users are also more likely to engage with apps that provide a great experience across their devices.
- Increased engagement on large screens: Users on large screen devices often have different interaction patterns. On large screens, users may engage for longer sessions, perform more complex tasks, and consume more content.
- Concepts saw a 70% increase in user engagement on large screens after optimizing.
- Usage for 6 major media streaming apps in the US was up to 3x higher for users on both tablet and phone, compared to phone-only users.
- More accessible app experiences: According to the World Bank, 15% of the world's population has some type of disability. People with disabilities depend on apps and services that support accessibility to communicate, learn, and work. Matching the user's preferred orientation improves the accessibility of applications, helping to create an inclusive experience for all.
Today, most apps are built for smartphones only

"...looking at the number of users, the ROI does not justify the investment".
That's a frequent pushback from product managers and decision-makers, and if you're just looking at top-line analytics comparing the number of tablet sessions to smartphone sessions, it might seem like a closed case.
While top-line analytics might show lower session numbers on tablets compared to smartphones, concluding that large screens aren't worth the effort based solely on current volume can be a trap, causing you to miss out on valuable engagement and future opportunities.
Let's take a deeper look into why:
1. The user experience 'chicken and egg' loop: Is it possible that the low usage is a symptom rather than the root cause? Users are quick to abandon apps that feel clunky or broken. If your app on large screens is a stretched-out phone interface, the app likely provides a negative user experience. The lack of users might reflect the lack of a good experience, not always necessarily lack of potential users.
2. Beyond user volume, look at user engagement: Don't just count users, analyze their worth. Users interact with apps on large screens differently. The large screen often leads to longer sessions and more immersive experiences. As mentioned above, usage data shows that engagement time increases significantly for users who interact with apps on both their phone and tablet, compared to phone-only users.
3. Market evolution: The Android device ecosystem is continuing to evolve. With the rise of foldables, upcoming connected displays support in Android 16, and form factors like XR and Android Auto, adaptive design is now more critical than ever. Building for a specific screen size creates technical debt, and may slow your development velocity and compromise the product quality in the long run.
Okay, I am convinced. Where do I start?

For organizations ready to move forward, Android offers many resources and developer tools to optimize apps to be adaptive. See below for how to get started:
1. Check how your app looks on large screens today: Begin by looking at your app's current state on tablets, foldables (in different postures), Chromebooks, and environments like desktop windowing. Confirm whether your app is available on these devices, or whether you are unintentionally leaving out these users by requiring unnecessary features within your app.
2. Address common UI issues: Assess what feels awkward in your app UI today. We have a lot of guidance available on how you can easily translate your mobile app to other screens.
a. Check the Large screens design gallery for inspiration and to understand how your app UI can evolve across devices, using proven solutions to common UI challenges.
b. Start with quick wins. For example, prevent buttons from stretching to the full screen width, or switch to a vertical navigation bar on large screens to improve ergonomics.
c. Identify patterns where canonical layouts (e.g. list-detail) could solve any UI awkwardness you identified. Could a list-detail view improve your app's navigation? Would a supporting pane on the side make better use of the extra space than a bottom sheet?
3. Optimize your app incrementally, screen by screen: It may be helpful to prioritize how you approach optimization because not everything needs to be perfectly adaptive on day one. Incrementally improve your app based on what matters most - it's not all or nothing.
a. Start with the foundations. Check out the large screen app quality guidelines, which tier and prioritize the fixes that are most critical to users. Remove orientation restrictions to support portrait and landscape, ensure support for resizability (for when users are in split screen), and prevent major stretching of buttons, text fields, and images. These foundational fixes are critical, especially with API changes in Android 16 that will make these aspects even more important.
b. Implement adaptive layout optimizations with a focus on core user journeys or screens first.
i. Identify screens where optimizations (for example a two-pane layout) offer the biggest UX win
ii. And then proceed to screens or parts of the app that are not as often used on large screens
c. Support input methods beyond touch, including keyboard, mouse, trackpad, and stylus input. With new form factors and connected displays support, this sets users up to interact with your UI seamlessly.
d. Add differentiating hero user experiences like support for tabletop mode or dual-screen mode on foldables. This can happen on a per-use-case basis - for example, tabletop mode is great for watching videos, and dual screen mode is great for video calls.
While there's an upfront investment in adopting adaptive principles (using tools like Jetpack Compose and window size classes), the long-term payoff can be significant: by designing and building features once and letting them adapt across screen sizes, you gain benefits that outweigh the cost of creating multiple bespoke layouts. Check out the adaptive apps developer guidance for more.
Unlock your app's potential with adaptive app design
The message for my fellow product managers, decision-makers, and businesses is clear: adaptive design will uplevel your app for high-quality Android experiences in 2025 and beyond. An adaptive, responsive UI is the scalable way to support the many devices in Android without developing on a per-form factor basis. If you ignore the diverse device ecosystem of foldables, tablets, Chromebooks, and emerging form factors like XR and cars, your business is accepting hidden costs from negative user reviews, lower discovery in Play, increased technical debt, and missed opportunities for increased user engagement and user acquisition.
Maximize your apps' impact and unlock new user experiences. Learn more about building adaptive apps today.
10 Jun 2025 6:01pm GMT
Android 16 is here
Posted by Matthew McCullough - VP of Product Management, Android Developer
Today we're releasing Android 16 and making it available on most supported Pixel devices. Look for new devices running Android 16 in the coming months.
This also marks the availability of the source code at the Android Open Source Project (AOSP). You can examine the source code for a deeper understanding of how Android works, and our focus on compatibility means that you can leverage your app development skills in Android Studio with Jetpack Compose to create applications that thrive across the entire ecosystem.
Major and minor SDK releases
With Android 16, we've added the concept of a minor SDK release to allow us to iterate our APIs more quickly, reflecting the rapid pace of the innovation Android is bringing to apps and devices.

We plan to have another release in Q4 of 2025, which will also include new developer APIs. Today's major release will be the only release in 2025 to include planned app-impacting behavior changes. In addition to new developer APIs, the Q4 minor release will pick up feature updates, optimizations, and bug fixes.
We'll continue to have quarterly Android releases. The Q3 update, in between the API releases, provides much of the new visual polish associated with Material 3 Expressive, and you can get the Q3 beta today on your supported Pixel device.
Camera and media APIs to empower creators
Android 16 enhances support for professional camera users, allowing for night mode scene detection, hybrid auto exposure, and precise color temperature adjustments. It's easier than ever to capture motion photos with new Intent actions, and we're continuing to improve UltraHDR images, with support for HEIC encoding and new parameters from the ISO 21496-1 draft standard. Support for the Advanced Professional Video (APV) codec improves Android's place in professional recording and post-production workflows, with perceptually lossless video quality that survives multiple decodings/re-encodings without severe visual quality degradation.
Also, Android's photo picker can now be embedded in your view hierarchy, and users will appreciate the ability to search cloud media.
More consistent, beautiful apps
Android 16 introduces changes to improve the consistency and visual appearance of apps, laying the foundation for the upcoming Material 3 Expressive changes. Apps targeting Android 16 can no longer opt out of going edge-to-edge, and the elegantTextHeight attribute is ignored to ensure proper spacing in Arabic, Lao, Myanmar, Tamil, Gujarati, Kannada, Malayalam, Odia, Telugu, and Thai.
Adaptive Android apps
With Android apps now running on a variety of devices and more windowing modes on large screens, developers should build Android apps that adapt to any screen and window size, regardless of device orientation. For apps targeting Android 16 (API level 36), Android 16 includes changes to how the system manages orientation, resizability, and aspect ratio restrictions. On displays with smallest width >= 600dp, the restrictions no longer apply and apps will fill the entire display window. You should check your apps to ensure your existing UIs scale seamlessly, working well across portrait and landscape aspect ratios. We're providing frameworks, tools, and libraries to help.

You can test these overrides without targeting Android 16 by enabling the UNIVERSAL_RESIZABLE_BY_DEFAULT flag in the app compatibility framework. Read more about changes to orientation and resizability APIs in Android 16.
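Assuming the standard app compatibility framework tooling, enabling the flag on a test device typically looks like this (the package name is a placeholder):

adb shell am compat enable UNIVERSAL_RESIZABLE_BY_DEFAULT com.example.yourapp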
Predictive back by default and more
Apps targeting Android 16 will have system animations for back-to-home, cross-task, and cross-activity by default. In addition, Android 16 extends predictive back navigation to three-button navigation, meaning that users long-pressing the back button will see a glimpse of the previous screen before navigating back.
To make it easier to get the back-to-home animation, Android 16 adds support for OnBackInvokedCallback with the new PRIORITY_SYSTEM_NAVIGATION_OBSERVER priority. Android 16 additionally adds the finishAndRemoveTaskCallback and moveTaskToBackCallback for custom back stack behavior with predictive back.
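A minimal sketch of registering an observer, assuming an Activity running on Android 16; the log message stands in for whatever bookkeeping your app needs:

import android.app.Activity
import android.os.Build
import android.os.Bundle
import android.util.Log
import android.window.OnBackInvokedDispatcher

class MainActivity : Activity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        if (Build.VERSION.SDK_INT >= 36) {
            // Observes back invocations without consuming them, so the
            // system's predictive back-to-home animation still plays.
            onBackInvokedDispatcher.registerOnBackInvokedCallback(
                OnBackInvokedDispatcher.PRIORITY_SYSTEM_NAVIGATION_OBSERVER
            ) { Log.d("BackObserver", "System back navigation observed") }
        }
    }
}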
Consistent progress notifications
Android 16 introduces Notification.ProgressStyle, which lets you create progress-centric notifications that can denote states and milestones in a user journey using points and segments. Key use cases include rideshare, delivery, and navigation. It's the basis for Live Updates, which will be fully realized in an upcoming Android 16 update.
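As a hedged sketch of a rideshare-style progress notification (the builder and nested-class names follow the Android 16 Notification.ProgressStyle documentation, but treat the exact API surface as an assumption):

import android.app.Notification
import android.content.Context

fun buildRideNotification(context: Context, channelId: String): Notification {
    // Two 50-unit segments (pickup leg, drop-off leg) with a milestone point
    // between them; overall progress is 25 out of 100.
    val style = Notification.ProgressStyle()
        .setProgressSegments(
            listOf(
                Notification.ProgressStyle.Segment(50),
                Notification.ProgressStyle.Segment(50)
            )
        )
        .setProgressPoints(listOf(Notification.ProgressStyle.Point(50)))
        .setProgress(25)

    return Notification.Builder(context, channelId)
        .setSmallIcon(android.R.drawable.ic_menu_directions)
        .setContentTitle("Driver is on the way")
        .setStyle(style)
        .build()
}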

Custom AGSL graphical effects
Android 16 adds RuntimeColorFilter and RuntimeXfermode, allowing you to author complex effects like Threshold, Sepia, and Hue Saturation in AGSL and apply them to draw calls.
Help to create better performing, more efficient apps and games
From APIs to help you understand app performance, to platform changes designed to increase efficiency, Android 16 is focused on making sure your apps perform well. Android 16:
- introduces system-triggered profiling to ProfilingManager,
- ensures at most one missed execution of scheduleAtFixedRate is immediately executed when the app returns to a valid lifecycle, for better efficiency,
- introduces hasArrSupport and getSuggestedFrameRate(int) to make it easier for your apps to take advantage of adaptive display refresh rates, and
- introduces the getCpuHeadroom and getGpuHeadroom APIs, along with CpuHeadroomParams and GpuHeadroomParams in SystemHealthManager, to provide games and resource-intensive apps with estimates of available GPU and CPU resources on supported devices.
JobScheduler updates
JobScheduler.getPendingJobReasons in Android 16 returns multiple reasons why a job is pending, due to both explicit constraints you set and implicit constraints set by the system. The new JobScheduler.getPendingJobReasonsHistory returns the list of the most recent pending job reason changes, allowing you to better tune the way your app works in the background.
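A small sketch of querying the new API, assuming a job id your app scheduled earlier:

import android.app.job.JobScheduler
import android.content.Context
import android.util.Log

fun logPendingReasons(context: Context, jobId: Int) {
    val scheduler = context.getSystemService(JobScheduler::class.java)
    // Android 16 can report several reasons at once, for example a network
    // constraint you set plus a system-imposed quota.
    val reasons = scheduler.getPendingJobReasons(jobId)
    Log.d("Jobs", "Job $jobId pending because: ${reasons.joinToString()}")
}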
Android 16 is making adjustments to regular and expedited job runtime quotas based on which app standby bucket the app is in, whether the job starts execution while the app is in a top state, and whether the job is executing while the app is running a foreground service.
To detect (and then reduce) abandoned jobs, apps should use the new STOP_REASON_TIMEOUT_ABANDONED job stop reason that the system assigns for abandoned jobs, instead of STOP_REASON_TIMEOUT.
16KB page sizes
Android 15 introduced support for 16KB page sizes to improve the performance of app launches, system boot-ups, and camera starts, while reducing battery usage. Android 16 adds a 16 KB page size compatibility mode, which, combined with new Google Play technical requirements, brings Android closer to having devices shipping with this important change. You can validate if your app needs updating using the 16KB page size checks & APK Analyzer in the latest version of Android Studio.
ART internal changes
Android 16 includes the latest updates to the Android Runtime (ART), improving performance and providing support for additional language features. These improvements are also available to over a billion devices running Android 12 (API level 31) and higher through Google Play System updates. Apps and libraries that rely on internal non-SDK ART structures may not continue to work correctly with these changes.
Privacy and security
Android 16 continues our mission to improve security and ensure user privacy. It includes improved security against Intent redirection attacks, makes MediaStore.getVersion unique to each app, adds an API that allows apps to share Android Keystore keys, incorporates the latest version of the Privacy Sandbox on Android, introduces a new behavior during the companion device pairing flow to protect the user's location privacy, and allows a user to easily select from and limit access to app-owned shared media in the photo picker.
Local network permission testing
Android 16 allows your app to test the upcoming local network permission feature, which will require your app to be granted NEARBY_WIFI_DEVICES permission. This change will be enforced in a future Android major release.
An Android built for everyone
Android 16 adds features such as Auracast broadcast audio with compatible LE Audio hearing aids, along with accessibility changes:
- extending TtsSpan with TYPE_DURATION,
- a new list-based API within AccessibilityNodeInfo,
- improved support for expandable elements using setExpandedState,
- RANGE_TYPE_INDETERMINATE for indeterminate ProgressBar widgets,
- AccessibilityNodeInfo getChecked and setChecked(int) methods that support a "partially checked" state,
- setSupplementalDescription, so you can provide text for a ViewGroup without overriding information from its children, and
- setFieldRequired, so apps can tell an accessibility service that input to a form field is required.
Outline text for maximum text contrast
Android 16 introduces outline text, replacing high contrast text, which draws a larger contrasting area around text to greatly improve legibility, along with new AccessibilityManager APIs to allow your apps to check or register a listener to see if this mode is enabled.

Get your apps, libraries, tools, and game engines ready!
If you develop an SDK, library, tool, or game engine, it's even more important to prepare any necessary updates now to prevent your downstream app and game developers from being blocked by compatibility issues and allow them to target the latest SDK features. Please let your developers know if updates to your SDK are needed to fully support Android 16.
Testing involves installing your production app, or a test app that makes use of your library or engine, onto a device or emulator running Android 16, using Google Play or other means. Work through all your app's flows and look for functional or UI issues. Review the behavior changes to focus your testing. Each release of Android contains platform changes that improve privacy, security, and overall user experience, and these changes can affect your apps. Here are several changes to focus on that apply, even if you aren't yet targeting Android 16:
- JobScheduler: JobScheduler quotas are enforced more strictly in Android 16; enforcement will occur if a job executes while the app is on top, when a foreground service is running, or in the active standby bucket. setImportantWhileForeground is now a no-op. The new stop reason STOP_REASON_TIMEOUT_ABANDONED occurs when we detect that the app can no longer stop the job.
- Broadcasts: Ordered broadcasts using priorities only work within the same process. Use another IPC if you need cross-process ordering.
- ART: If you use reflection, JNI, or any other means to access Android internals, your app might break. This is never a best practice. Test thoroughly.
- Intents: Android 16 has stronger security against Intent redirection attacks. Test your Intent handling, and only opt-out of the protections if absolutely necessary.
- 16KB Page Size: If your app isn't 16KB-page-size ready, you can use the new compatibility mode flag, but we recommend migrating to 16KB for best performance.
- Accessibility: announceForAccessibility is deprecated; use the recommended alternatives. Make sure to test with the new outline text feature.
- Bluetooth: Android 16 improves Bluetooth bond loss handling that impacts the way re-pairing occurs.
Other changes that will be impactful once your app targets Android 16:
- User Experience: Changes include the removal of edge-to-edge opt-out, required migration or opt-out for predictive back, and the disabling of elegant font APIs.
- Core Functionality: Optimizations have been made to fixed-rate work scheduling.
- Large Screen Devices: Orientation, resizability, and aspect ratio restrictions will be ignored. Ensure your layouts support all orientations across a variety of aspect ratios to adapt to different surfaces.
- Health and Fitness: Changes have been implemented for health and fitness permissions.
Get your app ready for the future:
- Local network protection: Consider testing your app with the upcoming Local Network Protection feature. It will give users more control over which apps can access devices on their local network in a future Android major release.
Remember to thoroughly exercise libraries and SDKs that your app is using during your compatibility testing. You may need to update to current SDK versions or reach out to the developer for help if you encounter any issues.
Once you've published the Android 16-compatible version of your app, you can start the process to update your app's targetSdkVersion. Review the behavior changes that apply when your app targets Android 16 and use the compatibility framework to help quickly detect issues.
Get started with Android 16
Your Pixel device should get Android 16 shortly if you haven't already been on the Android Beta. If you don't have a Pixel device, you can use the 64-bit system images with the Android Emulator in Android Studio. If you are currently on Android 16 Beta 4.1 and have not yet taken an Android 16 QPR1 beta, you can opt out of the program and you will then be offered the release version of Android 16 over the air.
For the best development experience with Android 16, we recommend that you use the latest Canary build of Android Studio Narwhal. Once you're set up, here are some of the things you should do:
- Test your current app for compatibility, learn whether your app is affected by changes in Android 16, and install your app onto a device or Android Emulator running Android 16 and extensively test it.
Thank you again to everyone who participated in our Android developer preview and beta program. We're looking forward to seeing how your apps take advantage of the updates in Android 16, and have plans to bring you updates in a fast-paced release cadence going forward.
For complete information on Android 16, please visit the Android 16 developer site.
10 Jun 2025 6:00pm GMT
20 May 2025
Android Developers Blog
Announcing Kotlin Multiplatform Shared Module Template
Posted by Ben Trengrove - Developer Relations Engineer, Matt Dyor - Product Manager
To empower Android developers, we're excited to announce Android Studio's new Kotlin Multiplatform (KMP) Shared Module Template. This template was specifically designed to allow developers to use a single codebase and apply business logic across platforms. More specifically, developers will be able to add shared modules to existing Android apps and share the business logic across their Android and iOS applications.
This makes it easier for Android developers to craft, maintain, and most importantly, own the business logic. The KMP Shared Module Template is available within Android Studio when you create a new module within a project.

A single code base for business logic
Most developers have grown accustomed to maintaining different code bases, platform to platform. In the past, whenever there's an update to the business logic, it must be carefully updated in each codebase. But with the KMP Shared Module Template:
- Developers can write once and publish the business logic to wherever they need it.
- Engineering teams can do more, faster.
- User experiences are more consistent across the entire audience, regardless of platform or form factor.
- Releases are better coordinated and launched with fewer errors.
Customers and developer teams who adopt the KMP Shared Module Template should expect greater ROI from mobile teams, who can turn more of their attention toward delighting users and worry less about inconsistent code.
KMP enthusiasm
The Android developer community remains very excited about KMP, especially after Google I/O 2024 where Google announced official support for shared logic across Android and iOS. We have seen continued momentum and enthusiasm from the community. For example, there are now over 1,500 KMP libraries listed on JetBrains' klibs.io.
Our customers are excited because KMP has made Android developers more productive. Consistently, Android developers have said that they want solutions that allow them to share code more easily and they want tools which boost productivity. This is why we recommend KMP; KMP simultaneously delivers a great experience for Android users while boosting ROI for the app makers. The KMP Shared Module Template is the latest step towards a developer ecosystem where user experience is consistent and applications are updated seamlessly.
Large scale KMP adoptions
This KMP Shared Module Template is new, but KMP more broadly is a maturing technology with several large-scale migrations underway. In fact, KMP has matured enough to support mission critical applications at Google. Google Docs, for example, is now running KMP in production on iOS with runtime performance on par or better than before. Beyond Google, Stone's 130 mobile developers are sharing over 50% of their code, allowing existing mobile teams to ship features approximately 40% faster to both Android and iOS.
KMP was designed for Android development
As always, we've designed the Shared Module Template with the needs of Android developer teams in mind. Making the KMP Shared Module Template part of the native Android Studio experience allows developers to efficiently add a shared module to an existing Android application and immediately start building shared business logic that leverages several KMP-ready Jetpack libraries including Room, SQLite, and DataStore to name just a few.
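To make that concrete, here's a hypothetical slice of shared business logic; the expect/actual split is standard Kotlin Multiplatform, and the names are illustrative (in a real module the declarations live in separate commonMain and androidMain source sets):

// commonMain: business logic written once for Android and iOS.
expect fun platformName(): String

class GreetingUseCase {
    fun greeting(): String = "Hello from shared logic on ${platformName()}!"
}

// androidMain: the Android-specific actual declaration.
actual fun platformName(): String = "Android"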
Come check it out at KotlinConf
Releasing Android Studio's KMP Shared Module Template marks a significant step toward empowering Android development teams to innovate faster, to efficiently manage business logic, and to build high-quality applications with greater confidence. It means that Android developers can be responsible for the code that drives the business logic for every app across Android and iOS. We're excited to bring Shared Module Template to KotlinConf in Copenhagen, May 21 - 23.

Get started with KMP Shared Module Template
To get started, you'll need the latest edition of Android Studio. In your Android project, the Shared Module Template is available within Android Studio when you create a new module. Click on "File", then "New", then "New Module", and finally "Kotlin Multiplatform Shared Module", and you are ready to add a KMP Shared Module to your Android app.
We appreciate any feedback on things you like or features you would like to see. If you find a bug, please report the issue. Remember to also follow us on X, LinkedIn, Blog, or YouTube for more Android development updates!
20 May 2025 10:00pm GMT
16 things to know for Android developers at Google I/O 2025
Posted by Matthew McCullough - VP of Product Management, Android Developer
Today at Google I/O, we announced the many ways we're helping you build excellent, adaptive experiences, and helping you stay more productive through updates to our tooling that put AI at your fingertips and throughout your development lifecycle. Here's a recap of 16 of our favorite announcements for Android developers; you can also see what was announced last week in The Android Show: I/O Edition. And stay tuned over the next two days as we dive into all of the topics in more detail!
Building AI into your Apps
1: Building intelligent apps with Generative AI
Generative AI can make your app's experience intelligent, personalized, and agentic. This year, we announced new ML Kit GenAI APIs that use Gemini Nano for common on-device tasks like summarization, proofreading, rewriting, and image description. For more complex use cases, such as image generation and processing extensive data across modalities, developers can harness more powerful models like Gemini Pro, Gemini Flash, and Imagen via Firebase AI Logic; this includes bringing AI to life in Android XR, and a new AI sample app, Androidify, that showcases how these APIs can transform your selfies into unique Android robots! To start building intelligent experiences with these new capabilities, explore the developer documentation and sample apps, and watch the overview session to choose the right solution for your app.
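For a taste of the Firebase AI Logic path from Kotlin, here is a minimal sketch, assuming the firebase-ai dependency is configured in your project (the model name and prompt are illustrative):

import com.google.firebase.Firebase
import com.google.firebase.ai.ai
import com.google.firebase.ai.type.GenerativeBackend

suspend fun suggestCaption(): String? {
    // Create a model instance backed by the Gemini Developer API.
    val model = Firebase.ai(backend = GenerativeBackend.googleAI())
        .generativeModel("gemini-2.5-flash")
    // Single-turn text generation; multimodal input is also supported.
    return model.generateContent("Suggest a caption for a sunset photo.").text
}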
New experiences across devices
2: One app, every screen: think adaptive and unlock 500 million screens
Mobile Android apps form the foundation across phones, foldables, tablets, and ChromeOS, and this year we're helping you bring them to cars and XR while expanding usage with desktop windowing and connected displays. This expansion means tapping into an ecosystem of 500 million devices: a significant opportunity to engage more users when you think adaptive, building a single mobile app that works across form factors. Resources, including the Compose Layouts library and Jetpack Navigation updates, help make building these dynamic experiences easier than before. You can see how Peacock, NBCUniversal's streaming service (available in the US), is building adaptively to meet users where they are.
3: Material 3 Expressive: design for intuition and emotion
The new Material 3 Expressive update provides tools to enhance your product's appeal by harnessing emotional UX, making it more engaging, intuitive, and desirable for users. Check out the I/O talk to learn more about expressive design and how it inspires emotion, clearly guides users toward their goals, and offers a flexible and personalized experience.

4: Smarter widgets, engaging live updates
Measure the return on investment of your widgets (available soon) and easily create personalized widget previews with Glance 1.2. Promoted Live Updates notify users of important ongoing notifications and come with a new Progress Style standardized template.

5: Enhanced Camera & Media: low light boost and battery savings
This year's I/O introduces several camera and media enhancements. These include a software low light boost for improved photography in dim lighting and native PCM offload, allowing the DSP to handle more audio playback processing, thus conserving user battery. Explore our detailed sessions on built-in effects within CameraX and Media3 for further information.
6: Build next-gen app experiences for Cars
We're launching expanded opportunities for developers to build in-car experiences, including new Gemini integrations, support for more app categories like Games and Video, and enhanced capabilities for media and communication apps via the Car App Library and new APIs. Alongside updated car app quality tiers and simplified distribution, we'll soon be providing improved testing tools like Android Automotive OS on Pixel Tablet and Firebase Test Lab access to help you bring your innovative apps to cars. Learn more from our technical session and blog post on new in-car app experiences.
7: Build for Android XR's expanding ecosystem with Developer Preview 2 of the SDK
We announced Android XR in December, and today at Google I/O we shared updates coming to the platform, including Developer Preview 2 of the Android XR SDK and an expanding ecosystem of devices: in addition to the first Android XR headset, Samsung's Project Moohan, you'll also see a new portable Android XR device from our partners at XREAL. There's lots more to cover for Android XR: watch the Compose and AI on Android XR session and the Building differentiated apps for Android XR with 3D content session, and learn more about building for Android XR.

8: Express yourself on Wear OS: meet Material Expressive on Wear OS 6
This year we are launching Wear OS 6: the most powerful and expressive version of Wear OS. Wear OS 6 features Material 3 Expressive, a new UI design with personalized visuals and motion for user creativity, coming to Wear, Android, and Google apps later this year. Developers gain access to Material 3 Expressive on Wear OS through new Jetpack libraries: Wear Compose Material 3, which provides components for apps, and Wear ProtoLayout Material 3, which provides components and layouts for tiles. Get started with the Material 3 libraries and other updates on Wear.

9: Engage users on Google TV with excellent TV apps
You can leverage more resources within Compose's core and Material libraries with the stable release of Compose for TV, empowering you to build excellent adaptive UIs across your apps. We're also thrilled to share exciting platform updates and developer tools designed to boost app engagement, including bringing Gemini capabilities to TV in the fall, opening enrollment for our Video Discovery API, and more.
Developer productivity
10: Build beautiful apps faster with Jetpack Compose
Compose is our big bet for UI development. The latest stable BOM release provides the features, performance, stability, and libraries that you need to build beautiful adaptive apps faster, so you can focus on what makes your app valuable to users.

11: Kotlin Multiplatform: new Shared Template lets you build across platforms, easily
Kotlin Multiplatform (KMP) enables teams to reach new audiences across Android and iOS with less development time. We've released a new Android Studio KMP shared module template, updated Jetpack libraries and new codelabs (Getting started with Kotlin Multiplatform and Migrating your Room database to KMP) to help developers who are looking to get started with KMP. Shared module templates make it easier for developers to craft, maintain, and own the business logic. Read more on what's new in Android's Kotlin Multiplatform.
12: Gemini in Android Studio: AI Agents to help you work
Gemini in Android Studio is the AI-powered coding companion that makes Android developers more productive at every stage of the dev lifecycle. In March, we introduced Image to Code to bridge the gap between UX teams and software engineers by intelligently converting design mockups into working Compose UI code. And today, we previewed new agentic AI experiences, Journeys for Android Studio and Version Upgrade Agent. These innovations make it easier to build and test code. You can read more about these updates in What's new in Android development tools.
13: Android Studio: smarter with Gemini
In this latest release, we're empowering devs with AI-driven tools like Gemini in Android Studio, streamlining UI creation, making testing easier, and ensuring apps are future-proofed in our ever-evolving Android ecosystem. These innovations accelerate development cycles, improve app quality, and help you stay ahead in a dynamic mobile landscape. To take advantage, upgrade to the latest Studio release. You can read more about these innovations in What's new in Android development tools.

And the latest on driving business growth
14: What's new in Google Play
Get ready for exciting updates from Play designed to boost your discovery, engagement and revenue! Learn how we're continuing to become a content-rich destination with enhanced personalization and fresh ways to showcase your apps and content. Plus, explore powerful new subscription features designed to streamline checkout and reduce churn. Read I/O 2025: What's new in Google Play to learn more.

15: Start migrating to Play Games Services v2 today
Play Games Services (PGS) connects over 2 billion gamer profiles on Play, powering cross-device gameplay, personalized gaming content and rewards for your players throughout the gaming journey. We are moving PGS v1 features to v2 with more advanced features and an easier integration path. Learn more about the migration timeline and new features.
16: And of course, Android 16
We unpacked some of the latest features coming to users in Android 16, which we've been previewing with you for the last few months. If you haven't already, make sure to test your apps with the latest Beta of Android 16. Android 16 includes Live Updates, professional media and camera features, desktop windowing and connected displays, major accessibility enhancements and much more.
Check out all of the Android and Play content at Google I/O
This was just a preview of some of the cool updates for Android developers at Google I/O, but stay tuned to Google I/O over the next two days as we dive into a range of Android developer topics in more detail. You can check out the What's New in Android and the full Android track of sessions, and whether you're joining in person or around the world, we can't wait to engage with you!
Explore this announcement and all Google I/O 2025 updates on io.google starting May 22.
20 May 2025 6:03pm GMT
What’s new in Wear OS 6
Posted by Chiara Chiappini - Developer Relations Engineer
This year, we're excited to introduce Wear OS 6: the most power-efficient and expressive version of Wear OS yet.
Wear OS 6 introduces the new design system we call Material 3 Expressive. It features a major refresh with visual and motion components designed to give users an experience with more personalization. The new design offers a great level of expression to meet user demand for experiences that are modern, relevant, and distinct. Material 3 Expressive is coming to Wear OS, Android, and all your favorite Google apps on these devices later this year.
The good news is that you don't need to compromise battery for beauty: thanks to Wear OS platform optimizations, watches updating from Wear OS 5 to Wear OS 6 can see up to 10% improvement in battery life.1
Wear OS 6 developer preview
Today we're releasing the Developer Preview of Wear OS 6, the next version of Google's smartwatch platform, based on Android 16.
Wear OS 6 brings a number of developer-facing changes, such as refining the always-on display experience. Check out what's changed and try the new Wear OS 6 emulator to test your app for compatibility with the new platform version.
Material 3 Expressive on Wear OS

Material 3 Expressive for the watch is fully optimized for the round display. We recommend developers embrace the new design system in their apps and tiles. To help you adopt Material 3 Expressive in your app, we have begun releasing new design guidance for Wear OS, along with corresponding Figma design kits.
As a developer, you can access Material 3 Expressive on Wear OS using new Jetpack libraries:
- Wear Compose Material 3, which provides components for apps.
- Wear ProtoLayout Material 3, which provides components and layouts for tiles.
These two libraries provide implementations of the component catalog that adhere to the Material 3 Expressive design language.
Make it personal with richer color schemes using themes

The Wear Compose Material 3 and Wear ProtoLayout Material 3 libraries provide updated and extended color schemes, typography, and shapes to bring both depth and variety to your designs. Additionally, your tiles now align with the system font by default (on Wear OS 6+ devices), offering a more cohesive experience on the watch.
Both libraries introduce dynamic color theming, which automatically generates a color theme for your app or tile to match the watch face colors on Pixel watches.
Make it more glanceable with new tile components
Tiles now support a new framework and a set of components that embrace the watch's circular form factor. These components make tiles more consistent and glanceable, so users can more easily take swift action on the information included in them.
We've introduced a 3-slot tile layout to improve visual consistency in the Tiles carousel. This layout includes a title slot, a main content slot, and a bottom slot, designed to work across a range of different screen sizes:

Highlight user actions and key information with components optimized for round screen
The new Wear OS Material 3 components automatically adapt to larger screen sizes, building on the Large Display support added as part of Wear OS 5. Additionally, components such as Buttons and Lists support shape morphing on apps.
The following sections highlight some of the most exciting changes to these components.
Embrace the round screen with the Edge Hugging Button
We introduced a new EdgeButton for apps and tiles with an iconic design pattern that maximizes the space within the circular form factor, hugs the edge of the screen, and comes in 4 standard sizes.
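Here's a minimal sketch of using it, assuming the EdgeButton composable from the Wear Compose Material 3 library (see the library reference for the full parameter list):

import androidx.compose.runtime.Composable
import androidx.wear.compose.material3.EdgeButton
import androidx.wear.compose.material3.Text

@Composable
fun ConfirmEdgeButton(onConfirm: () -> Unit) {
    // Anchored at the bottom of the screen, the button's shape
    // follows the curve of the round display.
    EdgeButton(onClick = onConfirm) {
        Text("Confirm")
    }
}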

Fluid navigation through lists using new indicators
The new TransformingLazyColumn from the Foundation library makes expressive motion easy, fluidly tracing the edges of the display. Developers can customize the collapsing behavior of the list when scrolling to the top and bottom of the screen. For example, components like Cards can scale down as they get closer to the top of the screen.

Material 3 Expressive also includes a ScrollIndicator that features a new visual and motion design to make it easier for users to visualize their progress through a list. The ScrollIndicator is displayed by default when you use a TransformingLazyColumn and ScreenScaffold.
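Here's a rough sketch of how these pieces can fit together, assuming the alpha TransformingLazyColumn and ScreenScaffold APIs (exact parameter names may differ between alpha releases):

import androidx.compose.runtime.Composable
import androidx.wear.compose.foundation.lazy.TransformingLazyColumn
import androidx.wear.compose.foundation.lazy.rememberTransformingLazyColumnState
import androidx.wear.compose.material3.ScreenScaffold
import androidx.wear.compose.material3.Text

@Composable
fun MessagesScreen(messages: List<String>) {
    val listState = rememberTransformingLazyColumnState()
    // ScreenScaffold wires up the ScrollIndicator for the list by default.
    ScreenScaffold(scrollState = listState) {
        TransformingLazyColumn(state = listState) {
            items(messages.size) { index ->
                Text(messages[index])
            }
        }
    }
}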

Lastly, you can now use segments with the new ProgressIndicator, available as a full-screen component for apps and as a small-size component for both apps and tiles.

To learn more about the new features and see the full list of updates, see the release notes of the latest beta release of the Wear Compose and Wear Protolayout libraries. Check out the migration guidance for apps and tiles on how to upgrade your existing apps, or try one of our codelabs if you want to start developing using Material 3 Expressive design.
Watch Faces
With Wear OS 6 we are launching updates for watch face developers:
- New options for customizing the appearance of your watch face using version 4 of Watch Face Format, such as animated state transitions from ambient to interactive and photo watch faces.
- A new API for building watch face marketplaces.
Learn more about what's new in Watch Face updates.
Look for more information about the general availability of Wear OS 6 later this year.
Library updates
ProtoLayout
Since our last major release, we've improved capabilities and the developer experience of the Tiles and ProtoLayout libraries to address feedback we received from developers. Some of these enhancements include:
- New Kotlin-only protolayout-material3 library adds support for enhanced visuals: Lottie animations (in addition to the existing animation capabilities), more gradient types, and new arc line styles.
- Developers can now write more idiomatic Kotlin, with APIs refined to better align with Jetpack Compose, including type-safe builders and an improved modifier syntax.
The example below shows how to display a layout with text on a tile using these new enhancements:
// returns a LayoutElement for use in onTileRequest()
materialScope(context, requestParams.deviceConfiguration) {
    primaryLayout(
        mainSlot = {
            text(
                text = "Hello, World!".layoutString,
                typography = BODY_LARGE,
            )
        }
    )
}
For more information, see the migration instructions.
Credential Manager for Wear OS
The CredentialManager API is now available on Wear OS, starting with Google Pixel Watch devices running Wear OS 5.1. It introduces passkeys to Wear OS with a platform-standard authentication UI that is consistent with the experience on mobile.
The Credential Manager Jetpack library provides developers with a unified API that simplifies and centralizes their authentication implementation. Developers with an existing implementation on another form factor can use the same CredentialManager code, and most of the same supporting code to fulfill their Wear OS authentication workflow.
Credential Manager provides integration points for passkeys, passwords, and Sign in With Google, while also allowing you to keep your other authentication solutions as backups.
Users will benefit from a consistent, platform-standard authentication UI, the introduction of passkeys and other passwordless authentication methods, and the ability to authenticate without their phone nearby.
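As a minimal sketch, a unified sign-in call with the androidx.credentials library can look like this (the passkey request JSON would come from your own server):

import android.content.Context
import androidx.credentials.CredentialManager
import androidx.credentials.GetCredentialRequest
import androidx.credentials.GetPasswordOption
import androidx.credentials.GetPublicKeyCredentialOption
import androidx.credentials.exceptions.GetCredentialException

suspend fun signIn(context: Context, passkeyRequestJson: String) {
    val credentialManager = CredentialManager.create(context)
    // Ask for passkeys first, with saved passwords as a fallback.
    val request = GetCredentialRequest(
        listOf(
            GetPublicKeyCredentialOption(requestJson = passkeyRequestJson),
            GetPasswordOption()
        )
    )
    try {
        val response = credentialManager.getCredential(context, request)
        // Hand response.credential to your existing sign-in logic.
    } catch (e: GetCredentialException) {
        // The user cancelled, or no matching credential was found.
    }
}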
Check out the Authentication on Wear OS guidance to learn more.
Richer Wear Media Controls

Devices that run Wear OS 5.1 or later support enhanced media controls. Users who listen to media content on phones and watches can now benefit from the following new media control features on their watch:
- They can fast-forward and rewind while listening to podcasts.
- They can access the playlist and controls such as shuffle, like, and repeat through a new menu.
Developers with an existing implementation of action buttons and playlists can benefit from this feature without additional effort. Check out how users will get more controls from your media app on a Google Pixel Watch device.
Start building for Wear OS 6 now
With these updates, there's never been a better time to develop an app on Wear OS. Our technical resources are a great place to learn how to get started.
Earlier this year, we expanded our smartwatch offerings with Galaxy Watch for Kids, a unique, phone-free experience designed specifically for children. This launch gives families a new way to stay connected, allowing children to explore Wear OS independently with a dedicated smartwatch. Consult our developer guidance to create a Wear OS app for kids.
We're looking forward to seeing the experiences that you build on Wear OS!
Explore this announcement and all Google I/O 2025 updates on io.google starting May 22.
1 Actual battery performance varies.
20 May 2025 6:02pm GMT
What’s new in Watch Faces
Posted by Garan Jenkin - Developer Relations Engineer
Wear OS has a thriving watch face ecosystem featuring a variety of designs that also aims to minimize battery impact. Developers have embraced the simplicity of creating watch faces using Watch Face Format - in the last year, the number of published watch faces using Watch Face Format has grown by over 180%*.
Today, we're continuing our investment and announcing version 4 of the Watch Face Format, available as part of Wear OS 6. These updates allow developers to express even greater levels of creativity through the new features we've added. And we're supporting marketplaces, which gives flexibility and control to developers and more choice for users.
In this blog post we'll cover the key new features; check out the documentation for more details on changes introduced in recent versions.
Supporting marketplaces with Watch Face Push
We're also announcing a completely new API, the Watch Face Push API, aimed at developers who want to create their own watch face marketplaces.
Watch Face Push, available on devices running Wear OS 6 and above, works exclusively with watch faces built with the Watch Face Format.
We've partnered with well-known watch face developers - including Facer, TIMEFLIK, WatchMaker, Pujie, and Recreative - in designing this new API. We're excited that all of these developers will be bringing their unique watch face experiences to Wear OS 6 using Watch Face Push.

Watch faces managed and deployed using Watch Face Push are all written using Watch Face Format. Developers publish these watch faces in the same way as publishing through Google Play, though there are some additional checks the developer must make which are described in the Watch Face Push guidance.

The Watch Face Push API covers only the watch part of this typical marketplace system diagram - as the app developer, you have control and responsibility for the phone app and cloud components, as well as for building the Wear OS app using Watch Face Push. You're also in control of the phone-watch communications, for which we recommend using the Data Layer APIs.
Adding Watch Face Push to your project
To start using Watch Face Push on Wear OS 6, include the following dependency in your Wear OS app:
// Ensure latest version is used by checking the repository
implementation("androidx.wear.watchface:watchface-push:1.3.0-alpha07")
Declare the necessary permission in your AndroidManifest.xml:
<uses-permission android:name="com.google.wear.permission.PUSH_WATCH_FACES" />
Obtain a Watch Face Push client:
val manager = WatchFacePushManagerFactory.createWatchFacePushManager(context)
You're now ready to start using the Watch Face Push API, for example to list the watch faces you have already installed, or add a new watch face:
// List existing watch faces, installed by this app
val listResponse = manager.listWatchFaces()

// Add a watch face
manager.addWatchFace(watchFaceFileDescriptor, validationToken)
Understanding Watch Face Push
While the basics of the Watch Face Push API are easy to understand and access through the WatchFacePushManager interface, it's important to consider several other factors when working with the API in practice to build an effective marketplace app, including:
- How to build watch faces for use with Watch Face Push - Watch faces deployed using Watch Face Push require an additional validation step to be performed by the developer. Learn more about how to build watch faces for use with Watch Face Push and how to integrate Watch Face Push into your application.
- Watch Face Slots - Each Watch Face Push-based application is able to install a limited number of watch faces at any given time, represented by a Slot. Learn more about how to work with and manage slots.
- Default watch faces - The API allows for a default watch face to be installed when the app is installed. Learn more about how to build and include this default watch face.
- Setting active watch faces - Through an additional permission, the app can set the active watch face. Learn about how to integrate this feature, as well as how to handle the different permission scenarios.
To learn more about using Watch Face Push, see the guidance and reference documentation.
Updates to Watch Face Format
Photos
Available from Watch Face Format v4
The new Photos element allows the watch face to contain user-selectable photos. The element supports both individual photos and a gallery of photos. For a gallery of photos, developers can choose whether the photos advance automatically or when the user taps the watch face.

The user is able to select the photos of their choice through the companion app, making this a great way to include true personalization in your watch face. To use this feature, first add the necessary configuration:
<UserConfigurations>
  <PhotosConfiguration id="myPhoto" configType="SINGLE" />
</UserConfigurations>
Then use the Photos element within any PartImage, in the same way as you would for an Image element:
<PartImage ...>
  <Photos source="[CONFIGURATION.myPhoto]"
      defaultImageResource="placeholder_photo" />
</PartImage>
For details on how to support multiple photos, and how to configure the different change behaviors, refer to the Photos section of the guidance and reference, as well as the GitHub samples.
Transitions
Available from Watch Face Format v4
Watch Face Format now supports transitions when exiting and entering ambient mode.

This is achieved through the existing Variant tag. For example, the hours and minutes in the above watch face are animated as follows:
<DigitalClock ...>
  <Variant mode="AMBIENT" target="x" value="100" interpolation="OVERSHOOT" />
  <!-- Rest of "hh:mm" clock definition here -->
</DigitalClock>
By default, the animation takes the full extent of allowed time for the transition. The new interpolation attribute controls the animation effect - in this case the use of OVERSHOOT adds a playful experience.
The seconds are implemented in a separate DigitalClock element, which shows the use of the new duration attribute:
<DigitalClock ...>
  <Variant mode="AMBIENT" target="alpha" value="0" duration="0.5" />
  <!-- Rest of "ss" clock definition here -->
</DigitalClock>
The duration attribute takes a value between 0.0 and 1.0, with 1.0 representing the full extent of the allowed time. In this example, by using a value of 0.5, the seconds animation is quicker - taking half the allowed time, in comparison to the hours and minutes, which take the entire transition period.
For more details on using transitions, see the guidance documentation, as well as the reference documentation for Variant.
Color Transforms
Available from Watch Face Format v4
We've extended the usefulness of the Transform element by allowing color to be transformed on the majority of elements where it is an attribute, and also allowing tintColor to be transformed on Group and Part* elements such as PartDraw and PartText.
The main exceptions to this addition are the clock elements, DigitalClock and AnalogClock, and also ComplicationSlot, which do not currently support Transform.
In addition to extending the list of transformable attributes to include colors, we've also added a handful of useful functions for manipulating color, including extractColorFromColors and extractColorFromWeightedColors.
To see these in action, let's consider an example.
The Weather data source provides the current UV index through [WEATHER.UV_INDEX]. When representing the UV index, these values are typically also assigned a color:

We want to represent this information as an Arc, not only showing the value, but also using the appropriate color. We can achieve this as follows:
<Arc centerX="0" centerY="0" height="420" width="420"
    startAngle="165" endAngle="165" direction="COUNTER_CLOCKWISE">
  <Transform target="endAngle"
      value="165 - 40 * (clamp([WEATHER.UV_INDEX], 0.0, 11.0) / 11.0)" />
  <Stroke thickness="20" color="#ffffff" cap="ROUND">
    <Transform target="color"
        value="extractColorFromWeightedColors(#97d700 #FCE300 #ff8200 #f65058 #9461c9, 3 3 2 3 1, false, clamp([WEATHER.UV_INDEX] + 0.5, 0.0, 12.0) / 12.0)" />
  </Stroke>
</Arc>
Let's break this down:
- The first Transform restricts the UV index to the range 0.0 to 11.0 and adjusts the sweep of the Arc according to that value.
- The second Transform uses the new extractColorFromWeightedColors function:
  - The first argument is our list of colors.
  - The second argument is a list of weights - you can see from the chart above that green covers 3 values, whereas orange only covers 2, so we use weights to represent this (a worked example follows this list).
  - The third argument is whether or not to interpolate the color values. In this case we want to stick strictly to the color convention for UV index, so this is false.
  - Finally, the fourth argument coerces the UV value into the range 0.0 to 1.0, which is used as an index into our weighted colors.
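To make the weighting concrete: the weights 3 3 2 3 1 sum to 12, so green occupies the first 3/12 = 0.25 of the 0.0 to 1.0 range, yellow the next 0.25, orange the next 2/12, and so on. A UV index of 5 becomes clamp(5 + 0.5, 0.0, 12.0) / 12.0 ≈ 0.46, which falls in the yellow band (0.25 to 0.5), matching the convention that UV values 3 to 5 are shown in yellow.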
The result looks like this:

As well as taking raw colors and weights, these functions can be used with values from complications, such as heart rate, temperature, or step goals. For example, to use the color range specified in a goal complication:
<Transform target="color"
    value="extractColorFromColors(
        [COMPLICATION.GOAL_PROGRESS_COLORS],
        [COMPLICATION.GOAL_PROGRESS_COLOR_INTERPOLATE],
        [COMPLICATION.GOAL_PROGRESS_VALUE] / [COMPLICATION.GOAL_PROGRESS_TARGET_VALUE]
    )" />
Introducing the Reference element
Available from Watch Face Format v4
The new Reference element allows you to refer to any transformable attribute from one part of your watch face scene in other parts of the scene tree.
In our UV index example above, we'd also like the text labels to use the same color scheme.
We could perform the same color transform calculation as on our Arc, using [WEATHER.UV_INDEX], but this is duplicative work which could lead to inconsistencies, for example if we change the exact color hues in one place but not the other.
Returning to the Arc definition, let's create a Reference to the color:
<Arc centerX="0" centerY="0" height="420" width="420"
    startAngle="165" endAngle="165" direction="COUNTER_CLOCKWISE">
  <Transform target="endAngle"
      value="165 - 40 * (clamp([WEATHER.UV_INDEX], 0.0, 11.0) / 11.0)" />
  <Stroke thickness="20" color="#ffffff" cap="ROUND">
    <Reference source="color" name="uv_color" defaultValue="#ffffff" />
    <Transform target="color"
        value="extractColorFromWeightedColors(#97d700 #FCE300 #ff8200 #f65058 #9461c9, 3 3 2 3 1, false, clamp([WEATHER.UV_INDEX] + 0.5, 0.0, 12.0) / 12.0)" />
  </Stroke>
</Arc>
The color of the Arc is calculated from the relatively complex extractColorFromWeightedColors function. To avoid repeating this elsewhere in our watch face, we have added a Reference element, which takes as its source the Stroke color.
Let's now look at how we can consume this value in a PartText elsewhere in the watch face. We gave the Reference the name uv_color, so we can simply refer to this in any expression:
<PartText x="0" y="225" width="450" height="225">
  <TextCircular centerX="225" centerY="0" width="420" height="420"
      startAngle="120" endAngle="90"
      align="START" direction="COUNTER_CLOCKWISE">
    <Font family="SYNC_TO_DEVICE" size="24">
      <Transform target="color" value="[REFERENCE.uv_color]" />
      <Template>%d<Parameter expression="[WEATHER.UV_INDEX]" /></Template>
    </Font>
  </TextCircular>
</PartText>
<!-- Similar PartText here for the "UV:" label -->
As a result, the color of the Arc and the UV numeric value are now coordinated:

For more details on how to use the Reference element, refer to the Reference guidance.
Text autosizing
Available from Watch Face Format v3
Sometimes the exact length of the text to be shown on the watch face can vary, and as a developer you want the text to be both legible and complete.
Auto-sizing text can help solve this problem, and can be enabled through the isAutoSize attribute introduced to the Text element:
<Text align="CENTER" isAutoSize="true">
Having set this attribute, text will then automatically fit the available space, starting at the maximum size specified in your Font element, and with a minimum size of 12.
As an example, step count could range from tens or hundreds through to many thousands, and the new isAutoSize attribute enables best use of the available space for every possible value:

For more details on isAutoSize, see the Text reference.
Android Studio support
For developers working in Android Studio, we've added support to make working with Watch Face Format easier, including:
- Run configuration support
- Auto-complete and resource reference
- Lint checking
This is available from Android Studio version 2025.1.1 Canary 10.
Learn More
To learn more about building watch faces, please take a look at the Watch Face Format guidance and reference documentation.
We've also recently launched a codelab for Watch Face Format and have updated samples on GitHub to showcase new features. The issue tracker is available for providing feedback.
We're excited to see the watch face experiences that you create and share!
Explore this announcement and all Google I/O 2025 updates on io.google starting May 22.
* Google Play data for period 2024-03-24 to 2025-03-23
20 May 2025 6:01pm GMT
What's New in Jetpack Compose
Posted by Nick Butcher - Product Manager
At Google I/O 2025, we announced a host of features, performance, stability, libraries, and tools updates for Jetpack Compose, our recommended Android UI toolkit. With Compose you can build excellent apps that work across devices. Compose has matured a lot since it was first announced (at Google I/O 2019!), and 60% of the top 1,000 apps in the Play Store, including MAX and Google Drive, now use and love it.
New Features
Since I/O last year, the Compose Bill of Materials (BOM) version 2025.05.01 has added new features such as:
- Autofill support that lets users automatically insert previously entered personal information into text fields.
- Auto-sizing text to smoothly adapt text size to a parent container size.
- Visibility tracking for when you need high-performance information on a composable's position in its root container, screen, or window.
- Animate bounds modifier for beautiful automatic animations of a Composable's position and size within a LookaheadScope.
- Accessibility checks in tests that let you build a more accessible app UI through automated a11y testing.
LookaheadScope {
    Box(
        Modifier
            .animateBounds(this@LookaheadScope)
            .width(if (inRow) 100.dp else 150.dp)
            .background(..)
            .border(..)
    )
}

For more details on these features, read What's new in the Jetpack Compose April '25 release and check out these talks from Google I/O:
If you're looking to try out new Compose functionality, the alpha BOM offers new features that we're working on including:
- Pausable Composition (see below)
- Updates to LazyLayout prefetch
- Context Menus
- New modifiers: onFirstVisible, onVisibilityChanged, contentType
- New Lint checks for frequently changing values and elements that should be remembered in composition
Please try out the alpha features and provide feedback to help shape the future of Compose.
Material Expressive
At Google I/O, we unveiled Material Expressive, Material Design's latest evolution that helps you make your products even more engaging and easier to use. It's a comprehensive addition of new components, styles, motion and customization options that help you to build beautiful rich UIs. The Material3 library in the latest alpha BOM contains many of the new expressive components for you to try out.

Learn more to start building with Material Expressive.
Adaptive layouts library
Developing adaptive apps across form factors including phones, foldables, tablets, desktop, cars and Android XR is now easier with the latest enhancements to the Compose adaptive layouts library. The stable 1.1 release adds support for predictive back gestures for smoother transitions and pane expansion for more flexible two pane layouts on larger screens. Furthermore, the 1.2 (alpha) release adds more flexibility for how panes are displayed, adding strategies for reflowing and levitating.

Learn more about building adaptive android apps with Compose.
Performance
With each release of Jetpack Compose, we continue to prioritize performance improvements. The latest stable release includes significant rewrites and improvements to multiple sub-systems, including semantics, focus, and text optimizations. Best of all, these are available to you simply by upgrading your Compose dependency; no code changes are required.

We continue to work on further performance improvements, notable changes in the latest alpha BOM include:
- Pausable Composition allows compositions to be paused, and their work split up over several frames.
- Background text prefetch enables text layout caches to be pre-warmed on a background thread, speeding up text layout.
- LazyLayout prefetch improvements enabling lazy layouts to be smarter about how much content to prefetch, taking advantage of pausable composition.
Together these improvements eliminate nearly all jank in an internal benchmark.
Stability
We've heard from you that upgrading your Compose dependency can be challenging, encountering bugs or behaviour changes that prevent you from staying on the latest version. We've invested significantly in improving the stability of Compose, working closely with the many Google app teams building with Compose to detect and prevent issues before they even make it to a release.
Google apps develop against and release with snapshot builds of Compose; as such, Compose is tested against the hundreds of thousands of Google app tests and any Compose issues are immediately actioned by our team. We have recently invested in increasing the cadence of updating these snapshots and now update them daily from Compose tip-of-tree, which means we're receiving feedback faster, and are able to resolve issues long before they reach a public release of the library.
Jetpack Compose also relies on @Experimental annotations to mark APIs that are subject to change. We heard your feedback that some APIs have remained experimental for a long time, reducing your confidence in the stability of Compose. We have invested in stabilizing experimental APIs to provide you a more solid API surface, and reduced the number of experimental APIs by 32% in the last year.
We have also heard that it can be hard to debug Compose crashes when your own code does not appear in the stack trace. In the latest alpha BOM, we have added a new opt-in feature to provide more diagnostic information. Note that this does not currently work with minified builds and comes at a performance cost, so we recommend only using this feature in debug builds.
class App : Application() {
    override fun onCreate() {
        super.onCreate()
        // Enable only for debug flavor to avoid perf impact in release
        Composer.setDiagnosticStackTraceEnabled(BuildConfig.DEBUG)
    }
}
Libraries
We know that to build great apps, you need Compose integration in the libraries that interact with your app's UI.
A core library that powers any Compose app is Navigation. You told us that you often encountered limitations when managing state hoisting and directly manipulating the back stack with the current Compose Navigation solution. We went back to the drawing board and completely reimagined how a navigation library should integrate with the Compose mental model. We're excited to introduce Navigation 3, a new artifact designed to empower you with greater control and simplify complex navigation flows.
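As a rough sketch of that mental model, and assuming the early Navigation 3 API surface (which may still change), the back stack becomes plain state that you own; HomeScreen and DetailScreen below stand in for your own composables:

import androidx.compose.runtime.Composable
import androidx.compose.runtime.mutableStateListOf
import androidx.compose.runtime.remember
import androidx.navigation3.runtime.NavEntry
import androidx.navigation3.ui.NavDisplay

data object Home
data class Detail(val id: String)

@Composable
fun AppNavigation() {
    // The back stack is observable state you manipulate directly,
    // rather than something hidden inside the library.
    val backStack = remember { mutableStateListOf<Any>(Home) }
    NavDisplay(
        backStack = backStack,
        onBack = { backStack.removeLastOrNull() },
        entryProvider = { key ->
            when (key) {
                is Home -> NavEntry(key) { HomeScreen(onOpen = { id -> backStack.add(Detail(id)) }) }
                is Detail -> NavEntry(key) { DetailScreen(key.id) }
                else -> error("Unknown destination: $key")
            }
        }
    )
}

@Composable
fun HomeScreen(onOpen: (String) -> Unit) { /* your list UI */ }

@Composable
fun DetailScreen(id: String) { /* your detail UI */ }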
We're also investing in Compose support for CameraX and Media3, making it easier to integrate camera capture and video playback into your UI with Compose idiomatic components.
@Composable
private fun VideoPlayer(
    player: Player?, // from media3
    modifier: Modifier = Modifier
) {
    Box(modifier) {
        PlayerSurface(player) // from media3-ui-compose
        player?.let {
            // Custom play-pause button UI
            val playPauseButtonState = rememberPlayPauseButtonState(it) // from media3-ui-compose
            MyPlayPauseButton(playPauseButtonState, Modifier.align(BottomEnd).padding(16.dp))
        }
    }
}
To learn more, see the media3 Compose documentation and the CameraX samples.
Tools
We continue to improve the Android Studio tools for creating Compose UIs. The latest Narwhal canary includes:
- Resizable Previews instantly show you how your Compose UI adapts to different window sizes
- Preview navigation improvements using clickable names and components
- Studio Labs 🧪: Compose preview generation with Gemini to quickly generate a preview
- Studio Labs 🧪: Transform UI with Gemini to change your UI with natural language, directly from the preview
- Studio Labs 🧪: Image attachment in Gemini to generate Compose code from images
For more information read What's new in Android development tools.

New Compose Lint checks
The Compose alpha BOM introduces two new annotations and associated lint checks to help you to write correct and performant Compose code. The @FrequentlyChangingValue annotation and FrequentlyChangedStateReadInComposition lint check warn in situations where function calls or property reads in composition might cause frequent recompositions, for example when reading scroll position values or animating values. The @RememberInComposition annotation and RememberInCompositionDetector lint check warn in situations where constructors, functions, and property getters are called directly inside composition (e.g. the TextFieldState constructor) without being remembered.
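To illustrate the first case with standard Compose APIs: reading a scroll offset directly in composition is the kind of frequently changing read these checks are designed to surface, and deriving a coarser value avoids the churn (a sketch, not the lint checks' own example):

import androidx.compose.foundation.lazy.rememberLazyListState
import androidx.compose.runtime.Composable
import androidx.compose.runtime.derivedStateOf
import androidx.compose.runtime.getValue
import androidx.compose.runtime.remember

@Composable
fun ScrollAwareHeader() {
    val listState = rememberLazyListState()

    // Problematic: recomposes this composable on every scroll frame.
    val offset = listState.firstVisibleItemScrollOffset

    // Better: only recomposes when the derived Boolean actually changes.
    val isScrolled by remember {
        derivedStateOf { listState.firstVisibleItemScrollOffset > 0 }
    }
}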
Happy Composing
We continue to invest in providing the features, performance, stability, libraries and tools that you need to build excellent apps. We value your input so please share feedback on our latest updates or what you'd like to see next.
Explore this announcement and all Google I/O 2025 updates on io.google starting May 22.
20 May 2025 6:00pm GMT
Updates to the Android XR SDK: Introducing Developer Preview 2
Posted by Matthew McCullough - VP of Product Management, Android Developer
Since launching the Android XR SDK Developer Preview alongside Samsung, Qualcomm, and Unity last year, we've been blown away by all of the excitement we've been hearing from the broader Android community. Whether it's through coding live-streams or local Google Developer Group talks, it's been an outstanding experience participating in the community to build the future of XR together, and we're just getting started.
Today we're excited to share an update to the Android XR SDK: Developer Preview 2, packed with new features and improvements to help you develop helpful and delightful immersive experiences with familiar Android APIs, tools and open standards created for XR.
At Google I/O, we have two technical sessions related to Android XR. The first is Building differentiated apps for Android XR with 3D content, which covers many features present in Jetpack SceneCore and ARCore for Jetpack XR. The second, The future is now, with Compose and AI on Android XR, covers creating XR-differentiated UI and our vision of the intersection of XR with cutting-edge AI capabilities.

What's new in Developer Preview 2
Since the release of Developer Preview 1, we've been focused on making the APIs easier to use and adding new immersive Android XR features. Your feedback has helped us shape the development of the tools, SDKs, and the platform itself.
With the Jetpack XR SDK, you can now play back 180° and 360° videos, which can be stereoscopic by encoding with the MV-HEVC specification or by encoding view-frames adjacently. The MV-HEVC standard is optimized and designed for stereoscopic video, allowing your app to efficiently play back immersive videos at great quality. Apps built with Jetpack Compose for XR can use the SpatialExternalSurface composable to render media, including stereoscopic videos.
Using Jetpack Compose for XR, you can now also define layouts that adapt to different XR display configurations. For example, use a SubspaceModifier to specify the size of a Subspace as a percentage of the device's recommended viewing size, so a panel effortlessly fills the space it's positioned in.
Material Design for XR now supports more component overrides for TopAppBar, AlertDialog, and ListDetailPaneScaffold, helping your large-screen enabled apps that use Material Design effortlessly adapt to the new world of XR.

In ARCore for Jetpack XR, you can now track hands after requesting the appropriate permissions. Hands are a collection of 26 posed hand joints that can be used to detect hand gestures and bring a whole new level of interaction to your Android XR apps:

For more guidance on developing apps for Android XR, check out our Android XR Fundamentals codelab, the updates to our Hello Android XR sample project, and a new version of JetStream with Android XR support.
The Android XR Emulator has also received stability updates, added support for AMD GPUs, and is now fully integrated within the Android Studio UI.

Developers using Unity have already successfully created and ported existing games and apps to Android XR. Today, you can upgrade to the Pre-Release version 2 of the Unity OpenXR: Android XR package! This update adds many performance improvements such as support for Dynamic Refresh Rate, which optimizes your app's performance and power consumption. Shaders made with Shader Graph now support SpaceWarp, making it easier to use SpaceWarp to reduce compute load on the device. Hand meshes are now exposed with occlusion, which enables realistic hand visualization.
Check out Unity's improved Mixed Reality template for Android XR, which now includes support for occlusion and persistent anchors.
We recently launched Android XR Samples for Unity, which demonstrate capabilities on the Android XR platform such as hand tracking, plane tracking, face tracking, and passthrough.

Firebase AI Logic for Unity is now in public preview! This makes it easy for you to integrate gen AI into your apps, enabling the creation of AI-powered experiences with Gemini and Android XR. Firebase AI Logic fully supports Gemini's capabilities, including multimodal input and output, and bi-directional streaming for immersive conversational interfaces. Built with production readiness in mind, Firebase AI Logic is integrated with core Firebase services like App Check, Remote Config, and Cloud Storage for enhanced security, configurability, and data management. Learn more about this on the Firebase blog or go straight to the Gemini API using Vertex AI in Firebase SDK documentation to get started.
Continuing to build the future together
Our commitment to open standards continues with the glTF Interactivity specification, in collaboration with the Khronos Group, which will be supported in glTF models rendered by Jetpack XR later this year. Models using the glTF Interactivity specification are self-contained interactive assets that can have many pre-programmed behaviors, like rotating objects on a button press or changing the color of a material over time.
Android XR will be available first on Samsung's Project Moohan, launching later this year. Soon after, our partners at XREAL will release the next Android XR device. Codenamed Project Aura, it's a portable and tethered device that gives users access to their favorite Android apps, including those that have been built for XR. It will launch as a developer edition, specifically for you to begin creating and experimenting. The best news? With the familiar tools you use to build Android apps today, you can build for these devices too.

The Google Play Store is also getting ready for Android XR. It will list supported 2D Android apps on the Android XR Play Store when it launches later this year. If you are working on an Android XR differentiated app, you can get it ready for the big launch and be one of the first differentiated apps on the Android XR Play Store:
- Install and test your existing app in the Android XR Emulator
- Learn how to package and distribute apps for Android XR
- New! Make your XR app stand out from others on Play Store with preview assets such as stereoscopic 180° or 360° videos, as well as screenshots, app description, and non-spatial video.
And we know many of you are excited for the future of Android XR on glasses. We are shaping the developer experience now and will share more details on how you can participate later this year.
To get started creating and developing for Android XR, check out developer.android.com/develop/xr where you will find all of the tools, libraries, and resources you need to work with the Android XR SDK. In particular, try out our samples and codelabs.
We welcome your feedback, suggestions, and ideas as you're helping shape Android XR. Your passion, expertise, and bold ideas are vital as we continue to develop Android XR together. We look forward to seeing your XR-differentiated apps when Android XR devices launch later this year!
Explore this announcement and all Google I/O 2025 updates on io.google starting May 22.
20 May 2025 5:59pm GMT
Peacock built adaptively on Android to deliver great experiences across screens
Posted by Sa-ryong Kang and Miguel Montemayor - Developer Relations Engineers
Peacock is NBCUniversal's streaming service app available in the US, offering culture-defining entertainment including live sports, exclusive original content, TV shows, and blockbuster movies. The app continues to evolve, becoming more than just a platform to watch content, but a hub of entertainment.
Today's users are consuming entertainment on an increasingly wider array of device sizes and types, and in particular are moving towards mobile devices. Peacock has adopted Jetpack Compose to help with its journey in adapting to more screens and meeting users where they are.
Adapting to more flexible form factors
The Peacock development team is focused on bringing the best experience to users, no matter what device they're using or when they want to consume content. With an emerging trend from app users to watch more on mobile devices and large screens like foldables, the Peacock app needs to be able to adapt to different screen sizes. As more devices are introduced, the team needed to explore new solutions that make the most out of each unique display permutation.
The goal was to have the Peacock app adapt to these new displays while continually offering high-quality entertainment without interruptions, like the stream reloading or visual errors. While thinking ahead, they also wanted to prepare and build a solution that was ready for Android XR, as the entertainment landscape is shifting towards including more immersive experiences.

Building a future-proof experience with Jetpack Compose
In order to build a scalable solution that would help the Peacock app continue to evolve, the app was migrated to Jetpack Compose, Android's toolkit for building scalable UI. One of the essential tools they used was the WindowSizeClass API, which helps developers create and test UI layouts for different size ranges. This API then allows the app to seamlessly switch between pre-set layouts as it reaches established viewport breakpoints for different window sizes.
The API was used in conjunction with Kotlin Coroutines and Flows to keep the UI state responsive as the window size changed. To test their work and fine tune edge case devices, Peacock used the Android Studio emulator to simulate a wide range of Android-based devices.
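Here's a condensed sketch of that pattern using the Material 3 window size class API (ListDetailLayout and SinglePaneLayout stand in for Peacock's own composables):

import androidx.activity.ComponentActivity
import androidx.compose.material3.windowsizeclass.ExperimentalMaterial3WindowSizeClassApi
import androidx.compose.material3.windowsizeclass.WindowWidthSizeClass
import androidx.compose.material3.windowsizeclass.calculateWindowSizeClass
import androidx.compose.runtime.Composable

@OptIn(ExperimentalMaterial3WindowSizeClassApi::class)
@Composable
fun AdaptiveRoot(activity: ComponentActivity) {
    val sizeClass = calculateWindowSizeClass(activity)
    when (sizeClass.widthSizeClass) {
        // Tablets, desktops, and unfolded foldables get two panes.
        WindowWidthSizeClass.Expanded -> ListDetailLayout()
        // Phones and folded foldables get a single pane.
        else -> SinglePaneLayout()
    }
}

@Composable
fun ListDetailLayout() { /* list + detail side by side */ }

@Composable
fun SinglePaneLayout() { /* single pane */ }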
Jetpack Compose allowed the team to build adaptively, so now the Peacock app responds to a wide variety of screens while offering a seamless experience to Android users. "The app feels more native, more fluid, and more intuitive across all form factors," said Diego Valente, Head of Mobile, Peacock and Global Streaming. "That means users can start watching on a smaller screen and continue instantly on a larger one when they unfold the device: no reloads, no friction. It just works."
Preparing for immersive entertainment experiences
In building adaptive apps on Android, John Jelley, Senior Vice President, Product & UX, Peacock and Global Streaming, says Peacock has also laid the groundwork to quickly adapt to the Android XR platform: "Android XR builds on the same large screen principles, our investment here naturally extends to those emerging experiences with less developmental work."
The team is excited about the prospect of features unlocked by Android XR, like Multiview for sports and TV, which enables users to watch multiple games or camera angles at once. By tailoring spatial windows to the user's environment, the app could offer new ways for users to interact with contextual metadata like sports stats or actor information, all without ever interrupting their experience.
Build adaptive apps
Learn how to unlock your app's full potential on phones, tablets, foldables, and beyond.
Explore this announcement and all Google I/O 2025 updates on io.google starting May 22.
20 May 2025 5:58pm GMT
On-device GenAI APIs as part of ML Kit help you easily build with Gemini Nano
Posted by Caren Chang - Developer Relations Engineer, Chengji Yan - Software Engineer, Taj Darra - Product Manager
We are excited to announce a set of on-device GenAI APIs, as part of ML Kit, to help you integrate Gemini Nano in your Android apps.
To start, we are releasing 4 new APIs:
- Summarization: to summarize articles and conversations
- Proofreading: to polish short text
- Rewriting: to reword text in different styles
- Image Description: to provide a short description for images
Key benefits of GenAI APIs
GenAI APIs are high-level APIs that allow for easy integration, similar to existing ML Kit APIs. This means you can expect quality results out of the box without extra effort for prompt engineering or fine-tuning for specific use cases.
GenAI APIs run on-device and thus provide the following benefits:
- Input, inference, and output data is processed locally
- Functionality remains the same without a reliable internet connection
- No additional cost incurred for each API call
To prevent misuse, we also added safety protection in various layers, including base model training, safety-aware LoRA fine-tuning, input and output classifiers and safety evaluations.
How GenAI APIs are built
There are 4 main components that make up each of the GenAI APIs.
- Gemini Nano is the base model, the foundation shared by all APIs.
- Small API-specific LoRA adapter models are trained and deployed on top of the base model to further improve the quality for each API.
- Optimized inference parameters (e.g. prompt, temperature, topK, batch size) are tuned for each API to guide the model in returning the best results.
- An evaluation pipeline ensures quality across various datasets and attributes. This pipeline consists of LLM raters, statistical metrics, and human raters.
Together, these components make up the high-level GenAI APIs that simplify the effort needed to integrate Gemini Nano in your Android app.
Evaluating quality of GenAI APIs
For each API, we formulate a benchmark score based on the evaluation pipeline mentioned above. This score is based on attributes specific to a task. For example, when evaluating the summarization task, one of the attributes we look at is "grounding" (i.e., the factual consistency of the generated summary with the source content).
To provide out-of-box quality for GenAI APIs, we applied feature specific fine-tuning on top of the Gemini Nano base model. This resulted in an increase for the benchmark score of each API as shown below:
| Use case in English | Gemini Nano Base Model | ML Kit GenAI API |
|---|---|---|
| Summarization | 77.2 | 92.1 |
| Proofreading | 84.3 | 90.2 |
| Rewriting | 79.5 | 84.1 |
| Image Description | 86.9 | 92.3 |
In addition, this is a quick reference of how the APIs perform on a Pixel 9 Pro:
| | Prefix Speed (input processing rate) | Decode Speed (output generation rate) |
|---|---|---|
| Text-to-text | 510 tokens/second | 11 tokens/second |
| Image-to-text | 510 tokens/second + 0.8 seconds for image encoding | 11 tokens/second |
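As a back-of-the-envelope reading of this table: summarizing a 3,000-token article would take roughly 3,000 / 510 ≈ 6 seconds to process the input, and streaming out a 50-token summary would add about 50 / 11 ≈ 4.5 seconds of generation time.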
Sample usage
This is an example of implementing the GenAI Summarization API to get a one-bullet summary of an article:
val articleToSummarize = "We are excited to announce a set of on-device generative AI APIs..."

// Define task with desired input and output format
val summarizerOptions = SummarizerOptions.builder(context)
    .setInputType(InputType.ARTICLE)
    .setOutputType(OutputType.ONE_BULLET)
    .setLanguage(Language.ENGLISH)
    .build()
val summarizer = Summarization.getClient(summarizerOptions)

suspend fun prepareAndStartSummarization(context: Context) {
    // Check feature availability. Status will be one of the following:
    // UNAVAILABLE, DOWNLOADABLE, DOWNLOADING, AVAILABLE
    val featureStatus = summarizer.checkFeatureStatus().await()

    if (featureStatus == FeatureStatus.DOWNLOADABLE) {
        // Download feature if necessary.
        // If downloadFeature is not called, the first inference request will
        // also trigger the feature to be downloaded if it's not already
        // downloaded.
        summarizer.downloadFeature(object : DownloadCallback {
            override fun onDownloadStarted(bytesToDownload: Long) {}

            override fun onDownloadFailed(e: GenAiException) {}

            override fun onDownloadProgress(totalBytesDownloaded: Long) {}

            override fun onDownloadCompleted() {
                startSummarizationRequest(articleToSummarize, summarizer)
            }
        })
    } else if (featureStatus == FeatureStatus.DOWNLOADING) {
        // Inference request will automatically run once feature is
        // downloaded.
        // If Gemini Nano is already downloaded on the device, the
        // feature-specific LoRA adapter model will be downloaded very
        // quickly. However, if Gemini Nano is not already downloaded,
        // the download process may take longer.
        startSummarizationRequest(articleToSummarize, summarizer)
    } else if (featureStatus == FeatureStatus.AVAILABLE) {
        startSummarizationRequest(articleToSummarize, summarizer)
    }
}

fun startSummarizationRequest(text: String, summarizer: Summarizer) {
    // Create task request
    val summarizationRequest = SummarizationRequest.builder(text).build()

    // Start summarization request with streaming response
    summarizer.runInference(summarizationRequest) { newText ->
        // Show new text in UI
    }

    // You can also get a non-streaming response from the request
    // val summarizationResult = summarizer.runInference(summarizationRequest)
    // val summary = summarizationResult.get().summary
}

// Be sure to release the resource when no longer needed
// For example, on viewModel.onCleared() or activity.onDestroy()
summarizer.close()
For more examples of implementing the GenAI APIs, check out the official documentation and samples on GitHub.
Use cases
Here is some guidance on how to best use the current GenAI APIs:
For Summarization, consider:
- Conversation messages or transcripts involving two or more users
- Articles or documents of fewer than 4,000 tokens (about 3,000 English words). Summarizing just the first few paragraphs is usually enough to capture the most important information (see the sketch below).
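In that spirit, here is a minimal pre-truncation sketch that feeds only the first few thousand words to the summarizer from the sample above. The word cap and whitespace-based splitting are our assumptions for illustration, not part of the ML Kit API:

```kotlin
// Rough pre-truncation before summarization: keep roughly the first
// 3,000 English words (~4,000 tokens). The cap and the whitespace
// word split are illustrative assumptions, not part of the API.
fun truncateForSummarization(article: String, maxWords: Int = 3000): String {
    val words = article.split(Regex("\\s+"))
    return if (words.size <= maxWords) article
    else words.take(maxWords).joinToString(" ")
}

// Usage with the summarizer from the sample above:
// startSummarizationRequest(truncateForSummarization(articleToSummarize), summarizer)
```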
For the Proofreading and Rewriting APIs, consider using them during content creation for short content below 256 tokens, to help with tasks such as the following (a rewriting sketch follows this list):
- Refining messages in a particular tone, such as more formal or more casual
- Polishing personal notes for easier consumption later
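As referenced above, here is a minimal rewriting sketch in the same style as the summarization sample. The builder and constant names mirror that sample and ML Kit's documented pattern for these APIs; treat them as assumptions and verify against the official reference before use:

```kotlin
// Sketch only: names mirror the summarization sample above and the
// documented ML Kit GenAI API pattern; verify before use.
val rewriterOptions = RewriterOptions.builder(context)
    .setOutputType(RewriterOptions.OutputType.PROFESSIONAL) // assumed constant: a more formal tone
    .setLanguage(RewriterOptions.Language.ENGLISH)
    .build()
val rewriter = Rewriting.getClient(rewriterOptions)

fun rewriteMessage(message: String) {
    // Keep inputs below ~256 tokens, per the guidance above
    val request = RewritingRequest.builder(message).build()
    rewriter.runInference(request) { newText ->
        // Stream the rewritten text into the UI
    }
}
```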
For the Image Description API (sketched after this list), consider it for:
- Generating titles of images
- Generating metadata for image search
- Utilizing descriptions of images in use cases where the images themselves cannot be displayed, such as within a list of chat messages
- Generating alternative text to help visually impaired users better understand content as a whole
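For instance, a minimal image description sketch in the same style; the names mirror the summarization sample and the documented ML Kit pattern, so verify them against the official reference:

```kotlin
// Sketch only: names mirror the summarization sample above; verify before use.
val imageDescriberOptions = ImageDescriberOptions.builder(context).build()
val imageDescriber = ImageDescription.getClient(imageDescriberOptions)

fun describeImage(bitmap: Bitmap) {
    val request = ImageDescriptionRequest.builder(bitmap).build()
    imageDescriber.runInference(request) { newText ->
        // Accumulate the streamed description, e.g. for alt text
        // or image-search metadata
    }
}
```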
GenAI API in production
Envision is an app that verbalizes the visual world to help people who are blind or have low vision lead more independent lives. A common use case in the app is for users to take a picture of a document and have it read out loud. Using the GenAI Summarization API, Envision can now provide a concise summary of a captured document. This significantly enhances the experience, letting users quickly grasp the main points of a document and decide whether they want a more detailed reading, saving time and effort.

Supported devices
GenAI APIs are available on Android devices using optimized MediaTek Dimensity, Qualcomm Snapdragon, and Google Tensor platforms through AICore. For a comprehensive list of devices that support GenAI APIs, refer to our official documentation.
Learn more
Start implementing GenAI APIs in your Android apps today with guidance from our official documentation and samples on GitHub: the AI Catalog GenAI API samples with Compose and the ML Kit GenAI APIs Quickstart.
20 May 2025 5:57pm GMT
New in-car app experiences
Posted by Noam Gefen - Product Manager, Android for Cars, Sole Alborno - Product Manager, Gemini, and Ben Sagmoe - Developer Relations Engineer, Android for Cars
The in-car experience continues to evolve rapidly, and Google remains committed to pushing the boundaries of what's possible. At Google I/O 2025, we're excited to unveil the latest advancements for drivers, car manufacturers, and developers, furthering our goal of a safe, seamless, and helpful connected driving experience.
Today's car cabins are increasingly digital, offering developers exciting new opportunities with larger displays and more powerful computing. Android Auto is now supported in nearly all new cars sold, with almost 250 million compatible vehicles on the road.
We're also seeing significant growth in cars powered by Android Automotive OS with Google built-in. Over 50 models are currently available, with more launching this year. This growth is fueled by a thriving app ecosystem, including over 300 apps already available on the Play Store. These include apps optimized for a safe and seamless experience while driving, as well as entertainment apps for when you're parked and waiting in your car, many of which are adaptive mobile apps brought seamlessly to cars through the Car Ready Mobile Apps Program.
A vibrant developer community is essential to delivering these innovative in-car experiences utilizing the different screens within the car cabin. This past year, we've focused on key areas to help empower developers to build more differentiated experiences in cars across both platforms, as we embark on the Gemini era in cars.
Gemini for Cars
Exciting news for in-car experiences: Gemini, Google's advanced AI, is coming to vehicles! This unlocks a new era of safe and helpful interactions on the go.
Gemini enables natural voice conversations and seamless multitasking, empowering drivers to get more done simply by speaking naturally. Imagine effortlessly finding charging stations or navigating to a location pulled directly from an email, all with just your voice.
You can learn how to leverage Gemini's potential to create engaging in-car experiences in your app.
Navigation apps can integrate with Gemini using three core intent formats, allowing you to start navigation, display relevant search results, and execute custom actions, such as enabling users to report incidents like traffic congestion using their voice.
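The post doesn't spell out the three intent formats, but receiving a navigation request in a Car App Library app already follows a standard pattern; here is a minimal sketch assuming the existing androidx.car.app conventions (MyMapScreen and the geo URI are illustrative):

```kotlin
import android.content.Intent
import androidx.car.app.CarContext
import androidx.car.app.Screen
import androidx.car.app.Session

class MyNavigationSession : Session() {
    override fun onCreateScreen(intent: Intent): Screen {
        // Handle a navigation intent that launched the app
        handleNavigationIntent(intent)
        return MyMapScreen(carContext) // your map screen (hypothetical)
    }

    override fun onNewIntent(intent: Intent) {
        // Called when, e.g., a voice request asks the app to navigate
        handleNavigationIntent(intent)
    }

    private fun handleNavigationIntent(intent: Intent) {
        if (intent.action == CarContext.ACTION_NAVIGATE) {
            val destination = intent.data // e.g. geo:0,0?q=EV+charging+station
            // Start routing to the destination
        }
    }
}
```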
Gemini for cars will be rolling out in the coming months. Get ready to build the next generation of in-car AI experiences!
New developer programs and tools

Last year, we introduced car app quality tiers to inspire developers to create high-quality in-car experiences. By developing your app in compliance with the Car ready tier, you can bring video, gaming, or browser apps to cars with Google built-in, running while parked, with almost no additional effort. Learn more about Car Ready Mobile Apps.
Your app can shine further in the Car optimized and Car differentiated tiers, which unlock experiences while the car is in motion and during transitions between parked and driving modes, while utilizing the different screens within the modern car cabin. Check the car app quality guidelines for details.
To start, we've made some exciting improvements to the Car App Library across both Android Auto and cars with Google built-in:
- The Weather app category has graduated from beta: any developer can now publish weather apps to production tracks on both Android Auto and cars with Google built-in. Before you publish your app, check that it meets the quality guidelines for weather apps.
- Designing templated apps is easier than ever with the Car App Templates Design Kit we just published on Figma.
- Two new templates, the SectionedItemTemplate and MediaPlaybackTemplate, are now available in the Car App Library 1.8 alpha release for use on Android Auto. These templates are a great fit for building templated media apps, allowing for increased customization in layout and browsing structure.

On Android Auto, many new app categories and capabilities are now in beta:
- We are adding support for building media apps with the Car App Library, enabling media app developers to build the richer, more complete experiences users are used to on their phones. During beta, developers can build and publish media apps built using the Car App Library to internal testing and closed testing tracks. You can also express interest in being an early access partner to publish to production while the category is in beta. See Build a templated media app to learn more and get started.
- The communications category is in beta. We've simplified calling integration for calling apps by utilizing the CallsManager Jetpack API (see the sketch after this list). Together with the templates provided by the Car App Library, this enables communications apps to build features like full message history, an upcoming meetings list, rich in-call views, and more. During beta, developers can build and publish communications apps to internal testing and closed testing tracks. You can also express interest in being an early access partner to publish to production while the category is in beta.
- Games are now supported on Android Auto, while parked, on phones running Android 15 and above. You can already find popular titles like Angry Birds 2, Farm Heroes Saga, Candy Crush Soda Saga, and Beach Buggy Racing 2. To add Android Auto support to your own app, see Build games for cars and Add support for Android Auto to your parked app. The Games category is in beta, and developers can publish games to internal testing and closed testing tracks. You can also express interest in being an early access partner to publish to production while the category is in beta.
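As referenced in the communications item above, here is a minimal sketch of registering a call with the androidx.core.telecom CallsManager. The parameter names follow the Jetpack API, but treat the details as assumptions and consult the reference docs:

```kotlin
import android.content.Context
import android.net.Uri
import androidx.core.telecom.CallAttributesCompat
import androidx.core.telecom.CallsManager

suspend fun registerIncomingCall(context: Context) {
    val callsManager = CallsManager(context)
    // Declare baseline calling capabilities once at startup
    callsManager.registerAppWithTelecom(CallsManager.CAPABILITY_BASELINE)

    val attributes = CallAttributesCompat(
        displayName = "Alice",                   // illustrative caller
        address = Uri.parse("tel:+15551234567"), // illustrative number
        direction = CallAttributesCompat.DIRECTION_INCOMING,
    )

    callsManager.addCall(
        attributes,
        onAnswer = { /* start call audio */ },
        onDisconnect = { /* tear down the call */ },
        onSetActive = { /* resume after hold */ },
        onSetInactive = { /* put the call on hold */ },
    ) {
        // CallControlScope: observe and drive call state from here
    }
}
```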
Finally, we have further simplified the building, testing, and distribution experience for developers building apps for Android Automotive OS cars with Google built-in:
- The Games category is now in beta for Google built-in, allowing developers to distribute their adaptive games to cars. You can express interest in releasing to the production track. Google Play Games Services (v2) are now available on cars with Google built-in, enabling seamless login flows, cross-device save states, and more. Get started with Google Play Games Services to learn more.
- Distribution through Google Play is more flexible than ever. Apps in the parked categories can now ship the same APK or app bundle to cars with Google built-in as to phones, including through the mobile release track. Learn more about how to Distribute to cars.
- Android Automotive OS on Pixel Tablet is now generally available, giving you a physical device option for testing Android Automotive OS apps without buying or renting a car. Additionally, the most recent system images include support for acting as an Android Auto receiver, meaning you can use the same device to test both your app's experience on Android Auto and Android Automotive OS. Apply for access to these images.
The road ahead
You can look forward to more updates later this year, including:
- Video apps will be supported on Android Auto, starting with phones running Android 16 on select compatible cars. If your app is already adaptive, enabling your app experience while parked only requires minimal steps to distribute to cars.
- For Android Automotive OS cars running Android 14+ with Google built-in, we are working with car manufacturers to add additional app compatibility, to enable thousands of adaptive mobile apps in the next phase of the Car Ready Mobile Apps Program.
- Updated design documentation that visualizes car app quality guidelines and integration paths to simplify designing your app for cars.
- Google Play Services for cars with Google built-in are expanding to bring them on par with mobile, including:
  a. Passkeys and Credential Manager APIs for a more seamless user sign-in experience (a sketch follows this list).
  b. Quick Share, which will enable easy cross-device sharing from phone to car.
- Audio while driving for video apps: For cars with Google built-in, we're working with OEMs to enable audio-only listening for video apps while driving. Sign up to express interest in participating in the early access program. If you'd like to prepare for this feature's general availability, you can work through the audio while driving codelab or check out the Build video apps for Android Automotive OS page.
- Firebase Test Lab is adding Android Automotive OS devices to its device catalog, making it possible to test on real car hardware without needing to buy or rent a car. Sign up to express interest in becoming an early access partner.
- Pre-launch reports for Android Automotive OS are coming soon to the Play Console, helping you ensure app quality before distributing your app to cars.
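As a preview of item (a) in the Google Play Services expansion above, this is what a Credential Manager sign-in looks like on mobile today; we're assuming the same Jetpack API surface is what carries over to cars:

```kotlin
import android.app.Activity
import androidx.credentials.CredentialManager
import androidx.credentials.GetCredentialRequest
import androidx.credentials.GetPasswordOption
import androidx.credentials.GetPublicKeyCredentialOption

suspend fun signIn(activity: Activity, passkeyRequestJson: String) {
    val credentialManager = CredentialManager.create(activity)
    val request = GetCredentialRequest(
        listOf(
            GetPasswordOption(),                              // saved passwords
            GetPublicKeyCredentialOption(passkeyRequestJson), // passkeys (WebAuthn JSON)
        )
    )
    // Shows the system sign-in UI and suspends until the user picks a credential
    val response = credentialManager.getCredential(activity, request)
    // Inspect response.credential: PasswordCredential, PublicKeyCredential, ...
}
```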
Be sure to keep up to date on these features and more through goo.gle/cars-whats-new as we continuously invest in the future of Android in the car. Stay tuned for more resources to help you build innovative and engaging experiences for drivers and passengers.
Ready to publish your car app? Check our guidance for distributing to cars.
Explore this announcement and all Google I/O 2025 updates on io.google starting May 22.
20 May 2025 5:56pm GMT
I/O 2025: What's new in Google Play
Posted by Paul Feng, VP of Product Management, Google Play
At Google Play, we're dedicated to helping people discover experiences they'll love, while empowering developers like you to bring your ideas to life and build successful businesses.
At this year's Google I/O, we unveiled the latest ways we're empowering your success with new tools that provide robust testing and actionable insights. We also showcased how we're continuing to build a content-rich Play Store that fosters repeat engagement alongside new subscription capabilities that streamline checkout and reduce churn.
Check out all the exciting developments from I/O that will help you grow your business on Google Play. Continue reading or watch the session to dive in.
Helping you succeed every step of the way
Last month, we introduced our latest Play Console updates focused on improving quality and performance. A redesigned app dashboard centered around four developer objectives (Test and release, Monitor and improve, Grow users, Monetize) and new Android vitals metrics offer quick insights and actionable suggestions to proactively improve the user experience.
Get more actionable insights with new Play Console overview pages
Building on these updates, we've launched dedicated overview pages for two developer objectives: Test and release and Monitor and improve. These new pages bring together more objective-related metrics, relevant features, and a "Take action" section with contextual, dynamic advice. Overview pages for Grow and Monetize will be coming soon.
Halt fully-rolled out releases when needed
Historically, a release at 100% rollout meant there was no turning back, leaving users stuck with a flawed version until a new update rolled out. Soon, you'll be able to halt fully-live releases through Play Console and the Publishing API, stopping the distribution of problematic versions to new users.
Optimize your store listings with better management tools and metrics
We launched two tools to enhance your store listings. The asset library makes it easy to upload, edit, and view your visual assets. Upload them from Google Drive, organize with tags, and crop for repurposing. And with new open metrics, you gain deeper insights into listing performance so you can better understand how they attract, engage, and re-engage users.
Stay ahead of threats with the Play Integrity API
We're committed to robust security and preventing abuse so you can thrive on Play's trusted platform. The Play Integrity API continuously evolves to combat emerging threats, with these recent enhancements:
- Stronger abuse detection for all developers that leverages the latest Android hardware security, with no developer effort required.
- Device security update checks to safeguard your app's sensitive actions, like transfers or data access (a basic verdict request is sketched after this list).
- A public beta for device recall, which enables you to detect if a device is being reused for abuse or repeated actions, even after a device reset. You can express interest in this beta.
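These enhancements don't change the basic integration; as referenced above, here is a minimal sketch of requesting a verdict with the Play Integrity client, where generating and verifying the nonce is your backend's job:

```kotlin
import android.content.Context
import com.google.android.play.core.integrity.IntegrityManagerFactory
import com.google.android.play.core.integrity.IntegrityTokenRequest

fun requestIntegrityVerdict(context: Context, nonce: String) {
    // Classic Play Integrity request; the server-generated nonce binds
    // the verdict to a single sensitive action (e.g. a transfer)
    val integrityManager = IntegrityManagerFactory.create(context)
    integrityManager
        .requestIntegrityToken(
            IntegrityTokenRequest.builder()
                .setNonce(nonce)
                .build()
        )
        .addOnSuccessListener { response ->
            // Send response.token() to your server for decryption and verification
        }
        .addOnFailureListener { e ->
            // Handle API errors (retry with backoff, degrade gracefully)
        }
}
```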
Unlocking more discovery and engagement for your apps and their content
Last year, we shared our vision for a content-rich Google Play, and it has already delivered strong results. Year-over-year, Apps Home has seen a more than 25% increase in average monthly visitors, with apps seeing 10% growth in acquisitions and double-digit growth in app spend for those monetizing on Google Play. Building on that vision, we're introducing even more updates to elevate your discovery and engagement, both on and off the store.
For example, curated spaces, launched last year, celebrate seasonal interests like football (soccer) in Brazil and cricket in India, and evergreen interests like comics in Japan. By adding daily content, such as match highlights, promotions, and editorial articles, directly on the Apps Home, these spaces foster discovery and engagement. Curated spaces are a hit, with over 920,000 highly engaged users in Japan returning to the comics space monthly. Building on this momentum, we are expanding to more locations and categories this year.
We're launching new topic browse pages that feature timely, relevant, and visually engaging content. Users can find them throughout the Store, including Apps Home, store listing pages, and search. These pages debut this month in the US with Media & Entertainment, showcasing over 100,000 shows, movies, and select sports. More localized topic pages will roll out globally later this year.
We're expanding Where to Watch to more markets, including the UK, Korea, Indonesia, and Mexico, to help users find and deep-link directly into their subscribed apps for movies and TV. Since launching in the US in November 2024, we've seen promising results: People who view app content through Where to Watch return to Play more frequently and increase their content search activity by 30%.
We're also enhancing how your content is displayed on the Play Store. Starting this July, all app developers can add a hero content carousel and a YouTube playlist carousel to their store listings. These formats will help showcase your best content and drive greater user engagement and discovery.
For apps best experienced through sound, we're launching audio samples on the Apps Home. A simple tap offers users a brief escape into your audio content. In early testing, audio samples made users 3x more likely to install or open an app! This feature is now available for all Health & Wellness app developers with users in the US, with more categories and markets coming soon. You can express your interest in promoting audio content.
Helping you take advantage of deeper engagement on Play, on and off the Store
Last year, we introduced Engage SDK, a unified solution to deliver personalized content and guide users to relevant in-app experiences. Integrating it unlocks surfaces like Collections, our immersive full-screen experience bringing content directly to the user's home screen.
We're rolling out updates to expand your content's reach even further:
- Engage SDK content is coming to the Play Store this summer, in addition to existing spaces like Collections and Entertainment Space on select Android tablets.
- New content categories are now supported, starting today with Travel.
- Collections are rolling out globally to Google Play markets starting today, including Brazil, India, Indonesia, Japan, and Mexico.
Integrate with Engage SDK today to take advantage of this new expansion and boost re-engagement. Try our codelab to test the ease of publishing content with Engage SDK and express interest in the developer preview.
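To give a feel for the integration, here is a minimal sketch of publishing a recommendation cluster with Engage SDK. We're assuming the AppEngagePublishClient surface from the SDK docs, and the Travel entity is passed in as a parameter since entity builders vary by category:

```kotlin
import android.content.Context
import com.google.android.engage.common.datamodel.Entity
import com.google.android.engage.common.datamodel.RecommendationCluster
import com.google.android.engage.service.AppEngagePublishClient
import com.google.android.engage.service.PublishRecommendationClustersRequest

fun publishTravelRecommendations(context: Context, travelEntity: Entity) {
    val client = AppEngagePublishClient(context)
    // Only publish when the Engage service is present on this device
    client.isServiceAvailable.addOnSuccessListener { available ->
        if (available) {
            val request = PublishRecommendationClustersRequest.Builder()
                .addRecommendationCluster(
                    RecommendationCluster.Builder()
                        .setTitle("Trips for you") // illustrative cluster title
                        .addEntity(travelEntity)
                        .build()
                )
                .build()
            client.publishRecommendationClusters(request)
        }
    }
}
```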
Maximizing your revenue with subscriptions enhancements
With over a quarter-billion subscriptions, Google Play is one of the world's largest subscription platforms. We're committed to helping you turn engaged users into revenue growth by continually enhancing our tools to meet evolving customer needs.
To streamline your purchase flow, we're introducing multi-product checkout for subscriptions. This lets you sell subscription add-ons alongside base subscriptions, all under a single, aligned payment schedule. Users get a simplified experience with one price and one transaction, while you gain more control over how subscribers upgrade, downgrade, or manage their add-ons.
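Mechanically, this appears to build on the existing Play Billing flow for multiple products; here is a sketch under the assumption that an add-on rides along as an additional ProductDetailsParams in a single BillingFlowParams (confirm against the multi-product checkout documentation):

```kotlin
import android.app.Activity
import com.android.billingclient.api.BillingClient
import com.android.billingclient.api.BillingFlowParams
import com.android.billingclient.api.ProductDetails

fun launchMultiProductCheckout(
    activity: Activity,
    billingClient: BillingClient,
    baseSub: ProductDetails,
    addOn: ProductDetails,
    baseOfferToken: String,
    addOnOfferToken: String,
) {
    // One flow, one transaction: the base subscription plus its add-on
    val params = BillingFlowParams.newBuilder()
        .setProductDetailsParamsList(
            listOf(
                BillingFlowParams.ProductDetailsParams.newBuilder()
                    .setProductDetails(baseSub)
                    .setOfferToken(baseOfferToken)
                    .build(),
                BillingFlowParams.ProductDetailsParams.newBuilder()
                    .setProductDetails(addOn)
                    .setOfferToken(addOnOfferToken)
                    .build(),
            )
        )
        .build()
    billingClient.launchBillingFlow(activity, params)
}
```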
To help you retain more of your subscribers, we're now showcasing subscription benefits in more places across Play - including the Subscriptions Center, in reminder emails, and during purchase and cancellation flows. This increased visibility has already reduced voluntary churn by 2%. Be sure to enter your subscription benefits in Play Console so you can leverage this powerful new capability.
Reducing involuntary churn is a key factor in optimizing your revenue. When payment methods unexpectedly decline, users might unintentionally cancel. Now, instead of immediate cancellation, you can choose a grace period (up to 30 days) or an account hold (up to 60 days). Developers who increased the decline recovery period from 30 to 60 days saw an average 10% reduction in involuntary churn for renewals.
On top of this, we're expanding our commitment to get more buyers ready for purchases throughout their entire journey. This includes prompting users to set up payment methods and verification right at device setup. After setup, we've integrated prompts into highly visible areas like the Play and Google account menus. And as always, we're continuously enabling payments in more markets and expanding payment options. Plus, our AI models now help optimize in-app transactions by suggesting the right payment method at the right time, and we're bringing buyers back with effective cart abandonment reminders.
Grow your business on Google Play
Our latest updates reinforce our commitment to fostering a thriving Google Play ecosystem. From enhanced discovery and robust tools to new monetization avenues, we're empowering you to innovate and grow. We're excited for the future we're building together and encourage you to use these new capabilities to create even more impactful experiences. Thank you for being an essential part of the Google Play community.
Explore this announcement and all Google I/O 2025 updates on io.google starting May 22.
20 May 2025 5:55pm GMT