02 Jul 2025
TalkAndroid
Honor Magic V5 Mixes “Thinnest Foldable Ever” With Flagship Specs
Honor has reclaimed its crown of thinnest foldable in the world, though only with one colorway.
02 Jul 2025 4:30pm GMT
Board Kings Free Rolls – Updated Every Day!
Run out of rolls for Board Kings? Find links for free rolls right here, updated daily!
02 Jul 2025 4:18pm GMT
Coin Tales Free Spins – Updated Every Day!
Tired of running out of Coin Tales Free Spins? We update our links daily, so you won't have that problem again!
02 Jul 2025 4:17pm GMT
Coin Master Free Spins & Coins Links
Find all the latest Coin Master free spins right here! We update daily, so be sure to check in daily!
02 Jul 2025 4:15pm GMT
Monopoly Go Events Schedule Today – Updated Daily
Current active events are Main Event: Golden Hour Wonders, Tournament: Breeze Bash, and Special Event: Heatwave Racers.
02 Jul 2025 4:14pm GMT
Family Island Free Energy Links (Updated Daily)
Tired of running out of energy on Family Island? We have all the latest Family Island Free Energy links right here, and we update these daily!
02 Jul 2025 4:07pm GMT
Crazy Fox Free Spins & Coins (Updated Daily)
If you need free coins and spins in Crazy Fox, look no further! We update our links daily to bring you the newest working links!
02 Jul 2025 4:05pm GMT
Match Masters Free Gifts, Coins, And Boosters (Updated Daily)
Tired of running out of boosters for Match Masters? Find new Match Masters free gifts, coins, and booster links right here! Updated Daily!
02 Jul 2025 4:03pm GMT
Solitaire Grand Harvest – Free Coins (Updated Daily)
Get Solitaire Grand Harvest free coins now, new links added daily. Only tested and working links, complete with a guide on how to redeem the links.
02 Jul 2025 4:01pm GMT
Dice Dreams Free Rolls – Updated Daily
Get the latest Dice Dreams free rolls links, updated daily! Complete with a guide on how to redeem the links.
02 Jul 2025 4:00pm GMT
Ultra Mobile, Another T-Mobile Prepaid Brand, Refreshes Its Plans
Ultra's new plans will give you more data and better international-focused features.
02 Jul 2025 3:30pm GMT
This Series Is Outperforming Yellowstone: Inside Paramount+’s Latest Hit Series with Tom Hardy
Tom Hardy's gangster saga "Mobland" has become Paramount+'s latest sensation, earning an impressive 4.2/5 rating from viewers. The…
02 Jul 2025 3:30pm GMT
Nothing Unveils Its Flagship Phone 3, Alongside Headphone 1
Nothing achieves two firsts: a flagship phone and over-ear headphones.
02 Jul 2025 2:08pm GMT
I Tried The SIHOO M56C Office Chair For 3 Weeks – Is It Worth It?
Looking for a decent ergonomic chair without breaking the bank? The SIHOO M56C might be exactly what you…
02 Jul 2025 9:51am GMT
Never Get Lost Again: Instant Location Sharing Tips for Android Users
Sending your location through Android has become an essential way to coordinate meetups or share your whereabouts with…
02 Jul 2025 6:30am GMT
Idle Office Tycoon Codes – July 2025
Find all the latest Idle Office Tycoon codes here! Keep reading for more!
02 Jul 2025 2:31am GMT
01 Jul 2025
Android Developers Blog
Level up your game: Google Play's Indie Games Fund in Latin America returns for its 4th year
Posted by Daniel Trócoli - Google Play Partnerships
We're thrilled to announce the return of Google Play's Indie Games Fund (IGF) in Latin America for its fourth consecutive year! This year, we're once again committing $2 million to empower another 10 indie game studios across the region. With this latest round of funding, our total investment in Latin American indie games will reach an impressive $8 million USD.
Since its inception, the IGF has been a cornerstone of our commitment to fostering growth for developers of all sizes on Google Play. We've seen firsthand the transformative impact this support has had, enabling studios to expand their teams, refine their creations, and reach new audiences globally.
What's in store for the Indie Games Fund in 2025?
Just like in previous years, selected small game studios based in Latin America will receive a share of the $2 million fund, along with support from the Google Play team.
As Vish Game Studio, a previously selected studio, shared: "The IGF was a pivotal moment for our studio, boosting us to the next level and helping us form lasting connections." We believe in fostering these kinds of pivotal moments for all our selected studios.
The program is open to indie game developers who have already launched a game, whether it's on Google Play, another mobile platform, PC, or console. Each selected recipient will receive between $150,000 and $200,000 to help them elevate their game and realize their full potential.
Check out all eligibility criteria and apply now! Applications will close at 12:00 PM BRT on July 31, 2025. To give your application the best chance, remember that priority will be given to applications received by 12:00 PM BRT on July 15, 2025.

01 Jul 2025 2:00pm GMT
30 Jun 2025
Android Developers Blog
Top announcements to know from Google Play at I/O ‘25
Posted by Raghavendra Hareesh Pottamsetty - Google Play Developer and Monetization Lead
At Google Play, we're dedicated to helping people discover experiences they'll love, while empowering developers like you to bring your ideas to life and build successful businesses. This year, Google I/O was packed with exciting announcements designed to do just that. For a comprehensive overview of everything we shared, be sure to check out our blog post recapping What's new in Google Play.
Today, we'll dive specifically into the latest updates designed to help you streamline your subscriptions offerings and maximize your revenue on Play. Get a quick overview of these updates in our video below, or read on for more details.
#1: Subscriptions with add-ons: Streamlining subscriptions for you and your users
We're excited to announce multi-product checkout for subscriptions, a new feature designed to streamline your purchase flow and offer a more unified experience for both you and your users. This enhancement allows you to sell subscription add-ons right alongside your base subscriptions, all while maintaining a single, aligned payment schedule.
The result? A simplified user experience with just one price and one transaction, giving you more control over how your subscribers upgrade, downgrade, or manage their add-ons. Learn more about how to create add-ons.
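For context, here is a minimal sketch of launching a single checkout that contains a base subscription plus an add-on with the Play Billing Library. It assumes the existing ProductDetailsParams pattern extends to add-ons; the product details, offer tokens, and function name are placeholders rather than the documented add-ons API.
Kotlin
import android.app.Activity
import com.android.billingclient.api.BillingClient
import com.android.billingclient.api.BillingFlowParams
import com.android.billingclient.api.ProductDetails

// Sketch only: one purchase flow covering a base subscription and an add-on,
// so the user sees a single price and a single transaction.
fun launchBasePlusAddOn(
    billingClient: BillingClient,
    activity: Activity,
    baseDetails: ProductDetails,      // from queryProductDetailsAsync
    addOnDetails: ProductDetails,     // from queryProductDetailsAsync
    baseOfferToken: String,           // placeholder offer token
    addOnOfferToken: String           // placeholder offer token
) {
    val productParams = listOf(
        BillingFlowParams.ProductDetailsParams.newBuilder()
            .setProductDetails(baseDetails)
            .setOfferToken(baseOfferToken)
            .build(),
        BillingFlowParams.ProductDetailsParams.newBuilder()
            .setProductDetails(addOnDetails)
            .setOfferToken(addOnOfferToken)
            .build()
    )
    val flowParams = BillingFlowParams.newBuilder()
        .setProductDetailsParamsList(productParams)
        .build()
    billingClient.launchBillingFlow(activity, flowParams)
}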

#2: Showcasing benefits in more places across Play: Increasing visibility and value
We're also making it easier for you to retain more of your subscribers by showcasing subscription benefits in more key areas across Play. This includes the Subscriptions Center, within reminder emails, and even during the purchase and cancellation processes. This increased visibility has already proved effective, reducing voluntary churn by 2%. To take advantage of this powerful new capability, be sure to enter your subscription benefits details in Play Console.

#3: New grace period and account hold duration: Decreasing involuntary churn
Another way we're helping you maximize your revenue is by extending grace periods and account hold durations to tackle unintended subscription losses, which often occur when payment methods unexpectedly decline.
Now, you can customize both the grace period (when users retain access while renewal is attempted) and the account hold period (when access is suspended). You can set a grace period of up to 30 days and an account hold period of up to 60 days. However, the total combined recovery period (grace period + account hold) cannot exceed 60 days.
This means instead of an immediate cancellation, your users have a longer window to update their payment information. Developers who've already extended their decline recovery period from 30 to 60 days have seen impressive results, with an average 10% reduction in involuntary churn for renewals. Ready to see these results for yourself? Adjust your grace period and account hold durations in Play Console today.

But that's not all. We're constantly investing in ways to help you optimize conversion throughout the entire buyer lifecycle. This includes boosting purchase-readiness by prompting users to set up payment methods and verification right from device setup, and we've integrated these prompts into highly visible areas like the Play and Google account menus. Beyond that, we're continuously enabling payments in more markets and expanding payment options. Our AI models are even working to optimize in-app transactions by suggesting the right payment method at the right time, and we're bringing buyers back with effective cart abandonment reminders.
That's it for our top announcements from Google I/O 2025, but there are so many more updates to discover from this year's event. Check out What's new in Google Play to learn more, and to dive deeper into the session details, view the Google Play I/O playlist for all the announcements.

30 Jun 2025 4:00pm GMT
Get ready for the next generation of gameplay powered by Play Games Services
Posted by Chris Wilk - Group Product Manager, Games on Google Play
To captivate players and grow your game, you need tools that enhance discovery and retention. Play Games Services (PGS) is your key to unlocking a suite of services that connect you with over 2 billion monthly active players. PGS empowers you to drive engagement through features like achievements and increase retention with promotions tailored to each player's gameplay progress. These tools are designed to help you deliver relevant and compelling content that keeps players coming back.
We are continuously evolving gaming on Play, and this year, we're introducing more PGS-powered experiences to give you deeper player insights and greater visibility in the Play Store. To access these latest advancements and ensure continued functionality, you must migrate from PGS v1 to PGS v2 by May 2026. Let's take a closer look at what's new:
Drive discovery and engagement by rewarding gameplay progress
We're fundamentally transforming how achievements work in the Play Store, making them a key driver for a great gaming experience. Now deeply embedded across the store, achievements are easily discoverable via search filters and game detail pages, and further drive engagement when offered with Play Points.
You should offer at least 15 achievements spread across the lifetime of the game, using incremental achievements to show progress. Games that enable players to earn at least 5 achievements in the first 2 hours of gameplay are most successful in driving deeper engagement*.
The most engaging titles offer 40 or more achievements with diverse types of goals including leveling up characters, game progression, hidden surprises, or even failed attempts. To help you get the most out of achievements, we've made it easier to create achievements with bulk configuration in Play Console.
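To make the achievement guidance above concrete, here is a minimal sketch using the Play Games Services v2 achievements client; the achievement IDs are placeholders you would define in Play Console.
Kotlin
import android.app.Activity
import com.google.android.gms.games.PlayGames

// Sketch: report progress with PGS v2 achievements.
fun reportProgress(activity: Activity, levelsCleared: Int) {
    val achievements = PlayGames.getAchievementsClient(activity)
    // Incremental achievement: steps accumulate toward a long-term goal.
    achievements.increment("ACH_CLEAR_100_LEVELS", 1)
    // One-shot achievement that players can earn early in the first session.
    if (levelsCleared == 1) {
        achievements.unlock("ACH_FIRST_LEVEL")
    }
}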
For eligible titles*, Play activates quests to reward players for completing achievements - for example with Play Points. Supercell activated quests for Hay Day, leading to an average 177% uplift in installs*. You can tailor your quests to achieve specific campaign objectives, whether it's attracting high-value players or driving spend through repeated engagement, all while making it easy to jump back into your game.

Increase retention with tailored promotions
Promotional content is a vital tool for you to highlight new events, major content updates, and exciting offers within your game. It turns Play into a direct marketing channel to re-engage with your players. We've enhanced audience targeting capabilities so you can tailor your content to reach and convert the most relevant players.
By integrating PGS, you can use the Play Grouping API to create custom segments based on gameplay context*. Using this feature, Kabam launched promotional content to custom audiences for Marvel Contest of Champions, resulting in a 4x increase in lapsed user engagement*.

Start implementing PGS features today
PGS is designed to make the sign-in experience more seamless for players, automatically syncing their progress and identity across Android devices. With a single tap, they can pick up where they left off or start a new game from any screen. Whether you use your own sign-in solution, services from third parties, or a combination of both, we've made it easier to integrate Play Games Services with the Recall API.
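As an illustration, here is a minimal sketch of requesting a Recall session ID so your own sign-in system can be linked with a player's Play Games Services profile; the backend call that exchanges the session ID with the Games APIs is omitted, and error handling is left out for brevity.
Kotlin
import android.app.Activity
import com.google.android.gms.games.PlayGames

// Sketch: obtain a Recall session ID and hand it to your backend for account linking.
fun linkWithRecall(activity: Activity, sendToBackend: (String) -> Unit) {
    PlayGames.getRecallClient(activity)
        .requestRecallAccess()
        .addOnSuccessListener { recallAccess ->
            // Your server exchanges this session ID via the Recall endpoints.
            sendToBackend(recallAccess.sessionId)
        }
}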
To ensure a consistent sign-in experience for all players, we're phasing out PGS v1.
All games currently using PGS v1 must migrate to PGS v2 by May 2026. After this date, you will no longer be able to publish or update games that use the v1 SDK.
Below you'll find the timeline to plan your migration:
Migration guide:
- May 2025: As announced at I/O, new apps using PGS v1 can no longer be published. Existing apps can still release updates with v1 and the APIs remain functional, but you'll need to migrate by May 2026; the APIs will be fully shut down in 2028.
- May 2026: The APIs remain functional for users but are no longer included in the SDK. New app versions compiled with the most recent SDK will fail to build if your code still uses the removed APIs. If your app still relies on any of these APIs, migrate to PGS v2 as soon as possible.
- Q3 2028: The APIs are no longer functional and will fail when an app sends a request.
Looking ahead, more opportunities powered by PGS
Coming soon, players will be able to generate unique, AI-powered avatars within their profiles - creating fun, diverse representations of their gaming selves. With PGS integration, developers can allow players to carry over their avatar within the game. This enables players to showcase their gaming identity across the entire gameplay experience, creating an even stronger motivation to re-engage with your game.

PGS is the foundational tool for maximizing your business growth on Play, enabling you to tailor your content for each player and access the latest gameplay innovations on the platform. Stay tuned for more PGS features coming this year to provide an even richer player experience.

30 Jun 2025 3:45pm GMT
25 Jun 2025
Android Developers Blog
How Mecha BREAK is driving PC-only growth on Google Play Games
Posted by Kosuke Suzuki - Director, Games on Google Play
On July 1, Amazing Seasun Games is set to unveil its highly anticipated action shooting game, Mecha BREAK, with a multiplatform launch across PC and console. A key to their PC growth strategy is Google Play Games on PC, enabling the team to build excitement with a pre-registration campaign, maximize revenue with PC earnback, and ensure a secure, top-tier experience on PC.
Building momentum with pre-registration
With a legacy of creating high-quality games since 1995, Amazing Seasun Games has already seen Mecha BREAK attract over 3.5 million players during the last beta test. To build on this momentum, the studio is bringing their game to Google Play Games on PC to open pre-registration and connect with its massive player audience.
"We were excited to launch on Google Play Games on PC. We want to make sure all players can enjoy the Mecha BREAK experience worldwide."
- Kris Kwok, Executive Producer of Mecha BREAK and CEO of Amazing Seasun Games

Accelerating growth with the Native PC program
Mecha BREAK's launch strategy includes leveraging the native PC earnback, a program that gives native PC developers the opportunity to unlock up to 15% in additional earnback.
Beyond earnback, the program offers comprehensive support for PC game development, distribution, and growth. Developers can manage PC builds in Play Console, simplifying the process of packaging PC versions, configuring releases, and managing store listings. Now, you can also view PC-specific sales reports, providing a more precise analysis of your game's financial performance.
Delivering a secure and high quality PC experience
Mecha BREAK is designed to deliver an intense and high-fidelity experience on PC. Built on a cutting-edge, proprietary 3D engine, the game offers players three unique modes of fast-paced combat on land and in the air.
- Diverse combat styles: Engage in six-on-six hero battles, three-on-three matches, or the unique PvPvE extraction mode "Mashmak".
- Free customization options: Create personalized characters with a vast array of colors, patterns and gameplay styles, from close-quarters brawlers to long-range tactical units.

The decision to integrate with Google Play Games on PC was driven by the platform's robust security infrastructure, including tools such as Play Integrity API, supporting large-scale global games like Mecha BREAK.
"Mecha BREAK's multiplayer setting made Google Play Games a strong choice, as we expect exceptional operational stability and performance. The platform also offers advanced malware protection and anti-cheat capabilities."
- Kris Kwok, Executive Producer of Mecha BREAK and CEO of Amazing Seasun Games
Bring your game to Google Play Games on PC
This year, the native PC program is open to all PC games, including PC-only titles. If you're ready to expand your game's reach and accelerate its growth, learn more about the eligibility requirements and how to join the program today.
25 Jun 2025 5:00pm GMT
23 Jun 2025
Android Developers Blog
Top 3 updates for Android developer productivity at Google I/O ‘25
Posted by Meghan Mehta - Android Developer Relations Engineer
#1 Agentic AI is available for Gemini in Android Studio
Gemini in Android Studio is the AI-powered coding companion that makes you more productive at every stage of the dev lifecycle. At Google I/O 2025 we previewed new agentic AI experiences: Journeys for Android Studio and Version Upgrade Agent. These innovations make it easier for you to build and test code. We also announced Agent Mode, which was designed to handle complex, multi-stage development tasks that go beyond typical AI assistant capabilities, invoking multiple tools to accomplish tasks on your behalf. We're excited to see how you leverage these agentic AI experiences which are now available in the latest preview version of Android Studio on the canary release channel.
You can also use Gemini to automatically generate Jetpack Compose previews, as well as transform UI code using natural language, saving you time and effort. Give Gemini more context by attaching images and project files to your prompts, so you can get more relevant responses. And if you're looking for enterprise-grade privacy and security features backed by Google Cloud, Gemini in Android Studio for businesses is now available. Developers and admins can unlock these features and benefits by subscribing to Gemini Code Assist Standard or Enterprise editions.
#2 Build better apps faster with the latest stable release of Jetpack Compose
Compose is our recommended UI toolkit for Android development, used by over 60% of the top 1K apps on Google Play. We released a new version of our Jetpack Navigation library: Navigation 3, which has been rebuilt from the ground up to give you more flexibility and control over your implementation. We unveiled the new Material 3 Expressive update which provides tools to enhance your product's appeal by harnessing emotional UX, making it more engaging, intuitive, and desirable for your users. The latest stable Bill of Materials (BOM) release for Compose adds new features such as autofill support, auto-sizing text, visibility tracking, animate bounds modifier, accessibility checks in tests, and more! This release also includes significant rewrites and improvements to multiple sub-systems including semantics, focus and text optimizations.
These optimizations are available to you with no code changes other than upgrading your Compose dependency. If you're looking to try out new Compose functionality, the alpha BOM offers new features that we're working on including pausable composition, updates to LazyLayout prefetch, context menus, and others. Finally, we've added Compose support to CameraX and Media3, making it easier to integrate camera capture and video playback into your UI with Compose idiomatic components.
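If you want to pick up these releases, here is a minimal Gradle (Kotlin DSL) sketch of adopting the Compose BOM; the version strings are illustrative, and the alpha BOM artifact name is an assumption based on the stable one.
Kotlin
dependencies {
    // The BOM pins compatible versions for every Compose artifact you list without a version.
    implementation(platform("androidx.compose:compose-bom:2025.06.00")) // illustrative version
    implementation("androidx.compose.material3:material3")
    implementation("androidx.compose.ui:ui")
    // To try in-development features, swap in the alpha BOM instead (assumed artifact name):
    // implementation(platform("androidx.compose:compose-bom-alpha:2025.06.00"))
}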
#3 The new Kotlin Multiplatform (KMP) shared module template helps you share business logic
KMP enables teams to deliver quality Android and iOS apps with less development time. The KMP ecosystem continues to grow: last year alone, over 900 new KMP libraries were published. At Google I/O we released a new Android Studio KMP shared module template to help you craft and manage business logic, updated Jetpack libraries and new codelabs (Getting started with Kotlin Multiplatform and Migrating your Room database to KMP) to help you get started with KMP. We also shared additional announcements at KotlinConf.
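To illustrate the kind of shared business logic the template scaffolds, here is a minimal expect/actual sketch; the files generated by the Android Studio template may differ.
Kotlin
// commonMain: shared business logic used by both the Android and iOS apps.
expect fun platformName(): String

class GreetingRepository {
    fun greeting(): String = "Hello from ${platformName()}"
}

// androidMain: Android-specific implementation.
actual fun platformName(): String = "Android"

// iosMain: iOS-specific implementation.
actual fun platformName(): String = "iOS"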
Learn more about what we announced at Google I/O 2025 to help you build better apps, faster.
23 Jun 2025 5:01pm GMT
Agentic AI takes Gemini in Android Studio to the next level
Posted by Sandhya Mohan - Product Manager, and Jose Alcérreca - Developer Relations Engineer
Software development is undergoing a significant evolution, moving beyond reactive assistants to intelligent agents. These agents don't just offer suggestions; they can create execution plans, utilize external tools, and make complex, multi-file changes. This results in a more capable AI that can iteratively solve challenging problems, fundamentally changing how developers work.
At Google I/O 2025, we offered a glimpse into our work on agentic AI in Android Studio, the integrated development environment (IDE) focused on Android development. We showcased that by combining agentic AI with the built-in portfolio of tools inside of Android Studio, the IDE is able to assist you in developing Android apps in ways that were never possible before. We are now incredibly excited to announce the next frontier in Android development with the availability of 'Agent Mode' for Gemini in Android Studio.
These features are available in the latest Android Studio Narwhal Feature Drop Canary release, and will be rolled out to business tier subscribers in the coming days. As with all new Android Studio features, we invite developers to provide feedback to direct our development efforts and ensure we are creating the tools you need to build better apps, faster.
Agent Mode
Gemini in Android Studio's Agent Mode is a new experimental capability designed to handle complex development tasks that go beyond what you can experience by just chatting with Gemini.
With Agent Mode, you can describe a complex goal in natural language - from generating unit tests to complex refactors - and the agent formulates an execution plan that can span multiple files in your project and executes under your direction. Agent Mode uses a range of IDE tools for reading and modifying code, building the project, searching the codebase and more to help Gemini complete complex tasks from start to finish with minimal oversight from you.
To use Agent Mode, click Gemini in the sidebar, then select the Agent tab, and describe a task you'd like the agent to perform. Some examples of tasks you can try in Agent Mode include:
- Build my project and fix any errors
- Extract any hardcoded strings used across my project and migrate to strings.xml
- Add support for dark mode to my application
- Given an attached screenshot, implement a new screen in my application using Material 3
The agent then suggests edits and iteratively fixes bugs to complete tasks. You can review, accept, or reject the proposed changes along the way, and ask the agent to iterate on your feedback.

While powerful, you are firmly in control, with the ability to review, refine and guide the agent's output at every step. When the agent proposes code changes, you can choose to accept or reject them.

Additionally, you can enable "Auto-approve" if you are feeling lucky 😎 - especially useful when you want to iterate on ideas as rapidly as possible.
You can delegate routine, time-consuming work to the agent, freeing up your time for more creative, high-value work. Try out Agent Mode in the latest preview version of Android Studio - we look forward to seeing what you build! We are investing in building more agentic experiences for Gemini in Android Studio to make your development even more intuitive, so you can expect to see more agentic functionality over the next several releases.

Supercharge Agent Mode with your Gemini API key

The default Gemini model has a generous no-cost daily quota with a limited context window. However, you can now add your own Gemini API key to expand Agent Mode's context window to a massive 1 million tokens with Gemini 2.5 Pro.
A larger context window lets you send more instructions, code and attachments to Gemini, leading to even higher quality responses. This is especially useful when working with agents, as the larger context provides Gemini 2.5 Pro with the ability to reason about complex or long-running tasks.

To enable this feature, get a Gemini API key by navigating to Google AI Studio. Sign in and get a key by clicking on the "Get API key" button. Then, back in Android Studio, navigate to File > Settings (Android Studio > Settings on macOS) > Tools > Gemini and enter your Gemini API key. Relaunch Gemini in Android Studio and get even better responses from Agent Mode.
Be sure to safeguard your Gemini API key, as additional charges apply for Gemini API usage associated with a personal API key. You can monitor your Gemini API key usage by navigating to AI Studio and selecting Get API key > Usage & Billing.
Note that business tier subscribers already get access to Gemini 2.5 Pro and the expanded context window automatically with their Gemini Code Assist license, so these developers will not see an API key option.
Model Context Protocol (MCP)
Gemini in Android Studio's Agent Mode can now interact with external tools via the Model Context Protocol (MCP). This feature provides a standardized way for Agent Mode to use tools and extend knowledge and capabilities with the external environment.
There are many tools you can connect to the MCP Host in Android Studio. For example, you could integrate with the GitHub MCP Server to create pull requests directly from Android Studio. Here are some additional use cases to consider.
In this initial release of MCP support in the IDE, you will configure your MCP servers through an mcp.json file placed in the configuration directory of Studio, using the following format:
{ "mcpServers": { "memory": { "command": "npx", "args": [ "-y", "@modelcontextprotocol/server-memory" ] }, "sequential-thinking": { "command": "npx", "args": [ "-y", "@modelcontextprotocol/server-sequential-thinking" ] }, "github": { "command": "docker", "args": [ "run", "-i", "--rm", "-e", "GITHUB_PERSONAL_ACCESS_TOKEN", "ghcr.io/github/github-mcp-server" ], "env": { "GITHUB_PERSONAL_ACCESS_TOKEN": "<YOUR_TOKEN>" } } } }
For this initial release, we support interacting with external tools via the stdio transport as defined in the MCP specification. We plan to support the full suite of MCP features in upcoming Android Studio releases, including the Streamable HTTP transport, external context resources, and prompt templates.
For more information on how to use MCP in Studio, including the mcp.json configuration file format, please refer to the Android Studio MCP Host documentation.
By delegating routine tasks to Gemini through Agent Mode, you'll be able to focus on more innovative and enjoyable aspects of app development. Download the latest preview version of Android Studio on the canary release channel today to try it out, and let us know how much faster app development is for you!
As always, your feedback is important to us - check known issues, report bugs, suggest improvements, and be part of our vibrant community on LinkedIn, Medium, YouTube, or X. Let's build the future of Android apps together!
23 Jun 2025 5:00pm GMT
16 Jun 2025
Android Developers Blog
Top 3 things to know for AI on Android at Google I/O ‘25
Posted by Kateryna Semenova - Sr. Developer Relations Engineer
AI is reshaping how users interact with their favorite apps, opening new avenues for developers to create intelligent experiences. At Google I/O, we showcased how Android is making it easier than ever for you to build smart, personalized and creative apps. And we're committed to providing you with the tools needed to innovate across the full development stack in this evolving landscape.
This year, we focused on making AI accessible across the spectrum, from on-device processing to cloud-powered capabilities. Here are the top 3 announcements you need to know for building with AI on Android from Google I/O '25:
#1 Leverage the efficiency of Gemini Nano for on-device AI experiences
For on-device AI, we announced a new set of ML Kit GenAI APIs powered by Gemini Nano, our most efficient and compact model designed and optimized for running directly on mobile devices. These APIs provide high-level, easy integration for common tasks including text summarization, proofreading, rewriting content in different styles, and generating image descriptions. Building on-device offers significant benefits such as local data processing and offline availability at no additional cost for inference. To start integrating these solutions, explore the ML Kit GenAI documentation and the sample on GitHub, and watch the "Gemini Nano on Android: Building with on-device GenAI" talk.
#2 Seamlessly integrate on-device ML/AI with your own custom models
The Google AI Edge platform enables building and deploying a wide range of pretrained and custom models on edge devices and supports various frameworks like TensorFlow, PyTorch, Keras, and Jax, allowing for more customization in apps. The platform now also offers improved support of on-device hardware accelerators and a new AI Edge Portal service for broad coverage of on-device benchmarking and evals. If you are looking for GenAI language models on devices where Gemini Nano is not available, you can use other open models via the MediaPipe LLM Inference API.
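As a sketch of that path, the MediaPipe LLM Inference API can run an open model entirely on-device; the model file path below is a placeholder for a model you have already downloaded to the device.
Kotlin
import android.content.Context
import com.google.mediapipe.tasks.genai.llminference.LlmInference

// Sketch: on-device text generation with an open model via MediaPipe.
fun summarizeOnDevice(context: Context, article: String): String {
    val options = LlmInference.LlmInferenceOptions.builder()
        .setModelPath("/data/local/tmp/llm/model.bin") // placeholder path
        .setMaxTokens(512)
        .build()
    val llm = LlmInference.createFromOptions(context, options)
    return llm.generateResponse("Summarize in two sentences: $article")
}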
Serving your own custom models on-device can pose challenges related to handling large model downloads and updates, impacting the user experience. To improve this, we've launched Play for On-Device AI in beta. This service is designed to help developers manage custom model downloads efficiently, ensuring the right model size and speed are delivered to each Android device precisely when needed.
For more information watch "Small language models with Google AI Edge" talk.
#3 Power your Android apps with Gemini Flash, Pro and Imagen using Firebase AI Logic
For more advanced generative AI use cases, such as complex reasoning tasks, analyzing large amounts of data, processing audio or video, or generating images, you can use larger models from the Gemini Flash and Gemini Pro families, and Imagen running in the cloud. These models are well suited for scenarios requiring advanced capabilities or multimodal inputs and outputs. And since the AI inference runs in the cloud any Android device with an internet connection is supported. They are easy to integrate into your Android app by using Firebase AI Logic, which provides a simplified, secure way to access these capabilities without managing your own backend. Its SDK also includes support for conversational AI experiences using the Gemini Live API or generating custom contextual visual assets with Imagen. To learn more, check out our sample on GitHub and watch "Enhance your Android app with Gemini Pro and Flash, and Imagen" session.
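For orientation, here is a minimal sketch of calling a cloud-hosted Gemini model through Firebase AI Logic from Kotlin; it assumes the firebase-ai Kotlin SDK surface, and the model name is illustrative.
Kotlin
import com.google.firebase.Firebase
import com.google.firebase.ai.ai
import com.google.firebase.ai.type.GenerativeBackend

// Sketch: one-shot text generation via Firebase AI Logic (no backend of your own needed).
suspend fun describePhotoIdea(prompt: String): String? {
    val model = Firebase.ai(backend = GenerativeBackend.googleAI())
        .generativeModel("gemini-2.5-flash") // illustrative model name
    return model.generateContent(prompt).text
}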
These powerful AI capabilities can also be brought to life in immersive Android XR experiences. You can find corresponding documentation, samples and the technical session: "The future is now, with Compose and AI on Android XR".

Get inspired and start building with AI on Android today
We released a new open source app, Androidify, to help developers build AI-driven Android experiences using Gemini APIs, ML Kit, Jetpack Compose, CameraX, Navigation 3, and adaptive design. Users can create a personalized Android bot with Gemini and Imagen via the Firebase AI Logic SDK. Additionally, it incorporates ML Kit pose detection to detect a person in the camera viewfinder. The full code sample is available on GitHub for exploration and inspiration. Discover additional AI examples in our Android AI Sample Catalog.

Choosing the right Gemini model depends on understanding your specific needs and the model's capabilities, including modality, complexity, context window, offline capability, cost, and device reach. To explore these considerations further and see all our announcements in action, check out the AI on Android at I/O '25 playlist on YouTube and check out our documentation.
We are excited to see what you will build with the power of Gemini!
16 Jun 2025 5:01pm GMT
12 Jun 2025
Android Developers Blog
Upcoming changes to Wear OS watch faces
Posted by François Deschênes Product Manager - Wear OS
Today, we are announcing important changes to Wear OS watch face development that will affect how developers publish and update watch faces on Google Play. As part of our ongoing effort to enhance Wear OS app quality, we are moving towards supporting only the Watch Face Format and removing support for AndroidX / Wearable Support Library (WSL) watch faces.
We introduced Watch Face Format at Google I/O in 2023 to make it easier to create watch faces that are customizable and power-efficient. The Watch Face Format is a declarative XML format, so there is no executable code involved in creating a watch face, and there is no code embedded in the watch face APK.
What's changing?
Developers will need to migrate published watch faces to the Watch Face Format by January 14, 2026. Developers using Watch Face Studio to build watch faces will need to resubmit their watch faces to the Play Store using Watch Face Studio version 1.8.7 or above - see below for more details.
When are these changes coming?
Starting January 27, 2025 (already in effect):
- No new AndroidX or Wearable Support Library (WSL) watch faces (legacy watch faces) can be published on the Play Store. Developers can still publish updates to existing watch faces.
Starting January 14, 2026:
- Availability: Users will not be able to install legacy watch faces on any Wear OS devices from the Play Store. Legacy watch faces already installed on a Wear OS device will continue to work.
- Updates: Developers will not be able to publish updates for legacy watch faces to the Play Store.
- Monetization: The following won't be possible for legacy watch faces: one-off watch face purchases, in-app purchases, and subscriptions. Existing purchases and subscriptions will continue to work, but they will not renew, including auto-renewals.
What should developers do next?
To prepare for these changes and to continue publishing watch faces to the Play Store, developers using AndroidX or WSL to build watch faces must migrate their watch faces to the Watch Face Format and resubmit to the Play Store by January 14, 2026.
Developers using Watch Face Studio to build watch faces will need to resubmit their watch faces to the Play Store using Watch Face Studio version 1.8.7 or above:
- Be sure to republish for all Play tracks, including all testing tracks as well as production.
- Remove any bundles from these tracks that were created using Watch Face Studio versions prior to 1.8.7.
Benefits of the Watch Face Format
Watch Face Format was developed to support developers in creating watch faces. This format provides numerous advantages to both developers and end users:
- Simplified development: Streamlined workflows and visual design tools make building watch faces easier.
- Enhanced performance: Optimized for battery efficiency and smooth interactions.
- Increased security: Robust security features protect user data and privacy.
- Forward-compatible: Access to the latest features and capabilities of Wear OS.
Resources to help with migration
To get started migrating your watch faces to the Watch Face Format, check out the following developer guidance:
We encourage developers to begin the migration process as soon as possible to ensure a seamless transition and continued availability of your watch faces on Google Play.
We understand that this change requires effort. If you have further questions, please refer to the Wear OS community announcement. Please report any issues using the issue tracker.
12 Jun 2025 4:00pm GMT
11 Jun 2025
Android Developers Blog
Smoother app reviews with Play Policy Insights beta in Android Studio
Posted by Naheed Vora - Senior Product Manager, Android App Safety
Making it easier for you to build safer apps from the start
We understand you want clear Play policy guidance early in your development, so you can focus on building amazing experiences and prevent unexpected delays from disrupting launch plans. That's why we're making it easier to have smoother app publishing experiences, from the moment you start coding.
With Play Policy Insights beta in Android Studio, you'll get richer, in-context guidance on policies that may impact your app through lint warnings. You'll see policy summaries, dos and don'ts to avoid common pitfalls, and direct links to details.
We hope you caught an early demo at I/O. And now, you can check out Play Policy Insights beta in the Android Studio Narwhal Feature Drop Canary release.

How to use Play Policy Insights beta in Android Studio
Lint warnings will pop up as you code, like when you add a permission. For example, if you add an Android API that uses Photos and requires the READ_MEDIA_IMAGES permission, the Photos & Video Insights lint warning will appear under the respective API call in Android Studio.
You can also get these insights by going to Code > Inspect for Play Policy Insights and selecting the project scope to analyze. The scope can be set to the whole project, the current module or file, or a custom scope.

In addition to seeing these insights in Android Studio, you can also generate them as part of your Continuous Integration process by adding the following dependency to your project.
Kotlin
lintChecks("com.google.play.policy.insights:insights-lint:<version>")
Groovy
lintChecks 'com.google.play.policy.insights:insights-lint:<version>'
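Once the dependency is in place, the insights run as part of your normal lint task, for example by invoking ./gradlew :app:lintRelease in your CI pipeline (the module and variant in the task name will vary with your project).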
Share your feedback on Play Policy Insights beta
We're actively working on this feature and want your feedback to refine it before releasing it in the Stable channel of Android Studio later this year. Try it out, report issues, and stop by the Google Play Developer Help Community to share your questions and thoughts directly with our team.
Join us on June 16 when we answer your questions. We'd love to hear about:
- How will this change your current Android app development and Google Play Store submission workflow?
- Which was more helpful in addressing issues: lint warnings in the IDE or lint warnings from the CI build?
- What was most helpful in the policy guidance, and what could be improved?
Developers have told us they like:
- Catching potential Google Play policy issues early, right in their code, so they can build more efficiently.
- Seeing potential Google Play policy issues and guidance all in one-place, reducing the need to dig through policy announcements and issue emails.
- Easily discussing potential issues with their team, now that everyone has shared information.
- Continuously checking for potential policy issues as they add new features, gaining confidence in a smoother launch.
For more, see our Google Play Help Center article or Android Studio preview release notes.
We hope features like this will help give you a better policy experience and more streamlined development.
11 Jun 2025 4:00pm GMT
10 Jun 2025
Android Developers Blog
Developer preview: Enhanced Android desktop experiences with connected displays
Posted by Francesco Romano - Developer Relations Engineer on Android, and Fahd Imtiaz - Product Manager, Android Developer
Today, Android is launching a few updates across the platform! This includes the start of Android 16's rollout, with details for both developers and users, a Developer Preview for enhanced Android desktop experiences with connected displays, and updates for Android users across Google apps and more, plus the June Pixel Drop. We're also recapping all the Google I/O updates for Android developers focused on building excellent, adaptive Android apps.
Android has continued to evolve to enable users to be more productive on large screens.
Today, we're excited to share that connected displays support on compatible Android devices is now in developer preview with the Android 16 QPR1 Beta 2 release. As shown at Google I/O 2025, connected displays enable users to attach an external display to their Android device and transform a small screen device into a powerful tool with a large screen. This evolution gives users the ability to move apps beyond a single screen to unlock Android's full productivity potential on external displays.
The connected display update builds on our desktop windowing experience, a capability we previewed last year. Desktop windowing is set to launch later this year for users on compatible tablets running Android 16. Desktop windowing enables users to run multiple apps simultaneously and resize windows for optimal multitasking. This new windowing capability works seamlessly with split screen and other multitasking features users already love on Android and doesn't require switching to a special mode.
Google and Samsung have collaborated to bring a more seamless and powerful desktop windowing experience to large screen devices and phones with connected displays in Android 16 across the Android ecosystem. These advancements will enhance Samsung DeX, and also extend to other Android devices.
For developers, connected displays and desktop windowing present new opportunities for building more engaging and more productive app experiences that seamlessly adapt across form factors. You can try out these features today on your connected display with the Android 16 QPR1 Beta 2 on select Pixel devices.
What's new in connected displays support?
When a supported Android phone or foldable is connected to an external display through a DisplayPort connection, a new desktop session starts on the connected display. The phone and the external display operate independently, and apps are specific to the display on which they're running.
The experience on the connected display is similar to the experience on a desktop, including a task bar that shows running apps and lets users pin apps for quick access. Users can run multiple apps side by side in freely resizable windows on the connected display.

When a desktop windowing enabled device (like a tablet) is connected to an external display, the desktop session is extended across both displays, unlocking an even more expansive workspace. The two displays then function as one continuous system, allowing app windows, content, and the cursor to move freely between the displays.

A cornerstone of this effort is the evolution of desktop windowing, which is stable in Android 16 and is packed with improvements and new capabilities.
Desktop windowing stable release
We've made substantial improvements in the stability and performance of desktop windowing in Android 16. This means users will encounter a smoother, more reliable experience when managing app windows on connected displays. Beyond general stability improvements, we're introducing several new features:
- Flexible window tiling: Multitasking gets a boost with more intuitive window tiling options. Users can more easily arrange multiple app windows side by side or in various configurations, making it simpler to work across different applications simultaneously on a large screen.
- Multiple desktops: Users can set up multiple desktop sessions to match their distinct productivity requirements and switch between the desktops using keyboard shortcuts, trackpad gestures, and Overview.
- Enhanced app compatibility treatments: New compatibility treatments ensure that even legacy apps behave more predictably and look better on external displays by default. This reduces the burden on developers while providing a better out-of-the-box experience for users.
- Multi-instance management: Users can manage multiple instances of supporting applications (for example, Chrome or Keep) through the app header button or taskbar context menu. This allows for quick switching between different instances of the same app.
- Desktop persistence: Android can now better maintain window sizes, positions, and states across different desktops. This means users can set up their preferred workspace and have it restored across sessions, offering a more consistent and efficient workflow.
Best practices for optimal app experiences on connected displays
With the introduction of connected display support in Android, it's important to ensure your apps take full advantage of the new display capabilities. To help you build apps that shine in this enhanced environment, here are some key development practices to follow:
Build apps optimized for desktop
- Design for any window size: With phones now connecting to external displays, your mobile app can run in a window of almost any size and aspect ratio. This means the app window can be as big as the screen of the connected display but also flex to fit a smaller window. In desktop windowing, the minimum window size is 386 x 352 dp, which is smaller than most phones. This fundamentally changes how you need to think about UI. With orientation and resizability changes in Android 16, it becomes even more critical for you to update your apps to support resizability and portrait and landscape orientations for an optimal experience with desktop windowing and connected displays. Make sure your app supports any window size by following the best practices on adaptive development.
- Implement features for top productivity: You now have all the tools necessary to build mobile apps that match desktop experiences, so start adding features to boost user productivity! Allow users to open multiple instances of the same app, which is invaluable for tasks like comparing documents, managing different conversations, or viewing multiple files simultaneously. Support data sharing with drag and drop, and maintain user flow across configuration changes by implementing a robust state management system.
Handle dynamic display changes
- Don't assume a constant Display object: The Display object associated with your app's context can change when an app window is moved to an external display or if the display configuration changes. Your app should gracefully handle configuration change events and query display metrics dynamically rather than caching them (see the sketch after this list).
- Account for density configuration changes: External displays can have vastly different pixel densities than the primary device screen. Ensure your layouts and resources adapt correctly to these changes to maintain UI clarity and usability. Use density-independent pixels (dp) for layouts, provide density-specific resources, and ensure your UI scales appropriately.
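Here is a minimal sketch of the re-query approach described above, using Jetpack WindowManager; it assumes the activity opts into handling these configuration changes itself (otherwise it is simply recreated and can re-query in onCreate).
Kotlin
import android.app.Activity
import android.content.res.Configuration
import androidx.window.layout.WindowMetricsCalculator

// Sketch: recompute window metrics on every configuration change instead of caching
// a Display object, so layouts adapt when the window moves to a connected display.
class CanvasActivity : Activity() {
    override fun onConfigurationChanged(newConfig: Configuration) {
        super.onConfigurationChanged(newConfig)
        val metrics = WindowMetricsCalculator.getOrCreate()
            .computeCurrentWindowMetrics(this)
        val density = resources.displayMetrics.density
        val widthDp = metrics.bounds.width() / density
        val heightDp = metrics.bounds.height() / density
        // Re-layout or reload density-dependent resources for widthDp x heightDp here.
    }
}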
Go beyond just the screen
- Correctly support external peripherals: When users connect to an external monitor, they often create a more desktop-like environment. This frequently involves using external keyboards, mice, trackpads, webcams, microphones, and speakers. If your app uses camera or microphone input, the app should be able to detect and utilize peripherals connected through the external display or a docking station.
- Handle keyboard actions: Desktop users rely heavily on keyboard shortcuts for efficiency. Implement standard shortcuts (for example, Ctrl+C, Ctrl+V, Ctrl+Z) and consider app-specific shortcuts that make sense in a windowed environment, as in the sketch after this list. Make sure your app supports keyboard navigation.
- Support mouse interactions: Beyond simple clicks, ensure your app responds correctly to mouse hover events (for example, for tooltips or visual feedback), right-clicks (for contextual menus), and precise scrolling. Consider implementing custom pointers to indicate different actions.
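Here is a minimal Compose sketch of the keyboard-shortcut handling mentioned above; onUndo is a placeholder for your app's own undo action.
Kotlin
import androidx.compose.ui.Modifier
import androidx.compose.ui.input.key.Key
import androidx.compose.ui.input.key.KeyEventType
import androidx.compose.ui.input.key.isCtrlPressed
import androidx.compose.ui.input.key.key
import androidx.compose.ui.input.key.onPreviewKeyEvent
import androidx.compose.ui.input.key.type

// Sketch: handle a desktop-style Ctrl+Z shortcut on a focused composable.
fun Modifier.undoShortcut(onUndo: () -> Unit): Modifier = onPreviewKeyEvent { event ->
    if (event.type == KeyEventType.KeyDown && event.isCtrlPressed && event.key == Key.Z) {
        onUndo()
        true  // consume the event
    } else {
        false // let other handlers process it
    }
}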
Getting started
Explore the connected displays and enhanced desktop windowing features in the latest Android Beta. Get Android 16 QPR1 Beta 2 on a supported Pixel device (Pixel 8 and Pixel 9 series) to start testing your app today. Then enable desktop experience features in the developer settings.
Support for connected displays in the Android Emulator is coming soon, so stay tuned for updates!
Dive into the updated documentation on multi-display support and window management to learn more about implementing these best practices.
Feedback
Your feedback is crucial as we continue to refine these experiences. Please share your thoughts and report any issues through our official feedback channels.
We're committed to making Android a versatile platform that adapts to the many ways users want to interact with their apps and devices. The improvements to connected display support are another step in that direction, and we can't wait to see the amazing experiences you'll build!
10 Jun 2025 6:02pm GMT
Top 3 updates for building excellent, adaptive apps at Google I/O ‘25
Posted by Mozart Louis - Developer Relations Engineer
Google I/O 2025 brought exciting advancements to Android, equipping you with essential knowledge and powerful tools you need to build outstanding, user-friendly applications that stand out.
If you missed any of the key #GoogleIO25 updates and just saw the release of Android 16 or you're ready to dive into building excellent adaptive apps, our playlist is for you. Learn how to craft engaging experiences with Live Updates in Android 16, capture video effortlessly with CameraX, process it efficiently using Media3's editing tools, and engage users across diverse platforms like XR, Android for Cars, Android TV, and Desktop.
Check out the Google I/O playlist for all the session details.
Here are three key announcements directly influencing how you can craft deeply engaging experiences and truly connect with your users:
#1: Build adaptively to unlock 500 million devices
In today's diverse device ecosystem, users expect their favorite applications to function seamlessly across various form factors, including phones, tablets, Chromebooks, automobiles, and emerging XR glasses and headsets. Our recommended approach for developing applications that excel on each of these surfaces is to create a single, adaptive application. This strategy avoids the need to rebuild the application for every screen size, shape, or input method, ensuring a consistent and high-quality user experience across all devices.
The talk emphasizes that you don't need to rebuild apps for each form factor. Instead, small, iterative changes can unlock an app's potential.
Here are some resources we encourage you to use in your apps:
New feature support in Jetpack Compose Adaptive Libraries
- We're continuing to make it as easy as possible to build adaptively with the Jetpack Compose Adaptive Libraries, with new features in 1.1 like pane expansion and predictive back. By utilizing canonical layout patterns such as List Detail or Supporting Pane layouts and integrating your app code, your application will automatically adjust and reflow when resized.
Navigation 3
- The alpha release of the Navigation 3 library now supports displaying multiple panes. This eliminates the need to alter your navigation destination setup for separate list and detail views. Instead, you can adjust the setup to concurrently render multiple destinations when sufficient screen space is available.
Updates to Window Manager Library
- androidx.window 1.5 introduces two new window size classes for expanded widths, facilitating better layout adaptation for large tablets and desktops. A width of 1600dp or more is now categorized as "extra large," while widths between 1200dp and 1600dp are classified as "large." These subdivisions offer more granularity for developers to optimize their applications for a wider range of window sizes (a width-bucket sketch follows this list).
Support all orientations and be resizable
- In Android 16, important changes are coming that affect orientation, aspect ratio, and resizability. Apps targeting SDK 36 will need to support all orientations and be resizable.
Extend to Android XR
- We are making it easier for you to build for XR with the Android XR SDK in developer preview 2, which features new Material XR components, a fully integrated Emulator within Android Studio, and spatial video support for your Play Store listings.
Upgrade your Wear OS apps to Material 3 Design
- Wear OS 6 features Material 3 Expressive, a new UI design with personalized visuals and motion for user creativity, coming to Wear, Android, and Google apps later this year. You can upgrade your app and Tiles to Material 3 Expressive by utilizing new Jetpack libraries: Wear Compose Material 3, which provides components for apps, and Wear ProtoLayout Material 3, which provides components and layouts for tiles.
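As referenced in the Window Manager item above, here is a minimal sketch of bucketing the current window width against the 600dp, 840dp, 1200dp, and 1600dp breakpoints; the enum names are illustrative rather than the androidx.window constants.
Kotlin
import android.app.Activity
import androidx.window.layout.WindowMetricsCalculator

enum class WidthBucket { COMPACT, MEDIUM, EXPANDED, LARGE, EXTRA_LARGE }

// Sketch: classify the current window width in dp.
fun widthBucket(activity: Activity): WidthBucket {
    val bounds = WindowMetricsCalculator.getOrCreate()
        .computeCurrentWindowMetrics(activity).bounds
    val widthDp = bounds.width() / activity.resources.displayMetrics.density
    return when {
        widthDp >= 1600 -> WidthBucket.EXTRA_LARGE
        widthDp >= 1200 -> WidthBucket.LARGE
        widthDp >= 840 -> WidthBucket.EXPANDED
        widthDp >= 600 -> WidthBucket.MEDIUM
        else -> WidthBucket.COMPACT
    }
}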
You should build a single, adaptive mobile app that brings the best experiences to all Android surfaces. By building adaptive apps, you meet users where they are today and in the future, enhancing user engagement and app discoverability. This approach represents a strategic business decision that optimizes an app's long-term success.
#2: Enhance your app's performance optimization
Get ready to take your app's performance to the next level! Google I/O 2025 brought an inside look at cutting-edge tools and techniques to boost user satisfaction, enhance technical performance metrics, and drive those all-important key performance indicators. Imagine an end-to-end workflow that streamlines performance optimization.
Redesigned UiAutomator API
- To make benchmarking reliable and reproducible, there's the brand new UiAutomator API. Write robust test code and run it on your local devices or in Firebase Test Lab, ensuring consistent results every time.
Macrobenchmarks
- Once your tests are in place, it's time to measure and understand. Macrobenchmarks give you the hard data, while App Startup Insights provide actionable recommendations for improvement. Plus, you can get a quick snapshot of your app's health with the App Performance Score via DAC. These tools combined give you a comprehensive view of your app's performance and where to focus your efforts (see the benchmark sketch after this list).
R8, More than code shrinking and obfuscation
- You might know R8 as a code shrinking tool, but it's capable of so much more! The talk dives into R8's capabilities using the "Androidify" sample app. You'll see how to apply R8, troubleshoot any issues (like crashes!), and configure it for optimal performance. It also shows how library developers can include consumer keep rules so that their important code is preserved when their library is used in an application.
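As promised in the Macrobenchmarks item above, here is a minimal cold-startup benchmark sketch; the package name is a placeholder for your app.
Kotlin
import androidx.benchmark.macro.StartupMode
import androidx.benchmark.macro.StartupTimingMetric
import androidx.benchmark.macro.junit4.MacrobenchmarkRule
import androidx.test.ext.junit.runners.AndroidJUnit4
import org.junit.Rule
import org.junit.Test
import org.junit.runner.RunWith

// Sketch: measure cold startup timing over repeated iterations.
@RunWith(AndroidJUnit4::class)
class StartupBenchmark {
    @get:Rule
    val benchmarkRule = MacrobenchmarkRule()

    @Test
    fun coldStartup() = benchmarkRule.measureRepeated(
        packageName = "com.example.myapp", // placeholder
        metrics = listOf(StartupTimingMetric()),
        iterations = 5,
        startupMode = StartupMode.COLD
    ) {
        pressHome()
        startActivityAndWait()
    }
}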
#3: Build Richer Image and Video Experiences
In today's digital landscape, users increasingly expect seamless content creation capabilities within their apps. To meet this demand, developers require robust tools for building excellent camera and media experiences.
Media3Effects in CameraX Preview
- At Google I/O, the session delved into practical strategies for capturing high-quality video using CameraX, while simultaneously leveraging Media3Effects on the preview.
Google Low-Light Boost
- Google Low Light Boost in Google Play services enables real-time dynamic camera brightness adjustment in low light, even without device support for Low Light Boost AE Mode.
New Camera & Media Samples!
- For Google I/O 2025, the Camera & Media team created new samples and demos for building excellent media and camera experiences on Android. The samples emphasize future-proofing apps using Jetpack libraries like Media3 Transformer for advanced video editing and Compose for adaptive UIs, including XR. You'll find guidance on incrementally adding premium features with CameraX, utilizing Media3 for AI-powered functionalities such as video summarization and HDR thumbnails, and employing specialized APIs like Oboe for efficient audio playback. We have also updated the CameraX samples to fully use Compose instead of the View-based system.
Learn more about how CameraX & Media3 can accelerate your development of camera and media related features.
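As a concrete starting point, here is a minimal sketch of binding a CameraX preview together with a video-capture use case; the Media3 effects pipeline from the session is not included, and the quality choice is illustrative.
Kotlin
import androidx.appcompat.app.AppCompatActivity
import androidx.camera.core.CameraSelector
import androidx.camera.core.Preview
import androidx.camera.lifecycle.ProcessCameraProvider
import androidx.camera.video.Quality
import androidx.camera.video.QualitySelector
import androidx.camera.video.Recorder
import androidx.camera.video.VideoCapture
import androidx.core.content.ContextCompat

// Sketch: bind preview + video capture to the activity lifecycle.
fun bindCamera(activity: AppCompatActivity, surfaceProvider: Preview.SurfaceProvider) {
    val providerFuture = ProcessCameraProvider.getInstance(activity)
    providerFuture.addListener({
        val cameraProvider = providerFuture.get()
        val preview = Preview.Builder().build().apply { setSurfaceProvider(surfaceProvider) }
        val recorder = Recorder.Builder()
            .setQualitySelector(QualitySelector.from(Quality.HD)) // illustrative quality
            .build()
        val videoCapture = VideoCapture.withOutput(recorder)
        cameraProvider.unbindAll()
        cameraProvider.bindToLifecycle(
            activity, CameraSelector.DEFAULT_BACK_CAMERA, preview, videoCapture
        )
    }, ContextCompat.getMainExecutor(activity))
}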
Learn how to build adaptive apps
Want to learn more about building excellent, adaptive apps? Watch this playlist to learn more about all the session details.
10 Jun 2025 6:01pm GMT
A product manager's guide to adapting Android apps across devices
Posted by Fahd Imtiaz, Product Manager, Android Developer Experience
Today, Android is launching a few updates across the platform! This includes the start of Android 16's rollout, with details for both developers and users, a Developer Preview for enhanced Android desktop experiences with connected displays, and updates for Android users across Google apps and more, plus the June Pixel Drop. We're also recapping all the Google I/O updates for Android developers focused on building excellent, adaptive Android apps.
With new form factors emerging continually, the Android ecosystem is more dynamic than ever.
From phones and foldables to tablets, Chromebooks, TVs, cars, Wear and XR, Android users expect their apps to run seamlessly across an increasingly diverse range of form factors. Yet, many Android apps fall short of these expectations as they are built with UI constraints such as being locked to a single orientation or restricted in resizability.
With this in mind, Android 16 introduced API changes for apps targeting SDK level 36 to ignore orientation and resizability restrictions, starting with large screen devices, shifting toward a unified model where adaptive apps are the norm. This is the moment to move ahead. Adaptive apps aren't just the future of Android; they're the expectation for your app to stand out across Android form factors.
Why you should prioritize adaptive now

Prioritizing optimizations to make your app adaptive isn't just about keeping up with the orientation and resizability API changes in Android 16 for apps targeting SDK 36. Adaptive apps unlock tangible benefits across user experience, development efficiency, and market reach.
- Mobile apps can now reach users on over 500 million active large screen devices: Mobile apps run on foldables, tablets, Chromebooks, and even compatible cars, with minimal changes. Android 16 will introduce significant advancements in desktop windowing for a true desktop-like experience on large screens, including connected displays. And Android XR opens a new dimension, allowing your existing apps to be available in immersive environments. The user expectation is clear: a consistent, high-quality experience that intelligently adapts to any screen - be it a foldable, a tablet with a keyboard, or a movable, resizable window on a Chromebook.
- "The new baseline" with orientation and resizability API changes in Android 16: We believe mobile apps are undergoing a shift to have UI adapt responsively to any screen size, just like websites. Android 16 will ignore app-defined restrictions like fixed orientation (portrait-only) and non-resizable windows, beginning with large screens (smallest width of the device is >= 600dp) including tablets and inner displays on foldables. For most apps, it's key to helping them stretch to any screen size. In some cases if your app isn't adaptive, it could deliver a broken user experience on these screens. This moves adaptive design from a nice-to-have to a foundational requirement.

- Increase user reach and app discoverability in Play: Adaptive apps are better positioned to be ranked higher in Play, and featured in editorial articles across form factors, reaching a wider audience across Play search and homepages. Additionally, Google Play Store surfaces ratings and reviews across all form factors. If your app is not optimized, a potential user's first impression might be tainted by a 1-star review complaining about a stretched UI on a device they don't even own yet. Users are also more likely to engage with apps that provide a great experience across their devices.
- Increased engagement on large screens: Users on large screen devices often have different interaction patterns. On large screens, users may engage for longer sessions, perform more complex tasks, and consume more content.
  - Concepts saw a 70% increase in user engagement on large screens after optimizing.
  - Usage for 6 major media streaming apps in the US was up to 3x more for tablet and phone users, as compared to phone-only users.
- More accessible app experiences: According to the World Bank, 15% of the world's population has some type of disability. People with disabilities depend on apps and services that support accessibility to communicate, learn, and work. Matching the user's preferred orientation improves the accessibility of applications, helping to create an inclusive experience for all.
Today, most apps are built for smartphones only

"...looking at the number of users, the ROI does not justify the investment".
That's a frequent pushback from product managers and decision-makers, and if you're just looking at top-line analytics comparing the number of tablet sessions to smartphone sessions, it might seem like a closed case.
While top-line analytics might show lower session numbers on tablets compared to smartphones, concluding that large screens aren't worth the effort based solely on current volume can be a trap, causing you to miss out on valuable engagement and future opportunities.
Let's take a deeper look into why:
1. The user experience 'chicken and egg' loop: Is it possible that the low usage is a symptom rather than the root cause? Users are quick to abandon apps that feel clunky or broken. If your app on large screens is a stretched-out phone interface, it likely provides a negative user experience. The lack of users might reflect the lack of a good experience, not necessarily a lack of potential users.
2. Beyond user volume, look at user engagement: Don't just count users, analyze their worth. Users interact with apps on large screens differently. The large screen often leads to longer sessions and more immersive experiences. As mentioned above, usage data shows that engagement time increases significantly for users who interact with apps on both their phone and tablet, as compared to phone only users.
3. Market evolution: The Android device ecosystem is continuing to evolve. With the rise of foldables, upcoming connected displays support in Android 16, and form factors like XR and Android Auto, adaptive design is now more critical than ever. Building for a specific screen size creates technical debt, and may slow your development velocity and compromise the product quality in the long run.
Okay, I am convinced. Where do I start?

For organizations ready to move forward, Android offers many resources and developer tools to optimize apps to be adaptive. See below for how to get started:
1. Check how your app looks on large screens today: Begin by looking at your app's current state on tablets, foldables (in different postures), Chromebooks, and environments like desktop windowing. Confirm whether your app is available on these devices, or whether you are unintentionally leaving out these users by requiring unnecessary features within your app.
2. Address common UI issues: Assess what feels awkward in your app UI today. We have a lot of guidance available on how you can easily translate your mobile app to other screens.
a. Check the Large screens design gallery for inspiration and to understand how your app UI can evolve across devices using proven solutions to common UI challenges.
b. Start with quick wins. For example, prevent buttons from stretching to the full screen width, or switch to a vertical navigation bar on large screens to improve ergonomics.
c. Identify patterns where canonical layouts (e.g. list-detail) could solve any UI awkwardness you identified. Could a list-detail view improve your app's navigation? Would a supporting pane on the side make better use of the extra space than a bottom sheet?
3. Optimize your app incrementally, screen by screen: It may be helpful to prioritize how you approach optimization because not everything needs to be perfectly adaptive on day one. Incrementally improve your app based on what matters most - it's not all or nothing.
a. Start with the foundations. Check out the large screen app quality guidelines, which tier and prioritize the fixes that are most critical to users. Remove orientation restrictions to support portrait and landscape, ensure support for resizability (for when users are in split screen), and prevent major stretching of buttons, text fields, and images. These foundational fixes are critical, especially with the API changes in Android 16 that will make these aspects even more important.
b. Implement adaptive layout optimizations with a focus on core user journeys or screens first.
i. Identify screens where optimizations (for example, a two-pane layout) offer the biggest UX win.
ii. Then proceed to screens or parts of the app that are used less often on large screens.
c. Support input methods beyond touch, including keyboard, mouse, trackpad, and stylus input. With new form factors and connected displays support, this sets users up to interact with your UI seamlessly.
d. Add differentiating hero user experiences like support for tabletop mode or dual-screen mode on foldables. This can happen on a per-use-case basis - for example, tabletop mode is great for watching videos, and dual screen mode is great for video calls.
While there's an upfront investment in adopting adaptive principles (using tools like Jetpack Compose and window size classes), the long-term payoff may be significant. By designing and building features once, and letting them adapt across screen sizes, the benefits outweigh the cost of creating multiple bespoke layouts. Check out the adaptive apps developer guidance for more.
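As a minimal sketch of that principle (composable names like ItemList and ItemDetail are placeholders), a single screen can switch between one- and two-pane layouts based on the window size class:

import android.app.Activity
import androidx.compose.foundation.layout.Row
import androidx.compose.material3.windowsizeclass.ExperimentalMaterial3WindowSizeClassApi
import androidx.compose.material3.windowsizeclass.WindowWidthSizeClass
import androidx.compose.material3.windowsizeclass.calculateWindowSizeClass
import androidx.compose.runtime.Composable

@OptIn(ExperimentalMaterial3WindowSizeClassApi::class)
@Composable
fun HomeScreen(activity: Activity) {
    val widthClass = calculateWindowSizeClass(activity).widthSizeClass
    if (widthClass == WindowWidthSizeClass.Expanded) {
        // Wide windows (tablet, desktop window, unfolded foldable): list + detail side by side.
        Row {
            ItemList()
            ItemDetail()
        }
    } else {
        // Compact/medium windows (phone, split screen): single pane.
        ItemList()
    }
}

@Composable fun ItemList() { /* placeholder list pane */ }
@Composable fun ItemDetail() { /* placeholder detail pane */ }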
Unlock your app's potential with adaptive app design
The message for my fellow product managers, decision-makers, and businesses is clear: adaptive design will uplevel your app for high-quality Android experiences in 2025 and beyond. An adaptive, responsive UI is the scalable way to support the many devices in Android without developing on a per-form factor basis. If you ignore the diverse device ecosystem of foldables, tablets, Chromebooks, and emerging form factors like XR and cars, your business is accepting hidden costs from negative user reviews, lower discovery in Play, increased technical debt, and missed opportunities for increased user engagement and user acquisition.
Maximize your apps' impact and unlock new user experiences. Learn more about building adaptive apps today.
10 Jun 2025 6:01pm GMT
Android 16 is here
Posted by Matthew McCullough - VP of Product Management, Android Developer
Today, Android is launching a few updates across the platform! This includes the start of Android 16's rollout with details for both developers and users, a Developer Preview for enhanced Android desktop experiences with connected displays, updates for Android users across Google apps and more, plus the June Pixel Drop. We're also recapping all the Google I/O updates for Android developers focused on building excellent, adaptive Android apps.
Today we're releasing Android 16 and making it available on most supported Pixel devices. Look for new devices running Android 16 in the coming months.
This also marks the availability of the source code at the Android Open Source Project (AOSP). You can examine the source code for a deeper understanding of how Android works, and our focus on compatibility means that you can leverage your app development skills in Android Studio with Jetpack Compose to create applications that thrive across the entire ecosystem.
Major and minor SDK releases
With Android 16, we've added the concept of a minor SDK release to allow us to iterate our APIs more quickly, reflecting the rapid pace of the innovation Android is bringing to apps and devices.

We plan to have another release in Q4 of 2025 which also will include new developer APIs. Today's major release will be the only release in 2025 to include planned app-impacting behavior changes. In addition to new developer APIs, the Q4 minor release will pick up feature updates, optimizations, and bug fixes.
We'll continue to have quarterly Android releases. The Q3 update in-between the API releases is providing much of the new visual polish associated with Material Expressive, and you can get the Q3 beta today on your supported Pixel device.
Camera and media APIs to empower creators
Android 16 enhances support for professional camera users, allowing for night mode scene detection, hybrid auto exposure, and precise color temperature adjustments. It's easier than ever to capture motion photos with new Intent actions, and we're continuing to improve UltraHDR images, with support for HEIC encoding and new parameters from the ISO 21496-1 draft standard. Support for the Advanced Professional Video (APV) codec improves Android's place in professional recording and post-production workflows, with perceptually lossless video quality that survives multiple decodings/re-encodings without severe visual quality degradation.
Also, Android's photo picker can now be embedded in your view hierarchy, and users will appreciate the ability to search cloud media.
More consistent, beautiful apps
Android 16 introduces changes to improve the consistency and visual appearance of apps, laying the foundation for the upcoming Material 3 Expressive changes. Apps targeting Android 16 can no longer opt out of going edge-to-edge, and the system ignores the elegantTextHeight attribute to ensure proper spacing in Arabic, Lao, Myanmar, Tamil, Gujarati, Kannada, Malayalam, Odia, Telugu, and Thai.
Adaptive Android apps
With Android apps now running on a variety of devices and more windowing modes on large screens, developers should build Android apps that adapt to any screen and window size, regardless of device orientation. For apps targeting Android 16 (API level 36), Android 16 includes changes to how the system manages orientation, resizability, and aspect ratio restrictions. On displays with smallest width >= 600dp, the restrictions no longer apply and apps will fill the entire display window. You should check your apps to ensure your existing UIs scale seamlessly, working well across portrait and landscape aspect ratios. We're providing frameworks, tools, and libraries to help.

You can test these overrides without targeting Android 16 by using the app compatibility framework and enabling the UNIVERSAL_RESIZABLE_BY_DEFAULT flag. Read more about changes to orientation and resizability APIs in Android 16.
Predictive back by default and more
Apps targeting Android 16 will have system animations for back-to-home, cross-task, and cross-activity by default. In addition, Android 16 extends predictive back navigation to three-button navigation, meaning that users long-pressing the back button will see a glimpse of the previous screen before navigating back.
To make it easier to get the back-to-home animation, Android 16 adds support for OnBackInvokedCallback with the new PRIORITY_SYSTEM_NAVIGATION_OBSERVER priority, as sketched below. Android 16 additionally adds the finishAndRemoveTaskCallback and moveTaskToBackCallback for custom back stack behavior with predictive back.
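A hedged sketch of observing predictive back with the platform dispatcher might look like this; registration uses the OnBackInvokedDispatcher API available since API 33, with the new observer priority gated to Android 16 (API 36).

import android.os.Build
import android.window.OnBackInvokedCallback
import android.window.OnBackInvokedDispatcher
import androidx.appcompat.app.AppCompatActivity

class MainActivity : AppCompatActivity() {

    // Observer callbacks are notified of system-handled back navigation
    // without consuming the gesture - useful for logging or saving state.
    private val backObserver = OnBackInvokedCallback {
        // React to the system handling back (e.g. persist in-progress edits).
    }

    override fun onStart() {
        super.onStart()
        if (Build.VERSION.SDK_INT >= 36) { // Android 16
            onBackInvokedDispatcher.registerOnBackInvokedCallback(
                OnBackInvokedDispatcher.PRIORITY_SYSTEM_NAVIGATION_OBSERVER,
                backObserver
            )
        }
    }

    override fun onStop() {
        super.onStop()
        if (Build.VERSION.SDK_INT >= 36) {
            onBackInvokedDispatcher.unregisterOnBackInvokedCallback(backObserver)
        }
    }
}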
Consistent progress notifications
Android 16 introduces Notification.ProgressStyle, which lets you create progress-centric notifications that can denote states and milestones in a user journey using points and segments. Key use cases include rideshare, delivery, and navigation. It's the basis for Live Updates, which will be fully realized in an upcoming Android 16 update.

Custom AGSL graphical effects
Android 16 adds RuntimeColorFilter and RuntimeXfermode, allowing you to author complex effects like Threshold, Sepia, and Hue Saturation in AGSL and apply them to draw calls.
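For context, AGSL effects have been authored with RuntimeShader since Android 13; per the description above, RuntimeColorFilter and RuntimeXfermode follow the same AGSL-authoring pattern (their exact constructors aren't shown here). A minimal RuntimeShader sketch:

import android.graphics.Paint
import android.graphics.RuntimeShader

// A simple AGSL shader producing a warm-to-cool horizontal gradient.
val gradientShader = RuntimeShader(
    """
    uniform float2 iResolution;
    half4 main(float2 coord) {
        float t = coord.x / iResolution.x;
        return half4(mix(half3(1.0, 0.8, 0.6), half3(0.6, 0.8, 1.0), t), 1.0);
    }
    """
)

fun gradientPaint(widthPx: Float): Paint = Paint().apply {
    gradientShader.setFloatUniform("iResolution", widthPx, 1f)
    shader = gradientShader
}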
Help to create better performing, more efficient apps and games
From APIs to help you understand app performance, to platform changes designed to increase efficiency, Android 16 is focused on making sure your apps perform well. Android 16:
- introduces system-triggered profiling to ProfilingManager,
- ensures at most one missed execution of scheduleAtFixedRate is immediately executed when the app returns to a valid lifecycle for better efficiency,
- introduces hasArrSupport and getSuggestedFrameRate(int) to make it easier for your apps to take advantage of adaptive display refresh rates, and
- introduces the getCpuHeadroom and getGpuHeadroom APIs, along with CpuHeadroomParams and GpuHeadroomParams in SystemHealthManager, to provide games and resource-intensive apps estimates of available GPU and CPU resources on supported devices.
JobScheduler updates
JobScheduler.getPendingJobReasons in Android 16 returns multiple reasons why a job is pending, due to both explicit constraints you set and implicit constraints set by the system. The new JobScheduler.getPendingJobReasonsHistory returns the list of the most recent pending job reason changes, allowing you to better tune the way your app works in the background.
Android 16 is making adjustments to regular and expedited job runtime quotas based on which standby bucket the app is in, whether the job starts execution while the app is in a top state, and whether the job is executing while the app is running a foreground service.
To detect (and then reduce) abandoned jobs, apps should use the new STOP_REASON_TIMEOUT_ABANDONED job stop reason that the system assigns for abandoned jobs, instead of STOP_REASON_TIMEOUT.
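As a hedged sketch of inspecting why a job hasn't run yet: the singular getPendingJobReason() has been available since API 35, and on Android 16 the plural getPendingJobReasons() and getPendingJobReasonsHistory() described above report every applicable reason (their exact return shapes aren't shown here).

import android.app.job.JobScheduler
import android.content.Context
import android.util.Log

// Requires API 35+ for getPendingJobReason.
fun logPendingReason(context: Context, jobId: Int) {
    val scheduler = context.getSystemService(JobScheduler::class.java)
    when (val reason = scheduler.getPendingJobReason(jobId)) {
        JobScheduler.PENDING_JOB_REASON_CONSTRAINT_CHARGING ->
            Log.d("Jobs", "Job $jobId is waiting for the device to charge")
        JobScheduler.PENDING_JOB_REASON_APP_STANDBY ->
            Log.d("Jobs", "Job $jobId is deferred by its standby bucket")
        else ->
            Log.d("Jobs", "Job $jobId is pending, reason code: $reason")
    }
}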
16KB page sizes
Android 15 introduced support for 16KB page sizes to improve the performance of app launches, system boot-ups, and camera starts, while reducing battery usage. Android 16 adds a 16 KB page size compatibility mode, which, combined with new Google Play technical requirements, brings Android closer to having devices shipping with this important change. You can validate if your app needs updating using the 16KB page size checks & APK Analyzer in the latest version of Android Studio.
ART internal changes
Android 16 includes the latest updates to the Android Runtime (ART) that improve performance and provide support for additional language features. These improvements are also available to over a billion devices running Android 12 (API level 31) and higher through Google Play System updates. Apps and libraries that rely on internal non-SDK ART structures may not continue to work correctly with these changes.
Privacy and security
Android 16 continues our mission to improve security and ensure user privacy. It includes improved security against Intent redirection attacks, makes MediaStore.getVersion unique to each app, adds an API that allows apps to share Android Keystore keys, incorporates the latest version of the Privacy Sandbox on Android, introduces a new behavior during the companion device pairing flow to protect the user's location privacy, and allows a user to easily select from and limit access to app-owned shared media in the photo picker.
Local network permission testing
Android 16 allows your app to test the upcoming local network permission feature, which will require your app to be granted NEARBY_WIFI_DEVICES permission. This change will be enforced in a future Android major release.
An Android built for everyone
Android 16 adds features such as Auracast broadcast audio with compatible LE Audio hearing aids, along with accessibility improvements:
- extending TtsSpan with TYPE_DURATION,
- a new list-based API within AccessibilityNodeInfo,
- improved support for expandable elements using setExpandedState,
- RANGE_TYPE_INDETERMINATE for indeterminate ProgressBar widgets,
- AccessibilityNodeInfo getChecked and setChecked(int) methods that support a "partially checked" state,
- setSupplementalDescription so you can provide text for a ViewGroup without overriding information from its children, and
- setFieldRequired so apps can tell an accessibility service that input to a form field is required.
Outline text for maximum text contrast
Android 16 introduces outline text, replacing high contrast text, which draws a larger contrasting area around text to greatly improve legibility, along with new AccessibilityManager APIs to allow your apps to check or register a listener to see if this mode is enabled.

Get your apps, libraries, tools, and game engines ready!
If you develop an SDK, library, tool, or game engine, it's even more important to prepare any necessary updates now to prevent your downstream app and game developers from being blocked by compatibility issues and allow them to target the latest SDK features. Please let your developers know if updates to your SDK are needed to fully support Android 16.
Testing involves installing your production app, or a test app that uses your library or engine, onto a device or emulator running Android 16 (via Google Play or other means). Work through all your app's flows and look for functional or UI issues. Review the behavior changes to focus your testing. Each release of Android contains platform changes that improve privacy, security, and overall user experience, and these changes can affect your apps. Here are several changes to focus on that apply, even if you aren't yet targeting Android 16:
- JobScheduler: JobScheduler quotas are enforced more strictly in Android 16; enforcement will occur if a job executes while the app is on top, when a foreground service is running, or in the active standby bucket. setImportantWhileForeground is now a no-op. The new stop reason STOP_REASON_TIMEOUT_ABANDONED occurs when we detect that the app can no longer stop the job.
- Broadcasts: Ordered broadcasts using priorities only work within the same process. Use another IPC if you need cross-process ordering.
- ART: If you use reflection, JNI, or any other means to access Android internals, your app might break. This is never a best practice. Test thoroughly.
- Intents: Android 16 has stronger security against Intent redirection attacks. Test your Intent handling, and only opt-out of the protections if absolutely necessary.
- 16KB Page Size: If your app isn't 16KB-page-size ready, you can use the new compatibility mode flag, but we recommend migrating to 16KB for best performance.
- Accessibility: announceForAccessibility is deprecated; use the recommended alternatives. Make sure to test with the new outline text feature.
- Bluetooth: Android 16 improves Bluetooth bond loss handling that impacts the way re-pairing occurs.
Other changes that will be impactful once your app targets Android 16:
- User Experience: Changes include the removal of edge-to-edge opt-out, required migration or opt-out for predictive back, and the disabling of elegant font APIs.
- Core Functionality: Optimizations have been made to fixed-rate work scheduling.
- Large Screen Devices: Orientation, resizability, and aspect ratio restrictions will be ignored. Ensure your layouts support all orientations across a variety of aspect ratios to adapt to different surfaces.
- Health and Fitness: Changes have been implemented for health and fitness permissions.
Get your app ready for the future:
- Local network protection: Consider testing your app with the upcoming Local Network Protection feature. It will give users more control over which apps can access devices on their local network in a future Android major release.
Remember to thoroughly exercise libraries and SDKs that your app is using during your compatibility testing. You may need to update to current SDK versions or reach out to the developer for help if you encounter any issues.
Once you've published the Android 16-compatible version of your app, you can start the process to update your app's targetSdkVersion. Review the behavior changes that apply when your app targets Android 16 and use the compatibility framework to help quickly detect issues.
Get started with Android 16
Your Pixel device should get Android 16 shortly if you haven't already been on the Android Beta. If you don't have a Pixel device, you can use the 64-bit system images with the Android Emulator in Android Studio. If you are currently on Android 16 Beta 4.1 and have not yet taken an Android 16 QPR1 beta, you can opt out of the program and you will then be offered the release version of Android 16 over the air.
For the best development experience with Android 16, we recommend that you use the latest Canary build of Android Studio Narwhal. Once you're set up, here are some of the things you should do:
- Test your current app for compatibility, learn whether your app is affected by changes in Android 16, and install your app onto a device or Android Emulator running Android 16 and extensively test it.
Thank you again to everyone who participated in our Android developer preview and beta program. We're looking forward to seeing how your apps take advantage of the updates in Android 16, and have plans to bring you updates in a fast-paced release cadence going forward.
For complete information on Android 16 please visit the Android 16 developer site.
10 Jun 2025 6:00pm GMT
20 May 2025
Android Developers Blog
Announcing Kotlin Multiplatform Shared Module Template
Posted by Ben Trengrove - Developer Relations Engineer, Matt Dyor - Product Manager
To empower Android developers, we're excited to announce Android Studio's new Kotlin Multiplatform (KMP) Shared Module Template. This template was specifically designed to allow developers to use a single codebase and apply business logic across platforms. More specifically, developers will be able to add shared modules to existing Android apps and share the business logic across their Android and iOS applications.
This makes it easier for Android developers to craft, maintain, and most importantly, own the business logic. The KMP Shared Module Template is available within Android Studio when you create a new module within a project.

A single code base for business logic
Most developers have grown accustomed to maintaining different code bases, platform to platform. In the past, whenever there's an update to the business logic, it must be carefully updated in each codebase. But with the KMP Shared Module Template:
- Developers can write once and publish the business logic to wherever they need it.
- Engineering teams can do more faster.
- User experiences are more consistent across the entire audience, regardless of platform or form factor.
- Releases are better coordinated and launched with fewer errors.
Customers and developer teams who adopt the KMP Shared Module Template should expect greater ROI from mobile teams, who can spend more of their attention on delighting users and less on worrying about inconsistent code.
KMP enthusiasm
The Android developer community remains very excited about KMP, especially after Google I/O 2024 where Google announced official support for shared logic across Android and iOS. We have seen continued momentum and enthusiasm from the community. For example, there are now over 1,500 KMP libraries listed on JetBrains' klibs.io.
Our customers are excited because KMP has made Android developers more productive. Consistently, Android developers have said that they want solutions that allow them to share code more easily and they want tools which boost productivity. This is why we recommend KMP; KMP simultaneously delivers a great experience for Android users while boosting ROI for the app makers. The KMP Shared Module Template is the latest step towards a developer ecosystem where user experience is consistent and applications are updated seamlessly.
Large scale KMP adoptions
This KMP Shared Module Template is new, but KMP more broadly is a maturing technology with several large-scale migrations underway. In fact, KMP has matured enough to support mission critical applications at Google. Google Docs, for example, is now running KMP in production on iOS with runtime performance on par or better than before. Beyond Google, Stone's 130 mobile developers are sharing over 50% of their code, allowing existing mobile teams to ship features approximately 40% faster to both Android and iOS.
KMP was designed for Android development
As always, we've designed the Shared Module Template with the needs of Android developer teams in mind. Making the KMP Shared Module Template part of the native Android Studio experience allows developers to efficiently add a shared module to an existing Android application and immediately start building shared business logic that leverages several KMP-ready Jetpack libraries including Room, SQLite, and DataStore to name just a few.
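For illustration (all names below are placeholders), business logic in the shared module's commonMain source set is plain Kotlin that compiles unchanged for both the Android target and the iOS framework:

// shared/src/commonMain/kotlin/com/example/shared/PriceCalculator.kt
package com.example.shared

data class LineItem(val unitPriceCents: Long, val quantity: Int)

class PriceCalculator {
    // Pure Kotlin: no Android or iOS dependencies, so the same logic is
    // shared across platforms and written (and tested) only once.
    fun totalCents(items: List<LineItem>, discountPercent: Int = 0): Long {
        val subtotal = items.sumOf { it.unitPriceCents * it.quantity }
        return subtotal - (subtotal * discountPercent / 100)
    }
}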
Come check it out at KotlinConf
Releasing Android Studio's KMP Shared Module Template marks a significant step toward empowering Android development teams to innovate faster, to efficiently manage business logic, and to build high-quality applications with greater confidence. It means that Android developers can be responsible for the code that drives the business logic for every app across Android and iOS. We're excited to bring Shared Module Template to KotlinConf in Copenhagen, May 21 - 23.

Get started with KMP Shared Module Template
To get started, you'll need the latest edition of Android Studio. In your Android project, the Shared Module Template is available when you create a new module. Click "File" > "New" > "New Module", select "Kotlin Multiplatform Shared Module", and you are ready to add a KMP Shared Module to your Android app.
We appreciate any feedback on things you like or features you would like to see. If you find a bug, please report the issue. Remember to also follow us on X, LinkedIn, Blog, or YouTube for more Android development updates!
20 May 2025 10:00pm GMT
16 things to know for Android developers at Google I/O 2025
Posted by Matthew McCullough - VP of Product Management, Android Developer
Today at Google I/O, we announced the many ways we're helping you build excellent, adaptive experiences, and helping you stay more productive through updates to our tooling that put AI at your fingertips and throughout your development lifecycle. Here's a recap of 16 of our favorite announcements for Android developers; you can also see what was announced last week in The Android Show: I/O Edition. And stay tuned over the next two days as we dive into all of the topics in more detail!
Building AI into your Apps
1: Building intelligent apps with Generative AI
Generative AI enhances the app experience by making apps intelligent, personalized, and agentic. This year, we announced new ML Kit GenAI APIs using Gemini Nano for common on-device tasks like summarization, proofreading, rewriting, and image description. We also provided capabilities for developers to harness more powerful models such as Gemini Pro, Gemini Flash, and Imagen via Firebase AI Logic for more complex use cases like image generation and processing extensive data across modalities, including bringing AI to life in Android XR, and a new AI sample app, Androidify, that showcases how these APIs can transform your selfies into unique Android robots! To start building intelligent experiences by leveraging these new capabilities, explore the developer documentation, sample apps, and watch the overview session to choose the right solution for your app.
New experiences across devices
2: One app, every screen: think adaptive and unlock 500 million screens
Mobile Android apps form the foundation across phones, foldables, tablets, and ChromeOS, and this year we're helping you bring them to cars and XR and expanding usage with desktop windowing and connected displays. This expansion means tapping into an ecosystem of 500 million devices - a significant opportunity to engage more users when you think adaptive, building a single mobile app that works across form factors. Resources, including Compose Layouts library and Jetpack Navigation updates, help make building these dynamic experiences easier than before. You can see how Peacock, NBCUniversal's streaming service (available in the US), is building adaptively to meet users where they are.
3: Material 3 Expressive: design for intuition and emotion
The new Material 3 Expressive update provides tools to enhance your product's appeal by harnessing emotional UX, making it more engaging, intuitive, and desirable for users. Check out the I/O talk to learn more about expressive design and how it inspires emotion, clearly guides users toward their goals, and offers a flexible and personalized experience.

4: Smarter widgets, engaging live updates
Measure the return on investment of your widgets (available soon) and easily create personalized widget previews with Glance 1.2. Promoted Live Updates notify users of important ongoing notifications and come with a new Progress Style standardized template.

5: Enhanced Camera & Media: low light boost and battery savings
This year's I/O introduces several camera and media enhancements. These include a software low light boost for improved photography in dim lighting and native PCM offload, allowing the DSP to handle more audio playback processing, thus conserving user battery. Explore our detailed sessions on built-in effects within CameraX and Media3 for further information.
6: Build next-gen app experiences for Cars
We're launching expanded opportunities for developers to build in-car experiences, including new Gemini integrations, support for more app categories like Games and Video, and enhanced capabilities for media and communication apps via the Car App Library and new APIs. Alongside updated car app quality tiers and simplified distribution, we'll soon be providing improved testing tools like Android Automotive OS on Pixel Tablet and Firebase Test Lab access to help you bring your innovative apps to cars. Learn more from our technical session and blog post on new in-car app experiences.
7: Build for Android XR's expanding ecosystem with Developer Preview 2 of the SDK
We announced Android XR in December, and today at Google I/O we shared a bunch of updates coming to the platform including Developer Preview 2 of the Android XR SDK plus an expanding ecosystem of devices: in addition to the first Android XR headset, Samsung's Project Moohan, you'll also see more devices including a new portable Android XR device from our partners at XREAL. There's lots more to cover for Android XR: Watch the Compose and AI on Android XR session, and the Building differentiated apps for Android XR with 3D content session, and learn more about building for Android XR.

8: Express yourself on Wear OS: meet Material Expressive on Wear OS 6
This year we are launching Wear OS 6: the most powerful and expressive version of Wear OS. Wear OS 6 features Material 3 Expressive, a new UI design with personalized visuals and motion for user creativity, coming to Wear, Android, and Google apps later this year. Developers gain access to Material 3 Expressive on Wear OS by using the new Jetpack libraries: Wear Compose Material 3, which provides components for apps, and Wear ProtoLayout Material 3, which provides components and layouts for tiles. Get started with Material 3 libraries and other updates on Wear.

9: Engage users on Google TV with excellent TV apps
You can leverage more resources within Compose's core and Material libraries with the stable release of Compose for TV, empowering you to build excellent adaptive UIs across your apps. We're also thrilled to share exciting platform updates and developer tools designed to boost app engagement, including bringing Gemini capabilities to TV in the fall, opening enrollment for our Video Discovery API, and more.
Developer productivity
10: Build beautiful apps faster with Jetpack Compose
Compose is our big bet for UI development. The latest stable BOM release provides the features, performance, stability, and libraries that you need to build beautiful adaptive apps faster, so you can focus on what makes your app valuable to users.

11: Kotlin Multiplatform: new Shared Template lets you build across platforms, easily
Kotlin Multiplatform (KMP) enables teams to reach new audiences across Android and iOS with less development time. We've released a new Android Studio KMP shared module template, updated Jetpack libraries and new codelabs (Getting started with Kotlin Multiplatform and Migrating your Room database to KMP) to help developers who are looking to get started with KMP. Shared module templates make it easier for developers to craft, maintain, and own the business logic. Read more on what's new in Android's Kotlin Multiplatform.
12: Gemini in Android Studio: AI Agents to help you work
Gemini in Android Studio is the AI-powered coding companion that makes Android developers more productive at every stage of the dev lifecycle. In March, we introduced Image to Code to bridge the gap between UX teams and software engineers by intelligently converting design mockups into working Compose UI code. And today, we previewed new agentic AI experiences, Journeys for Android Studio and Version Upgrade Agent. These innovations make it easier to build and test code. You can read more about these updates in What's new in Android development tools.
13: Android Studio: smarter with Gemini
In this latest release, we're empowering devs with AI-driven tools like Gemini in Android Studio, streamlining UI creation, making testing easier, and ensuring apps are future-proofed in our ever-evolving Android ecosystem. These innovations accelerate development cycles, improve app quality, and help you stay ahead in a dynamic mobile landscape. To take advantage, upgrade to the latest Studio release. You can read more about these innovations in What's new in Android development tools.

And the latest on driving business growth
14: What's new in Google Play
Get ready for exciting updates from Play designed to boost your discovery, engagement and revenue! Learn how we're continuing to become a content-rich destination with enhanced personalization and fresh ways to showcase your apps and content. Plus, explore powerful new subscription features designed to streamline checkout and reduce churn. Read I/O 2025: What's new in Google Play to learn more.

15: Start migrating to Play Games Services v2 today
Play Games Services (PGS) connects over 2 billion gamer profiles on Play, powering cross-device gameplay, personalized gaming content and rewards for your players throughout the gaming journey. We are moving PGS v1 features to v2 with more advanced features and an easier integration path. Learn more about the migration timeline and new features.
16: And of course, Android 16
We unpacked some of the latest features coming to users in Android 16, which we've been previewing with you for the last few months. If you haven't already, make sure to test your apps with the latest Beta of Android 16. Android 16 includes Live Updates, professional media and camera features, desktop windowing and connected displays, major accessibility enhancements and much more.
Check out all of the Android and Play content at Google I/O
This was just a preview of some of the cool updates for Android developers at Google I/O, but stay tuned to Google I/O over the next two days as we dive into a range of Android developer topics in more detail. You can check out the What's New in Android and the full Android track of sessions, and whether you're joining in person or around the world, we can't wait to engage with you!
Explore this announcement and all Google I/O 2025 updates on io.google starting May 22.
20 May 2025 6:03pm GMT
What’s new in Wear OS 6
Posted by Chiara Chiappini - Developer Relations Engineer
This year, we're excited to introduce Wear OS 6: the most power-efficient and expressive version of Wear OS yet.
Wear OS 6 introduces the new design system we call Material 3 Expressive. It features a major refresh with visual and motion components designed to give users an experience with more personalization. The new design offers a great level of expression to meet user demand for experiences that are modern, relevant, and distinct. Material 3 Expressive is coming to Wear OS, Android, and all your favorite Google apps on these devices later this year.
The good news is that you don't need to compromise battery for beauty: thanks to Wear OS platform optimizations, watches updating from Wear OS 5 to Wear OS 6 can see up to 10% improvement in battery life.1
Wear OS 6 developer preview
Today we're releasing the Developer Preview of Wear OS 6, the next version of Google's smartwatch platform, based on Android 16.
Wear OS 6 brings a number of developer-facing changes, such as refining the always-on display experience. Check out what's changed and try the new Wear OS 6 emulator to test your app for compatibility with the new platform version.
Material 3 Expressive on Wear OS

Material 3 Expressive for the watch is fully optimized for the round display. We recommend developers embrace the new design system in their apps and tiles. To help you adopt Material 3 Expressive in your app, we have begun releasing new design guidance for Wear OS, along with corresponding Figma design kits.
As a developer, you can access Material 3 Expressive on Wear OS using the new Jetpack libraries:
- Wear Compose Material 3 that provides components for apps.
- Wear ProtoLayout Material 3 that provides components and layouts for tiles.
These two libraries provide implementations for the components catalog that adheres to the Material 3 Expressive design language.
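To try them out, a module's build.gradle.kts would add dependencies along these lines; the artifact coordinates are shown as commonly published, and the version placeholders should be replaced with the current releases noted in the release notes referenced below.

dependencies {
    implementation("androidx.wear.compose:compose-material3:<latest-version>")          // components for apps
    implementation("androidx.wear.protolayout:protolayout-material3:<latest-version>")  // components and layouts for tiles
}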
Make it personal with richer color schemes using themes

The Wear Compose Material 3 and Wear Protolayout Material 3 libraries provide updated and extended color schemes, typography, and shapes to bring both depth and variety to your designs. Additionally, your tiles now align with the system font by default (on Wear OS 6+ devices), offering a more cohesive experience on the watch.
Both libraries introduce dynamic color theming, which automatically generates a color theme for your app or tile to match the colors of the watch face of Pixel watches.
Make it more glanceable with new tile components
Tiles now support a new framework and a set of components that embrace the watch's circular form factor. These components make tiles more consistent and glanceable, so users can more easily take swift action on the information included in them.
We've introduced a 3-slot tile layout to improve visual consistency in the Tiles carousel. This layout includes a title slot, a main content slot, and a bottom slot, designed to work across a range of different screen sizes:

Highlight user actions and key information with components optimized for round screen
The new Wear OS Material 3 components automatically adapt to larger screen sizes, building on the Large Display support added as part of Wear OS 5. Additionally, components such as Buttons and Lists support shape morphing on apps.
The following sections highlight some of the most exciting changes to these components.
Embrace the round screen with the Edge Hugging Button
We introduced a new EdgeButton for apps and tiles with an iconic design pattern that maximizes the space within the circular form factor, hugs the edge of the screen, and comes in 4 standard sizes.
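A hedged sketch of using it from Wear Compose Material 3 (parameters beyond onClick and the content lambda may differ):

import androidx.compose.runtime.Composable
import androidx.wear.compose.material3.EdgeButton
import androidx.wear.compose.material3.Text

@Composable
fun ConfirmEdgeButton(onConfirm: () -> Unit) {
    // EdgeButton hugs the bottom edge of the round display.
    EdgeButton(onClick = onConfirm) {
        Text("Confirm")
    }
}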

Fluid navigation through lists using new indicators
The new TransformingLazyColumn from the Foundation library makes expressive motion easy, with content that fluidly traces the edges of the round display. Developers can customize the collapsing behavior of the list when scrolling to the top, bottom, and both sides of the screen. For example, components like Cards can scale down as they get closer to the top of the screen.

Material 3 Expressive also includes a ScrollIndicator that features a new visual and motion design to make it easier for users to visualize their progress through a list. The ScrollIndicator is displayed by default when you use a TransformingLazyColumn and ScreenScaffold.

Lastly, you can now use segments with the new ProgressIndicator, which is now available as a full-screen component for apps and as a small-size component for both apps and tiles.

To learn more about the new features and see the full list of updates, see the release notes of the latest beta release of the Wear Compose and Wear Protolayout libraries. Check out the migration guidance for apps and tiles on how to upgrade your existing apps, or try one of our codelabs if you want to start developing using Material 3 Expressive design.
Watch Faces
With Wear OS 6 we are launching updates for watch face developers:
- New options for customizing the appearance of your watch face using version 4 of Watch Face Format, such as animated state transitions from ambient to interactive and photo watch faces.
- A new API for building watch face marketplaces.
Learn more about what's new in Watch Face updates.
Look for more information about the general availability of Wear OS 6 later this year.
Library updates
ProtoLayout
Since our last major release, we've improved capabilities and the developer experience of the Tiles and ProtoLayout libraries to address feedback we received from developers. Some of these enhancements include:
- New Kotlin-only protolayout-material3 library adds support for enhanced visuals: Lottie animations (in addition to the existing animation capabilities), more gradient types, and new arc line styles.
- Developers can now write more idiomatic Kotlin, with APIs refined to better align with Jetpack Compose, including type-safe builders and an improved modifier syntax.
The example below shows how to display a layout with a text on a Tile using new enhancements:
// returns a LayoutElement for use in onTileRequest()
materialScope(context, requestParams.deviceConfiguration) {
    primaryLayout(
        mainSlot = {
            text(
                text = "Hello, World!".layoutString,
                typography = BODY_LARGE,
            )
        }
    )
}
For more information, see the migration instructions.
Credential Manager for Wear OS
The CredentialManager API is now available on Wear OS, starting with Google Pixel Watch devices running Wear OS 5.1. It introduces passkeys to Wear OS with a platform-standard authentication UI that is consistent with the experience on mobile.
The Credential Manager Jetpack library provides developers with a unified API that simplifies and centralizes their authentication implementation. Developers with an existing implementation on another form factor can use the same CredentialManager code, and most of the same supporting code to fulfill their Wear OS authentication workflow.
Credential Manager provides integration points for passkeys, passwords, and Sign in With Google, while also allowing you to keep your other authentication solutions as backups.
Users will benefit from a consistent, platform-standard authentication UI; the introduction of passkeys and other passwordless authentication methods, and the ability to authenticate without their phone nearby.
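Since the androidx.credentials API surface is the same as on phones, a minimal sign-in request might look like this; the passkey request JSON comes from your own server (placeholder parameter below).

import android.content.Context
import androidx.credentials.CredentialManager
import androidx.credentials.GetCredentialRequest
import androidx.credentials.GetPasswordOption
import androidx.credentials.GetPublicKeyCredentialOption
import androidx.credentials.PasswordCredential
import androidx.credentials.PublicKeyCredential

suspend fun signIn(context: Context, passkeyRequestJson: String) {
    val credentialManager = CredentialManager.create(context)
    val request = GetCredentialRequest(
        listOf(
            GetPublicKeyCredentialOption(requestJson = passkeyRequestJson), // passkeys
            GetPasswordOption()                                             // passwords as a backup
        )
    )
    val result = credentialManager.getCredential(context, request)
    when (val credential = result.credential) {
        is PublicKeyCredential -> {
            // Send credential.authenticationResponseJson to your server for verification.
        }
        is PasswordCredential -> {
            // Validate credential.id and credential.password with your backend.
        }
        else -> { /* handle other or custom credential types */ }
    }
}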
Check out the Authentication on Wear OS guidance to learn more.
Richer Wear Media Controls

Devices that run Wear OS 5.1 or later support enhanced media controls. Users who listen to media content on phones and watches can now benefit from the following new media control features on their watch:
- They can fast-forward and rewind while listening to podcasts.
- They can access the playlist and controls such as shuffle, like, and repeat through a new menu.
Developers with an existing implementation of action buttons and playlist can benefit from this feature without additional effort. Check out how users will get more controls from your media app on a Google Pixel Watch device.
Start building for Wear OS 6 now
With these updates, there's never been a better time to develop an app on Wear OS. These technical resources are a great place to learn how to get started:
Earlier this year, we expanded our smartwatch offerings with Galaxy Watch for Kids, a unique, phone-free experience designed specifically for children. This launch gives families a new way to stay connected, allowing children to explore Wear OS independently with a dedicated smartwatch. Consult our developer guidance to create a Wear OS app for kids.
We're looking forward to seeing the experiences that you build on Wear OS!
Explore this announcement and all Google I/O 2025 updates on io.google starting May 22.
1 Actual battery performance varies.
20 May 2025 6:02pm GMT