Android Studio Panda 3 is now stable and ready for you to use in production. This release gives you even more control and customization over your AI-powered workflows, making it easier than ever to build high-quality Android apps.
Whether you're bringing new capabilities to an existing app or standing up a brand new app, these updates elevate your development experience by allowing your AI Agent in Android Studio to learn your specific practices and giving you granular control over its permissions.
Lastly, in addition to AI skills and Agent Mode enhancements, Android Studio Panda 3 also includes updated support for building Android apps for cars.
Here's a deep dive into what's new:
Agent skills
Create a more helpful AI agent by using agent skills in Android Studio. Agent skills are specialized instructions that teach the agent new capabilities and best practices for a specific workflow, which the agent can then leverage as needed. This significantly reduces the level of detail required for your day-to-day prompts. Agent skills work with Gemini in Android Studio or with other remote third-party LLMs you integrate into the agent framework in Android Studio.
You and members of your team can create skills that tell the agent exactly how you want to handle specific tasks in your codebase. For example, you could create a custom "code review" skill tailored to your organization's coding standards, or a custom skill to provide the agent with more information on using an in-house library.
Once you have created a skill, the agent will be able to use it automatically, or you can manually trigger it by typing @ followed by the skill name. Check out the documentation to learn more about how to create skills for your codebase, or better yet, ask your agent to help you build a new skill and it will guide you through the details!
Manually Trigger Agent Skill in Android Studio
Getting Started
To build a skill for your project, do the following:
Create a .skills directory inside your project's root folder.
Place a SKILL.md file inside this new directory.
Add a name and description to the file to define your custom workflow, and your skill is ready.
Optionally include scripts, assets, and references to provide even more guidance to your agent.
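Following the steps above, a minimal skill file might look like this. The frontmatter fields and body layout shown here are an illustrative sketch based on the name-and-description requirement, not the exact schema, so check the documentation for the authoritative format:

```markdown
---
name: code-review
description: Review Kotlin changes against our internal style guide.
---

When asked to review code in this project:
1. Verify new user-facing strings live in strings.xml, not in code.
2. Flag any blocking I/O on the main thread.
3. Reference docs/style-guide.md for naming conventions.
```

With a file like this in your project's .skills directory, the agent can apply the skill automatically or when you trigger it with @code-review.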
Agent skills in Android Studio
Manage permissions for Agent Mode
You control your codebase, and you can now be more deliberate with which data and capabilities you choose to share with AI agents. The new granular agent permissions in Android Studio let you decide exactly what agents can do for you.
When Agent Mode needs to read files, run shell commands, or access the web, it explicitly asks for your permission. We know that 'approval fatigue' is a real risk in AI workflows: when a tool asks for permission too often, it's easy to start clicking 'Allow' without fully reviewing the action. By offering granular 'Always Allow' rules for trusted operations and an optional sandbox for experimental ones, Android Studio helps you stay focused on the high-stakes decisions that actually require your manual sign-off.
Agent Permissions
Agent permissions are intuitive to set up and use. For example, granting high-level permissions automatically authorizes related sub-tools, while commands you have previously approved will run automatically without interrupting your flow. Rest assured, accessing sensitive files like SSH keys will always require your explicit sign-off.
For even more security, you can also use an optional sandbox to enforce strict, isolated control over the agent.
Agent Shell Sandbox
Empty Car App Library App template
We're making it easier to build Android apps for cars. Building apps for the car used to mean wrestling with complex configurations just to get the project to build successfully.
Now, you can accelerate your development with the new "Empty Car App Library App" template in Android Studio. This template takes care of the required boilerplate code for a driving-optimized app on both Android Auto and Android Automotive OS, saving you significant time and effort. Instead of getting bogged down in setup, you can focus on creating the best experience for your users on the road.
Getting Started
To use the new template:
Select New Project on the Welcome to Android Studio screen (or File > New > New Project from within a project).
Search for or select the Empty Car App Library App template.
Name your app and click Finish to generate your driving-optimized app.
Empty Car App Library App template
Android Studio Panda releases
Panda 3 builds on last month's AI-focused Panda 2 release. Check out the Go from prompt to working prototype with Android Studio Panda 2 post to learn more about new Android Studio features, including the AI-powered New Project Flow that takes you from prompt to prototype and the Version Upgrade Assistant that takes the toil out of updating your dependencies.
Get started
Dive in and accelerate your development. Download Android Studio Panda 3 and start exploring these powerful new agentic features today.
Posted by Matthew McCullough, VP of Product Management Android Development
Today, we are enhancing Android development with Gemma 4, our latest state-of-the-art open model designed with complex reasoning and autonomous tool-calling capabilities.
Our vision is to enable local agentic AI on Android across the entire software lifecycle, from development to production. Android supports a range of Gemma 4 models, from the most efficient ones running directly on-device in your apps to more powerful ones running on your development machine to help you build apps. We are bringing Gemma 4 to Android developers through two pillars:
Local-first agentic coding: Experience powerful, local AI code assistance with Gemma 4 in Android Studio on your development machine.
On-device intelligence: Build intelligent experiences using the ML Kit GenAI Prompt API to run Gemma 4 directly on Android device hardware.
Coding with Gemma 4 in Android Studio
When building Android apps, Android Studio can use Gemma 4 to leverage its state-of-the-art reasoning power and native support for tool use, while keeping the model and inference contained entirely on your local machine.
Gemma 4 was trained on Android development and designed with Agent Mode in mind. This means that when you select Gemma 4 as your local model, you can leverage the full suite of Agent Mode capabilities for a variety of Android development use cases, including refactoring legacy code, building an entire app or new features, and applying fixes iteratively.
Learn more about the possibilities Gemma 4 brings to your app development flow and how to get started.
Prototyping with Gemma 4 on-device
Since the introduction of Gemini Nano as the foundation model on Android, it has become available on over 140 million devices. Gemma 4 is the base model for the next generation of Gemini Nano (Gemini Nano 4) that is optimized for performance and quality on Android devices. This model is up to 4x faster than the previous version and uses up to 60% less battery.
To make it as easy as possible to preview and prototype with Gemma 4 E2B and E4B models directly on AICore-supported devices, we're launching the AICore Developer Preview. While we continue to expand the ML Kit GenAI Prompt API surface to unlock additional advanced capabilities of the model, you can already start exploring new use cases with Gemma 4 using the Prompt API.
Prepare your apps for the launch of Gemini Nano 4 on the new flagship Android devices later this year by prototyping with Gemma 4 today. Read about the upcoming features and deep dive into the AICore Developer Preview and its Gemma 4 support here.
Local agentic intelligence with Gemma 4
Running Gemma 4 locally, you can leverage its advanced reasoning and tool-calling capabilities in your entire workflow, from developing with the AI coding assistant in Android Studio to shipping intelligent features in your app with the ML Kit GenAI Prompt API. This local-first approach, available under Gemma's open Apache license, provides an alternative for developers to innovate in a privacy-centric and cost-effective manner. In a future release, we will update Android Bench to include Gemma 4 and other open models, providing the quantified data you need to navigate performance trade-offs and select the best model for your use case.
Posted by David Chou, Product Manager and Caren Chang, Developer Relations Engineer
At Google, we're committed to bringing the most capable AI models directly to the Android devices in your pocket. Today, we're thrilled to announce the release of our latest state-of-the-art open model: Gemma 4.
These models are the foundation for the next generation of Gemini Nano, so code you write today for Gemma 4 will automatically work on Gemini Nano 4-enabled devices that will be available later this year. With Gemini Nano 4, you'll benefit from our additional performance optimizations so you can ship to production across the Android ecosystem with the most efficient on-device inference.
Select the Gemini Nano 4 Fast model in the Developer Preview UI to see its blazing fast inference speed in action before you write any code
Because Gemma 4 natively supports over 140 languages, you can expect improved localized, multilingual experiences for your global audience. Furthermore, Gemma 4 offers industry-leading performance with multimodal understanding, allowing your apps to understand and process text, images, and audio. To give you the best balance of performance and efficiency, Gemma 4 on Android comes in two sizes:
E4B: Designed for higher reasoning power and complex tasks.
E2B: Optimized for maximum speed (3x faster than the E4B model!) and lower latency.
The new model is up to 4x faster than previous versions and uses up to 60% less battery. Starting today, you can experiment with improved capabilities including:
Reasoning: Chain-of-thought commands and conditional statements can now be expected to return higher quality results. For example: "Determine if the following comment for a discussion thread passes the community guidelines. The comment does not pass the community guidelines if it contains one or more of these reason_for_flag: profanity, derogatory language, hate speech. If the comment passes the community guidelines, return {true}. Otherwise, return {false, reason_for_flag}."
Math: With better math skills, the model can now more accurately answer questions. For example: "If I get 26 paychecks per year, how much should I contribute each paycheck to reach my savings goal of $10,000 over the course of a year?"
Time understanding: The model is now more capable when reasoning about time, making it more accurate for use cases that involve calendars, reminders, and alarms. For example: "The event is at 6PM on August 18th, and a reminder should be sent out 10 hours before the event. Return the time and date the reminder should be sent."
Image understanding: Use cases that involve OCR (Optical Character Recognition) - such as chart understanding, visual data extraction, and handwriting recognition - will now return more accurate results.
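The math and time examples above can be sanity-checked with plain Kotlin. The helpers below are purely illustrative (they are not part of the Prompt API) and just mirror what the model is asked to compute:

```kotlin
import java.time.LocalDateTime
import java.time.Month

// Hypothetical helper mirroring the math prompt: split a yearly savings
// goal evenly across 26 paychecks, rounded to cents.
fun perPaycheckContribution(goal: Double, paychecks: Int): Double =
    Math.round(goal / paychecks * 100.0) / 100.0

// Hypothetical helper mirroring the time prompt: a reminder sent a fixed
// number of hours before an event.
fun reminderTime(event: LocalDateTime, hoursBefore: Long): LocalDateTime =
    event.minusHours(hoursBefore)

fun main() {
    println(perPaycheckContribution(10_000.0, 26)) // ≈ 384.62 per paycheck
    // Event at 6PM on August 18th, reminder 10 hours earlier: 8AM the same day.
    println(reminderTime(LocalDateTime.of(2026, Month.AUGUST, 18, 18, 0), 10))
}
```

Having a deterministic reference like this is a convenient way to evaluate the model's answers while you refine your prompts.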
Join the Developer Preview today to download the preview models and start building next-generation features right away.
Start building with Gemma 4
Start testing the model
You can try out the model without code by following the Developer Preview guide. If you want to jump straight into integrating these models with your existing workflow, we've made that seamless. Head over to Android Studio to refine your prompt and build with the familiar ML Kit Prompt API. We've introduced a new ability to specify a model, allowing you to target the E2B (fast) or E4B (full) variants for testing.
// Define the configuration with a specific track and preference
val previewFullConfig = generationConfig {
    modelConfig = ModelConfig {
        releaseTrack = ModelReleaseTrack.PREVIEW
        preference = ModelPreference.FULL
    }
}

// Initialize the GenerativeModel with the configuration
val previewModel = GenerativeModel.getClient(previewFullConfig)

// Verify that the specific preview model is available
val previewModelStatus = previewModel.checkStatus()
if (previewModelStatus == FeatureStatus.AVAILABLE) {
    // Proceed with inference
    val response = previewModel.generateContent(
        "If I get 26 paychecks per year, how much should I contribute each paycheck " +
        "to reach my savings goal of \$10k over the course of a year? Return only the amount."
    )
} else {
    // Handle the case where the preview model is not available
    // (e.g., print out log statements)
}
What to expect during the Developer Preview
The goal of this Developer Preview is to give you a head start on refining prompt accuracy and exploring new use cases for your specific apps.
We will be making several updates throughout the preview period, including support for tool calling, structured output, system prompts, and thinking mode in Prompt API, making it easier to take full advantage of the new capabilities and significant performance optimizations in Gemma 4.
The preview models are available for testing on AICore-enabled devices. These models will run on the latest generation of specialized AI accelerators from Google, MediaTek, and Qualcomm Technologies. On other devices, the models will initially run on a CPU implementation that is not representative of final production performance. If your device is not AICore-enabled, you can also test these models via the AI Edge Gallery app. We'll provide support for more devices in the future.
Every developer's AI workflow and needs are unique, and it's important to be able to choose how AI helps your development. In January, we introduced the ability to choose any local or remote AI model to power AI functionality in Android Studio, and today, we're announcing the availability of Gemma 4 for AI coding assistance in Android Studio. This new local model trained on Android development provides the best of both worlds: the privacy and cost-efficiency of on-device processing alongside state-of-the-art reasoning and tool-calling capabilities.
AI assistance, locally delivered
By running locally on your machine, Gemma 4 gives you AI code assistance that doesn't require an internet connection or an API key for its core operations. Key benefits include:
Privacy and security: Your code stays on your machine. Gemma 4 processes all Agent Mode requests locally, making it an ideal choice for developers working with data privacy requirements or in secure corporate environments.
Cost efficiency: Run complex agentic workflows without worrying about hitting quotas. Gemma 4 is optimized to run efficiently on modern development hardware, utilizing local GPU and RAM to provide snappy, responsive assistance.
Offline availability: Use the agent to write code even when you don't have an internet connection.
State-of-the-art reasoning: Gemma 4 delivers best-in-class reasoning, capable of complex multi-step coding tasks in Agent Mode.
Powerful agentic coding
Gemma 4 was trained for Android development with agentic tool calling capabilities. When you select Gemma 4 as your local model, you can leverage Agent Mode for a variety of development use cases, such as:
Designing new features: Developers can ask the agent to build a new feature or an entire app with commands like "build a calculator app", and the agent will not only generate the UI code but also follow Android best practices, like writing in Kotlin and using Jetpack Compose.
Refactoring: You can give high-level commands such as "Extract all hardcoded strings and migrate them to strings.xml." The agent will scan your codebase, identify instances requiring changes, and apply the edits across multiple files simultaneously.
Bug fixing and build resolution: If a project fails to build or has persistent lint errors, you can prompt the agent to "Build my project and fix any errors." The agent will navigate to the offending code and iteratively apply fixes until the build is successful.
Recommended hardware requirements
For Android app development, we recommend the Gemma 26B MoE model on machines that meet the minimum hardware requirements below. Total RAM needed includes both Android Studio and Gemma.
Model           Total RAM needed    Storage needed
Gemma E2B       8 GB                2 GB
Gemma E4B       12 GB               4 GB
Gemma 26B MoE   24 GB               17 GB
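To make the table concrete, here is a small illustrative helper (not an Android Studio API; the names are invented for this sketch) that picks the largest Gemma variant whose requirements fit a given machine:

```kotlin
// Illustrative model-selection sketch based on the hardware table above.
data class ModelSpec(val name: String, val ramGb: Int, val storageGb: Int)

// Specs ordered from smallest to largest, matching the table.
val specs = listOf(
    ModelSpec("Gemma E2B", 8, 2),
    ModelSpec("Gemma E4B", 12, 4),
    ModelSpec("Gemma 26B MoE", 24, 17),
)

// Pick the largest variant that fits the available RAM and storage.
fun pickModel(availableRamGb: Int, availableStorageGb: Int): ModelSpec? =
    specs.lastOrNull { it.ramGb <= availableRamGb && it.storageGb <= availableStorageGb }

fun main() {
    println(pickModel(16, 100)?.name) // picks the E4B variant on a 16 GB machine
}
```

On a 32 GB machine the same logic lands on the 26B MoE model, in line with the recommendation above.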
Get started
To get started, ensure you have the latest version of Android Studio installed.
Install an LLM provider, such as LM Studio or Ollama, on your local computer.
In Settings > Tools > AI > Model Providers, add your LM Studio or Ollama instance.
Download the Gemma 4 model from Ollama or LM Studio. Refer to hardware requirements for model size selection.
In Agent Mode, select Gemma 4 as your active model.
For a detailed walkthrough on configuration, check out the official documentation on how to use a local model.
We are excited to see how Gemma 4 enables more private, secure, and powerful development workflows. As always, your feedback is essential as we continue to refine the AI experience in Android Studio. If you find a bug or issue, please file an issue. You can also be part of our vibrant Android developer community on LinkedIn, YouTube, or X. Happy coding!
Posted by Michael Stillwell, Developer Relations Engineer and Dimitris Kosmidis, Product Manager, Wear OS
64-bit architectures provide performance improvements and a foundation for future innovation, delivering faster and richer experiences for your users. Android has supported 64-bit CPUs since Android 5, and today we are extending the 64-bit requirement to Wear OS. This aligns Wear OS with recent updates for Google TV and other form factors, building on the 64-bit requirement first introduced for mobile in 2019. This blog provides guidance to help you prepare your apps to meet these new requirements.
The 64-bit requirement: timeline for Wear OS developers
Starting September 15, 2026:
All new apps and app updates that include native code will be required to provide 64-bit versions in addition to 32-bit versions when publishing to Google Play.
Google Play will start blocking the upload of non-compliant apps to the Play Console.
We are not making changes to our policy on 32-bit support, and Google Play will continue to deliver apps to existing 32-bit devices.
The vast majority of Wear OS developers have already made this shift, with 64-bit compliant apps already available. For the remaining apps, we expect the effort to be small.
Preparing for the 64-bit requirement
Many apps are written entirely in non-native code (i.e. Kotlin or Java) and do not need any code changes. However, it is important to note that even if you do not write native code yourself, a dependency or SDK could be introducing it into your app, so you still need to check whether your app includes native code.
Assess your app
Inspect your APK or app bundle for native code using the APK Analyzer in Android Studio.
Look for .so files within the lib folder. For ARM devices, 32-bit libraries are located in lib/armeabi-v7a, while the 64-bit equivalent is lib/arm64-v8a.
Ensure parity: The goal is to ensure that your app runs correctly in a 64-bit-only environment. While specific configurations may vary, for most apps this means that for each native 32-bit architecture you support, you should include the corresponding 64-bit architecture by providing the relevant .so files for both ABIs.
Upgrade SDKs: If you only have 32-bit versions of a third-party library or SDK, reach out to the provider for a 64-bit compliant version.
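The parity check above can be automated. The sketch below (an illustrative helper, not part of any Android tooling API) takes the lib/ entries you'd see in APK Analyzer and flags 32-bit ARM libraries that lack a 64-bit counterpart:

```kotlin
// Given the lib/ entries from an APK or app bundle, return the names of
// 32-bit ARM native libraries that have no matching 64-bit build.
fun missing64BitLibs(libPaths: List<String>): List<String> {
    val names32 = libPaths
        .filter { it.startsWith("lib/armeabi-v7a/") }
        .map { it.substringAfterLast('/') }
    val names64 = libPaths
        .filter { it.startsWith("lib/arm64-v8a/") }
        .map { it.substringAfterLast('/') }
        .toSet()
    return names32.filterNot { it in names64 }
}

fun main() {
    val libs = listOf(
        "lib/armeabi-v7a/libfoo.so",
        "lib/armeabi-v7a/libbar.so",
        "lib/arm64-v8a/libfoo.so",
    )
    println(missing64BitLibs(libs)) // libbar.so lacks a 64-bit build
}
```

An empty result means every 32-bit library ships with its arm64-v8a equivalent, which is the parity goal described above.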
How to test 64-bit compatibility
The 64-bit version of your app should offer the same quality and feature set as the 32-bit version. The Wear OS Android Emulator can be used to verify that your app behaves and performs as expected in a 64-bit environment.
Note: Since Wear OS apps are required to target Wear OS 4 or higher to be submitted to Google Play, you are likely already testing on these newer, 64-bit only images.
When testing, pay attention to native code loaders such as SoLoader or older versions of OpenSSL, which may require updates to function correctly on 64-bit only hardware.
Next steps
We are announcing this requirement now to give developers a six-month window to bring their apps into compliance before enforcement begins in September 2026. For more detailed guidance on the transition, please refer to our in-depth documentation on supporting 64-bit architectures.
This transition marks an exciting step for the future of Wear OS and the benefits that 64-bit compatibility will bring to the ecosystem.
Media3 1.10 includes new features, bug fixes, and improvements, including Material3-based playback widgets, expanded format support in ExoPlayer, and improved speed adjustment when exporting media with Transformer. Read on to find out more, and check out the full release notes for a comprehensive list of changes.
Playback UI and Compose
We are continuing to expand the media3-ui-compose-material3 module to help you build Compose UIs for playback.
We've added a new Player Composable that combines a ContentFrame with customizable playback controls, giving you an out-of-the-box player widget with a modern UI.
This release also adds a ProgressSlider Composable for displaying player progress and performing seeks using dragging and tapping gestures. For playback speed management, a new PlaybackSpeedControl is available in the base media3-ui-compose module, alongside a styled PlaybackSpeedToggleButton in the Material 3 module.
We'll continue working on new additions like track selection utils, subtitle support and more customization options in the upcoming Media3 releases. We're eager to hear your feedback so please share your thoughts on the project issue tracker.
PlayerComposable in the Media3 Compose demo app
Playback feature enhancements
Media3 1.10 includes a variety of additions and improvements across the playback modules:
Format support: ExoPlayer now supports extracting Dolby Vision Profile 10 and Versatile Video Coding (VVC) tracks in MP4 containers, and we've introduced MPEG-H UI manager support in the decoder_mpegh extension. The IAMF extension now seamlessly supports binaural output, either through the decoder via iamf_tools or through the Android OS Spatializer, with new logic to match the output layout of the speakers.
Ad playback: Improved reliability, improved HLS interstitial support for X-PLAYOUT-LIMIT and X-SNAP, and, with the latest IMA SDK dependency, control over whether ad click-through URLs open in custom tabs with setEnableCustomTabs.
HLS: ExoPlayer now allows location fallback upon encountering load errors if redundant streams from different locations are available.
Session: MediaSessionService now extends LifecycleService, allowing apps to access the lifecycle scoping of the service.
One of our key focus areas this year is on playback efficiency and performance. Media3 1.10 includes experimental support for scheduling the core playback loop in a more efficient way. You can try this out by enabling experimentalSetDynamicSchedulingEnabled() via the ExoPlayer.Builder. We plan to make further improvements in future releases so stay tuned!
Media editing and Transformer
For developers building media editing experiences, we've made speed adjustments more robust. EditedMediaItem.Builder.setFrameRate() can now set a maximum output frame rate for video. This is particularly helpful for controlling output size and maintaining performance when increasing media speed with setSpeed().
New modules for frame extraction and applying Lottie effects
In this release we've split some functionality into new modules to reduce the scope of some dependencies:
FrameExtractor has been removed from the main media3-inspector module, so please migrate your code to use the new media3-inspector-frame module and update your imports to androidx.media3.inspector.frame.FrameExtractor.
We have also moved the LottieOverlay effect to a separate media3-effect-lottie module. As a reminder, this gives you a straightforward way to apply vector-based Lottie animations directly to video frames.
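If you use these classes, the migration mostly comes down to dependency and import changes. A sketch of the Gradle side (the version string is illustrative; use the 1.10 artifact version that matches your setup):

```kotlin
// build.gradle.kts — add the new split-out Media3 modules.
dependencies {
    // FrameExtractor now lives in the inspector-frame module...
    implementation("androidx.media3:media3-inspector-frame:1.10.0")
    // ...and the LottieOverlay effect in the effect-lottie module.
    implementation("androidx.media3:media3-effect-lottie:1.10.0")
}
```

Then update your imports, e.g. to androidx.media3.inspector.frame.FrameExtractor, as described above.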
Please get in touch via the issue tracker if you run into any bugs, or if you have questions or feature requests. We look forward to hearing from you!
Posted by Ben Weiss, Senior Developer Relations Engineer
Monzo is a UK digital bank with 15 million customers and growing. As the app scaled, the engineering team identified app startup time as a critical area for improvement but worried it would require significant changes to their codebase.
By fully enabling R8 optimizations, Monzo achieved a massive 35% reduction in their Application Not Responding (ANR) rate. This simple change proved that impactful optimizations don't always require complex engineering efforts.
Unlocking broad performance wins with R8 full mode
Monzo identified R8 full mode as an easy fix worth trying, and it worked, improving performance across the board:
● Startup reliability: Cold starts improved by 30%, warm starts by 24%, and hot starts by 14%.
● Launch speed: P50 launch times improved by 11% and P90 launch times by 12%.
● Efficiency: Overall app size was reduced by 9%.
● Stability: ANRs were reduced by 35%.
Enabling optimizations with a single change
Many Android apps use an outdated default configuration file which disables most functionality of the R8 optimizer. The main change Monzo made to unlock these performance improvements was to replace the proguard-android.txt default file with proguard-android-optimize.txt. This change removes the -dontoptimize instruction and allows R8 to properly do its job.
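In a Kotlin DSL build file, the change described above is a one-line swap of the default rules file (shown here as a sketch; your release block may carry additional settings):

```kotlin
// build.gradle.kts (app module)
android {
    buildTypes {
        release {
            // Enable code shrinking and optimization with R8.
            isMinifyEnabled = true
            proguardFiles(
                // Use the optimize-enabled defaults instead of proguard-android.txt,
                // which ships with -dontoptimize and disables most of R8's work.
                getDefaultProguardFile("proguard-android-optimize.txt"),
                "proguard-rules.pro"
            )
        }
    }
}
```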
After making this change, it's worth looking at your Keep configuration files. These files tell R8 which parts of your code to leave alone (usually because they're called dynamically or by external libraries). Tidying up unnecessary Keep rules means R8 can do more.
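A documented Keep rule might look like the following sketch (the class name is hypothetical, invented for illustration):

```text
# proguard-rules.pro
# Keep only what is truly reached reflectively, and record why, so the
# rule can be retired when the reason goes away.

# Needed because our payments SDK instantiates this handler via reflection
# (hypothetical example). Remove once the SDK supports direct registration.
-keep class com.example.payments.WebhookHandler { <init>(); }
```

Commenting each rule this way is what lets a team later decide, with confidence, which rules are safe to delete.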
Improving scroll performance with Baseline Profiles
To further enhance the user experience, Monzo implemented Baseline Profiles, specifically targeting scroll and rendering performance on their main feed. This strategy ensured that the most common user journeys, opening the app and scrolling the feed, were fully optimized. The impact on rendering was substantial: P90 scroll performance became 71% faster, and P95 scroll performance improved by 87%. Now scrolling the app is smoother than before.
Monzo built this into their release process to maintain these improvements over time. "We trigger the baseline profile generation every week day (before running our nightly builds) and commit the latest changes once completed," Neumayer explains.
Keeping up with modern Android development
Monzo's experience shows what's possible when you stay up to date with Android build-tooling recommendations. While legacy apps often struggle with complex reflection usage, Monzo found the transition straightforward by documenting their Keep Rules properly. "We always add a comment explaining why Keep Rules are in place, so we know when it's safe to remove the rules," Neumayer notes.
Neumayer's advice for other teams? Regularly check your practices against current standards: "Take a look at the latest recommendations from Google around app performance and check if you're following all the latest advice."
Posted by Matthew Forsythe, Director Product Management, Android App Safety
Android is for everyone. It's built on a commitment to an open and safe platform. Users should feel confident installing apps, no matter where they get them from. However, our recent analysis found over 90 times more malware from sideloaded sources than on Google Play. So as an extra layer of security, we are rolling out Android developer verification to help prevent malicious actors from hiding behind anonymity to repeatedly spread harm. Over the past several months, we've worked closely with the community to improve the design so we account for the many ways people use Android to balance openness with safety.
Start your verification today
Today, we're starting to roll out Android developer verification to all developers in both the new Android Developer Console and Play Console. This allows you to complete your verification and register your apps before user-facing changes begin later this year.
If you only distribute apps outside of Google Play, you can create an account in Android Developer Console today.
If you're on Google Play, check your Play Console account for updates over the next few weeks. If you've already verified your identity here, then you're likely already set.
Most of your users' download experience will not change at all
While verification tools are rolling out now, the experience for users downloading your apps will not change until later this year. The user side protections will first go live in Brazil, Indonesia, Singapore, and Thailand this September, before expanding globally in 2027. We've shared this timeline early to ensure you have ample time to complete your verification.
Following this deadline, for the vast majority of users, the experience of installing apps will stay exactly the same. It's only when a user tries to install an unregistered app that they'll need to use ADB or the advanced flow, helping us keep the broader community safe while preserving flexibility for our power users.
Developers can still choose where to distribute their apps. Most users' download experience will not change
Tailoring the verification experience to your feedback
To balance the need for safety with our commitment to openness, we've improved the verification experience based on your feedback. We've streamlined the developer experience to be more integrated with existing workflows and maintained choice for power users.
For Android Studio developers: In the next two months, you'll see your app's registration status right in Android Studio when you generate a signed App Bundle or APK.
You'll see your app's registration status in Android Studio when you generate a signed App Bundle or APK.
For Play developers: If you've completed Play Console's developer verification requirements, your identity is already verified and we'll automatically register eligible Play apps for you. In the rare case that we are unable to register your apps for you, you will need to follow the manual app claim process. Over the next couple of weeks, more details will be provided in the Play Console and through email. Also, you'll be able to register apps you distribute outside of Play in the Play Console too.
The Android developer verification page in your Play Console will show the registration status for each of your apps.
For students and hobbyists: To keep Android accessible to everyone, we're building a free, no government ID required, limited distribution account so you can share your work with up to 20 devices. You only need an email account to get started. Sign up for early access. We'll send invites in June.
For power users: We are maintaining the choice to install apps from any source. You can use the new advanced flow for sideloading unregistered apps or continue using ADB. This maintains choice while protecting vulnerable users.
What's next?
We're rolling this out carefully and working closely with developers, users, and our partners. In April, we'll introduce Android Developer Verifier, a new Google system service that will be used to check if an app is registered to a verified developer.
April 2026: Users will start to see Android Developer Verifier in their Google Systems services settings.
June 2026: Early access: Limited distribution accounts for students and hobbyists.
September 30, 2026: Apps must be registered by verified developers in order to be installed and updated on certified Android devices in Brazil, Indonesia, Singapore, and Thailand. Unregistered apps can be sideloaded with ADB or advanced flow.
2027 and beyond: We will roll out this requirement globally.
We're committed to an Android that is both open and safe. Check out our developer guides to get started today.
Posted by Robert Clifford, Developer Relations Engineer and Manjeet Rulhania, Software Engineer
A pillar of the Android ecosystem is our shared commitment to user trust. As the mobile landscape has evolved, so does our approach to protecting sensitive information. In Android 17, we're introducing a suite of new location privacy features designed to give users more control and provide developers elegant solutions for data minimization and product safety. Our strategy focuses on introducing new tools to balance high-quality experiences with robust privacy protections, and improving transparency for users to help manage their data.
Introducing the location button: simplified access for one-time use
For many common tasks, like finding a nearby shop or tagging a social post, your app doesn't need permanent or background access to a user's precise location. With Android 17, we are introducing the location button, a new UI element designed to provide a well-lit path for responsible one-time precise location access. Industry partners have requested this feature as a way to bring a simpler and more private location flow to their users.
Users get better privacy protection
Moving the decision about location sharing to the point where a user takes action helps the user make a clearer choice about how much information to share and for how long. This empowers users to limit data sharing to only what apps need in that session. Once consent is provided, this session-based access eliminates repeated prompts for location-dependent features. It also benefits developers by creating a smoother experience for their users and providing high confidence in user intent, since access is explicitly requested at the moment of action.
Full UI customization to match your app's aesthetic
The location button provides extensive customization options to ensure integration with your app's aesthetic while maintaining system-wide recognizability. You can modify the button's visual style including:
Background and icon color scheme
Outline style
Size and shape
Additionally, you can select the appropriate text label from a predefined list of options. To ensure security and trust, the location icon itself remains mandatory and non-customizable, while the font size is system-managed to respect user accessibility settings.
Simplified Integration with Jetpack and automatic backwards compatibility
The location button will be provided as a Jetpack library, ensuring easy integration into your existing app layouts, just like any other Jetpack view implementation, and simplifying how you request permission to access precise location. When you implement the location button with the Jetpack library, it also handles backwards compatibility automatically, defaulting to the existing location prompt when a user taps it on a device running Android 16 or lower.
The Android location button is available for testing as of Android 17 Beta 3.
Location access transparency
Users often struggle to understand the tools they can use to monitor and control access to their location data. In Android 17, we are aligning location permission transparency with the high standards already set for the Microphone and Camera.
Updated Location Indicator: A persistent indicator will now appear to inform a user whenever a non-system app accesses their location.
Attribution & Control: Users can tap the indicator to see exactly which apps have recently accessed their location and manage those permissions immediately through a "Recent app use" dialog.
Strengthening user privacy with density-based Coarse Location
Android 17 is also improving the algorithm for approximate (coarse) locations to be aware of population density. Previously, coarse locations used a static 2 km-wide grid, which in low-population areas may not be sufficiently private, since a 2 km square could often contain only a handful of users. The new approach replaces this fixed grid with a dynamically sized area based on local population density. By increasing the grid size in areas with lower population density, Android ensures a more consistent privacy guarantee across different environments, from dense urban centers to remote regions.
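To build intuition for the density-based approach, imagine snapping a coordinate to the center of a grid cell whose width grows as population density falls. The sketch below is an illustration only, not Android's actual algorithm; the density thresholds and cell widths are made-up values:

```kotlin
import kotlin.math.floor

// Illustration only: not Android's real coarsening algorithm.
// Snap a 1-D coordinate (in km) to the center of a grid cell whose
// width is chosen from the local population density (people per km^2).
fun coarsenKm(positionKm: Double, densityPerKm2: Double): Double {
    val cellKm = when {
        densityPerKm2 >= 1_000 -> 2.0 // dense urban: 2 km cells
        densityPerKm2 >= 100 -> 4.0   // suburban: wider cells
        else -> 8.0                   // rural/remote: widest cells
    }
    val cellIndex = floor(positionKm / cellKm)
    return (cellIndex + 0.5) * cellKm // center of the containing cell
}
```

The same coordinate coarsens more aggressively in a sparse area, which is exactly the property that keeps the effective anonymity set roughly constant across environments.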
Improved runtime permission dialog
The runtime permission dialog for location is one of the more complex flows for users to navigate, with users being asked to decide on the granularity and length of permission access they are willing to grant to each app. To help users make the most informed privacy decisions with less friction, we've redesigned the dialog to make the "Precise" and "Approximate" choices more visually distinct, encouraging users to select the level of access that best suits their needs.
Start building for Android 17
The new location privacy tools are available now in Beta 3. We're looking for your feedback to help refine these features before the general release.
Posted by Matthew McCullough, VP of Product Management, Android Developer
Android 17 has officially reached platform stability today with Beta 3. That means that the API surface is locked; you can perform final compatibility testing and push your Android 17-targeted apps to the Play Store. In addition, Beta 3 brings a host of new capabilities to help you build better, more secure, and highly integrated applications.
Get your apps, libraries, tools, and game engines ready!
If you develop an SDK, library, tool, or game engine, it's even more important to prepare any necessary updates now to prevent your downstream app and game developers from being blocked by compatibility issues and allow them to target the latest SDK features. Please let your downstream developers know if updates are needed to fully support Android 17.
Testing involves installing your production app, or a test app that makes use of your library or engine, onto a device or emulator running Android 17 Beta 3, using Google Play or other means. Work through all of your app's flows and look for functional or UI issues. Review the behavior changes to focus your testing. Each release of Android contains platform changes that improve privacy, security, and the overall user experience, and these changes can affect your apps. Here are some changes to focus on:
Resizability on large screens: Once you target Android 17, you can no longer opt out of maintaining orientation, resizability and aspect ratio constraints on large screens.
Dynamic code loading: If your app targets Android 17 or higher, the Safer Dynamic Code Loading (DCL) protection introduced in Android 14 for DEX and JAR files now extends to native libraries. All native files loaded using System.load() must be marked as read-only. Otherwise, the system throws UnsatisfiedLinkError.
Enable CT by default: Certificate transparency (CT) is enabled by default. (On Android 16, CT is available but apps had to opt in.)
Local network protections: Apps targeting Android 17 or higher have local network access blocked by default. Switch to using privacy-preserving pickers if possible, and use the new ACCESS_LOCAL_NETWORK permission for broad, persistent access.
Media and camera enhancements
Photo Picker customization options
Android now allows you to tailor the visual presentation of the photo picker to better complement your app's user interface. By leveraging the new PhotoPickerUiCustomizationParams API, you can modify the grid view aspect ratio from the standard 1:1 square to a 9:16 portrait display. This flexibility extends to both the ACTION_PICK_IMAGES intent and the embedded photo picker, enabling you to maintain a cohesive aesthetic when users interact with media.
val params = PhotoPickerUiCustomizationParams.Builder()
    .setAspectRatio(PhotoPickerUiCustomizationParams.ASPECT_RATIO_PORTRAIT_9_16)
    .build()
val intent = Intent(MediaStore.ACTION_PICK_IMAGES).apply {
    putExtra(MediaStore.EXTRA_PICK_IMAGES_UI_CUSTOMIZATION_PARAMS, params)
}
startActivityForResult(intent, REQUEST_CODE)
Support for the RAW14 image format: Android 17 introduces support for the RAW14 image format - the de facto industry standard for high-end digital photography - via the new ImageFormat.RAW14 constant. RAW14 is a single-channel, 14-bit-per-pixel format that uses a densely packed layout where every four consecutive pixels are packed into seven bytes.
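To make the packing arithmetic concrete (4 pixels × 14 bits = 56 bits = 7 bytes), here is a small sketch that unpacks one 7-byte group into four 14-bit samples. The bit order assumed here (byte 0 least significant, pixels occupying consecutive 14-bit slots) is an illustration, not the normative layout; consult the ImageFormat.RAW14 documentation for the real bitstream definition:

```kotlin
// Unpack four 14-bit pixel values from seven densely packed bytes.
// Assumed layout: a 56-bit little-endian bitstream with pixels packed
// back-to-back into consecutive 14-bit slots (illustrative, not the spec).
fun unpackRaw14Group(bytes: ByteArray): IntArray {
    require(bytes.size == 7) { "RAW14 packs 4 pixels into exactly 7 bytes" }
    // Assemble the 56 bits into a single Long (byte 0 = least significant).
    var bits = 0L
    for (i in 0 until 7) {
        bits = bits or ((bytes[i].toLong() and 0xFFL) shl (8 * i))
    }
    // Slice out four consecutive 14-bit values (mask 0x3FFF = fourteen ones).
    return IntArray(4) { i -> ((bits shr (14 * i)) and 0x3FFFL).toInt() }
}
```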
Vendor-defined camera extensions: Android 17 adds vendor-defined extensions, which let hardware partners define and implement custom camera extension modes, giving you access to the best and latest camera features, such as 'Super Resolution' or cutting-edge AI-driven enhancements. You can query for these modes using the isExtensionSupported(int) API.
Camera device type APIs: New Android 17 APIs allow you to query the underlying device type to identify if a camera is built-in hardware, an external USB webcam, or a virtual camera.
Bluetooth LE Audio hearing aid support
Android now includes a specific device category for Bluetooth Low Energy (BLE) Audio hearing aids. With the addition of the AudioDeviceInfo.TYPE_BLE_HEARING_AID constant, your app can now distinguish hearing aids from regular headsets.
val audioManager = getSystemService(Context.AUDIO_SERVICE) as AudioManager
val devices = audioManager.getDevices(AudioManager.GET_DEVICES_OUTPUTS)
val isHearingAidConnected = devices.any { it.type == AudioDeviceInfo.TYPE_BLE_HEARING_AID }
Granular audio routing for hearing aids
Android 17 allows users to independently manage where specific system sounds are played. They can choose to route notifications, ringtones, and alarms to connected hearing aids or the device's built-in speaker.
Extended HE-AAC software encoder
Android 17 introduces a system-provided Extended HE-AAC software encoder. This encoder supports both low and high bitrates using unified speech and audio coding. You can access this encoder via the MediaCodec API using the name c2.android.xheaac.encoder or by querying for the audio/mp4a-latm MIME type.
val encoder = MediaCodec.createByCodecName("c2.android.xheaac.encoder")
val format = MediaFormat.createAudioFormat(MediaFormat.MIMETYPE_AUDIO_AAC, 48000, 1)
format.setInteger(MediaFormat.KEY_BIT_RATE, 24000)
format.setInteger(MediaFormat.KEY_AAC_PROFILE, MediaCodecInfo.CodecProfileLevel.AACObjectXHE)
encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE)
Performance and Battery Enhancements
Reduce wakelocks with listener support for allow-while-idle alarms
Android 17 introduces a new variant of AlarmManager.setExactAndAllowWhileIdle that accepts an OnAlarmListener instead of a PendingIntent. This new callback-based mechanism is ideal for apps that currently rely on continuous wakelocks to perform periodic tasks, such as messaging apps maintaining socket connections.
val alarmManager = getSystemService(AlarmManager::class.java)
val listener = AlarmManager.OnAlarmListener {
    // Do work here
}
alarmManager.setExactAndAllowWhileIdle(
    AlarmManager.ELAPSED_REALTIME_WAKEUP,
    SystemClock.elapsedRealtime() + 60000,
    listener,
    null
)
Privacy updates
System-provided Location Button
Android is introducing a system-rendered location button that you will be able to embed directly into your app's layout using an Android Jetpack library. When a user taps this system button, your app is granted precise location access for the current session only. To implement this, you need to declare the USE_LOCATION_BUTTON permission.
Discrete password visibility settings for touch and physical keyboards
This feature splits the existing "Show passwords" system setting into two distinct user preferences: one for touch-based inputs and another for physical (hardware) keyboard inputs. Characters entered via physical keyboards are now hidden immediately by default.
val isPhysical = event.source and InputDevice.SOURCE_KEYBOARD == InputDevice.SOURCE_KEYBOARD
val shouldShow = android.text.ShowSecretsSetting.shouldShowPassword(context, isPhysical)
Security
Enforced read-only dynamic code loading
To improve security against code injection attacks, Android now enforces that dynamically loaded native libraries must be read-only. If your app targets Android 17 or higher, all native files loaded using System.load() must be marked as read-only beforehand.
val libraryFile = File(context.filesDir, "my_native_lib.so")
// Mark the file as read-only before loading to comply with Android 17+ security requirements
libraryFile.setReadOnly()
System.load(libraryFile.absolutePath)
To prepare for future advancements in quantum computing, Android is introducing support for Post-Quantum Cryptography (PQC) through the new v3.2 APK Signature Scheme. This scheme utilizes a hybrid approach, combining a classical signature with an ML-DSA signature.
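The "hybrid" in this scheme means a signature is accepted only when both the classical check and the ML-DSA check pass, so the scheme stays secure as long as either algorithm remains unbroken. Conceptually (with placeholder verifier lambdas standing in for real crypto implementations, which this sketch does not provide):

```kotlin
// Conceptual sketch of hybrid signature verification: BOTH the
// classical and the post-quantum (ML-DSA) check must succeed.
// The verifier functions are placeholders, not real crypto APIs.
fun hybridVerify(
    payload: ByteArray,
    classicalVerifies: (ByteArray) -> Boolean,
    mlDsaVerifies: (ByteArray) -> Boolean
): Boolean = classicalVerifies(payload) && mlDsaVerifies(payload)
```

The design choice here is defense in depth: an attacker with a quantum computer breaks only the classical half, while an unforeseen flaw in ML-DSA leaves the classical half intact.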
User experience and system UI
Better support for widgets on external displays
This feature improves the visual consistency of app widgets shown on connected or external displays with different pixel densities by letting you specify dimensions in DP or SP units.
val options = appWidgetManager.getAppWidgetOptions(appWidgetId)
val displayId = options.getInt(AppWidgetManager.OPTION_APPWIDGET_DISPLAY_ID)
val remoteViews = RemoteViews(context.packageName, R.layout.widget_layout)
remoteViews.setViewPadding(
    R.id.container,
    16f, 8f, 16f, 8f,
    TypedValue.COMPLEX_UNIT_DIP
)
Hidden app labels on the home screen
Android now provides a user setting to hide app names (labels) on the home screen workspace. Ensure your app icon is distinct and recognizable.
Desktop Interactive Picture-in-Picture
Unlike traditional Picture-in-Picture, these pinned windows remain interactive while staying always-on-top of other application windows in desktop mode.
val appTask: ActivityManager.AppTask =
    activity.getSystemService(ActivityManager::class.java).appTasks[0]
appTask.requestWindowingLayer(
    ActivityManager.AppTask.WINDOWING_LAYER_PINNED,
    context.mainExecutor,
    object : OutcomeReceiver<Int, Exception> {
        override fun onResult(result: Int) {
            if (result == ActivityManager.AppTask.WINDOWING_LAYER_REQUEST_GRANTED) {
                // Task successfully moved to pinned layer
            }
        }
        override fun onError(error: Exception) {}
    }
)
Redesigned screen recording toolbar
Core functionality
VPN app exclusion settings
By using the new ACTION_VPN_APP_EXCLUSION_SETTINGS Intent, your app can launch a system-managed Settings screen where users can select applications to bypass the VPN tunnel.
val intent = Intent(Settings.ACTION_VPN_APP_EXCLUSION_SETTINGS)
if (intent.resolveActivity(packageManager) != null) {
    startActivity(intent)
}
OpenJDK 25 and 21 API updates
This update brings extensive features and refinements from OpenJDK 21 and OpenJDK 25, including the latest Unicode support and enhanced SSL support for named groups in TLS.
The wait is over! We are incredibly excited to share the Google Play Apps Accelerator class of 2026. We've handpicked a group of high-potential studios from across the globe to embark on a 12-week journey designed to supercharge their success.
Here's what's in store for the program's first ever class:
Curated learning: virtual masterclasses and workshops led by industry trailblazers.
Guidance & mentorship: 1-to-1 sessions covering everything from technical scaling to leadership.
Direct access: exclusive sessions with experts from Google and the world's top studios.
Without further ado, join us in congratulating them!
Google Play Apps Accelerator | Class of 2026
Americas
Anytune
AstroVeda
BetterYou
Changed
Focus Forge
Human Program
Know Your Lemons
kweliTV
Language Innovation
Matraquinha
MR ROCCO
MUU nutrition
NKENNE
Skarvo
Starcrossed
Wishfinity
Asia Pacific
Human Health
Kitakuji
Lazy Surfers
Mellers Tech
Reehee Company
Europe, Middle East & Africa
cabuu
Class54 Education
Digital Garden
EverPixel
Geolives
HelloMind
ifal
Idea Accelerator
Maposcope
Ochy
Picastro
Pixelbite
Record Scanner
Talkao
unorderly
Xeropan International
Congratulations again to all the founders selected; we can't wait to see your apps grow on our platform.
The Google Play Apps Accelerator is part of our mission to help businesses of all sizes grow on Google Play and reach their full potential. Discover more about Google Play's programs, resources and tools.
Posted by Roxanna Aliabadi Walker, Senior Product Manager
Privacy and user control remain at the heart of the Android experience. Just as the photo picker made media sharing secure and easy to implement, we are now bringing that same level of privacy, simplicity, and great user experience to contact selection.
A New Standard for Contact Privacy
Historically, applications requiring access to a specific user's contacts relied on the broad READ_CONTACTS permission. While functional, this approach often granted apps more data than necessary. The new Android Contact Picker, introduced in Android 17, changes this dynamic by providing a standardized, secure, and searchable interface for contact selection.
This feature allows users to grant apps access only to the specific contacts they choose, aligning with Android's commitment to data transparency and minimized permission footprints.
How It Works
Developers can integrate the Contact Picker using the Intent.ACTION_PICK_CONTACTS intent. This updated API offers several powerful capabilities:
Granular Data Requests: Apps can specify exactly which fields they need, such as phone numbers or email addresses, rather than receiving the entire contact record.
Multi-Selection Support: The picker supports both single and multiple contact selections, giving developers more flexibility for features like group invitations.
Selection Limits: Developers can set custom limits on the number of contacts a user can select at one time.
Temporary Access: Upon selection, the system returns a Session URI that provides temporary read access to the requested data, ensuring that access does not persist longer than necessary.
Access to other profiles: When using this new intent, the interface allows users to select contacts from other user profiles, such as a work profile, cloned profile, or a private space.
Optimized Performance: The Contact Picker returns a single Uri that allows for collective result querying, eliminating the need to query individual contact URIs separately as required by ACTION_PICK. This efficiency further reduces system overhead by using a single Binder transaction.
Backward Compatibility and Implementation
For devices running Android 17 or higher, the system automatically upgrades legacy ACTION_PICK intents that specify contact data types to the new, more secure interface. However, to take full advantage of advanced features like multi-selection, developers are encouraged to update their implementation code and utilize the ContentResolver to query the returned Session URI.
Integrate the contact picker
To integrate the Contact Picker, developers use the ACTION_PICK_CONTACTS intent. Below is a code example demonstrating how to launch the picker and request specific data fields, such as email and phone numbers.
// State to hold the list of selected contacts
var contacts by remember { mutableStateOf<List<Contact>>(emptyList()) }

// Launcher for the Contact Picker intent
val pickContact = rememberLauncherForActivityResult(StartActivityForResult()) {
    if (it.resultCode == Activity.RESULT_OK) {
        val resultUri = it.data?.data ?: return@rememberLauncherForActivityResult
        // Process the result URI in a background thread
        coroutine.launch {
            contacts = processContactPickerResultUri(resultUri, context)
        }
    }
}

// Define the specific contact data fields you need
val requestedFields = arrayListOf(
    Email.CONTENT_ITEM_TYPE,
    Phone.CONTENT_ITEM_TYPE,
)

// Set up the intent for the Contact Picker
val pickContactIntent = Intent(ACTION_PICK_CONTACTS).apply {
    putExtra(EXTRA_PICK_CONTACTS_SELECTION_LIMIT, 5)
    putStringArrayListExtra(
        EXTRA_PICK_CONTACTS_REQUESTED_DATA_FIELDS,
        requestedFields
    )
    putExtra(EXTRA_PICK_CONTACTS_MATCH_ALL_DATA_FIELDS, false)
}

// Launch the picker
pickContact.launch(pickContactIntent)
After the user makes a selection, the app processes the result by querying the returned Session URI to extract the requested contact information.
// Data class representing a parsed Contact with selected details
data class Contact(val id: String, val name: String, val email: String?, val phone: String?)

// Helper function to query the content resolver with the URI returned by the Contact Picker.
// Parses the cursor to extract contact details such as name, email, and phone number
private suspend fun processContactPickerResultUri(
    sessionUri: Uri,
    context: Context
): List<Contact> = withContext(Dispatchers.IO) {
    // Define the columns we want to retrieve from the ContactPicker ContentProvider
    val projection = arrayOf(
        ContactsContract.Contacts._ID,
        ContactsContract.Contacts.DISPLAY_NAME_PRIMARY,
        ContactsContract.Data.MIMETYPE, // Type of data (e.g., email or phone)
        ContactsContract.Data.DATA1, // The actual data (Phone number / Email string)
    )
    val results = mutableListOf<Contact>()
    // Note: The Contact Picker Session Uri doesn't support custom selection & selectionArgs.
    context.contentResolver.query(sessionUri, projection, null, null, null)?.use { cursor ->
        // Get the column indices for our requested projection
        val contactIdIdx = cursor.getColumnIndex(ContactsContract.Contacts._ID)
        val mimeTypeIdx = cursor.getColumnIndex(ContactsContract.Data.MIMETYPE)
        val nameIdx = cursor.getColumnIndex(ContactsContract.Contacts.DISPLAY_NAME_PRIMARY)
        val data1Idx = cursor.getColumnIndex(ContactsContract.Data.DATA1)
        while (cursor.moveToNext()) {
            val contactId = cursor.getString(contactIdIdx)
            val mimeType = cursor.getString(mimeTypeIdx)
            val name = cursor.getString(nameIdx) ?: ""
            val data1 = cursor.getString(data1Idx) ?: ""
            // Determine if the current row represents an email or a phone number
            val email = if (mimeType == Email.CONTENT_ITEM_TYPE) data1 else null
            val phone = if (mimeType == Phone.CONTENT_ITEM_TYPE) data1 else null
            // Add the parsed contact to our results list
            results.add(Contact(contactId, name, email, phone))
        }
    }
    return@withContext results
}
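One subtlety of the helper above: it emits one Contact per data row, so a contact that matched both an email and a phone number appears twice in the result. If you want one entry per person, you can merge rows by contact ID after querying. A pure-Kotlin sketch (the Contact class is repeated so the snippet stands alone):

```kotlin
// Same shape as the Contact class above (repeated for self-containment).
data class Contact(val id: String, val name: String, val email: String?, val phone: String?)

// Collapse per-row results into one Contact per ID, keeping the first
// non-null email and phone seen for each contact.
fun mergeContactRows(rows: List<Contact>): List<Contact> =
    rows.groupBy { it.id }.map { (id, group) ->
        Contact(
            id = id,
            name = group.first().name,
            email = group.firstNotNullOfOrNull { it.email },
            phone = group.firstNotNullOfOrNull { it.phone },
        )
    }
```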
Posted by Eser Erdem, Senior Engineering Manager, Android Automotive
At Google we're deeply committed to the automotive industry--not just as a technology provider, but as a partner in the industry's transformation. We believe that car makers and users should have choice and flexibility, and that open platforms are the best enablers. For over a decade, we have provided Android Automotive OS (AAOS) as an open platform for infotainment, enabling rich innovation and differentiation in the in-vehicle digital experience. However, as vehicles modernize, car makers face new hurdles: fragmented software across compute components, poor portability between architectures, and a lack of granular update capabilities. To address these problems, we are expanding AAOS beyond infotainment with Android Automotive OS for Software Defined Vehicles (AAOS SDV)--an open platform featuring a modular structure, a topology-agnostic communication layer, and support for granular updates.
The transition toward SDVs is an incredible industry transformation, and we are eager to contribute to the broader ecosystem making it happen. Later this year, AAOS SDV will be available in the Android Open Source Project (AOSP) for uses beyond infotainment. By bringing our SDV platform into the open-source domain, we empower the industry to develop or enhance features that lower costs, accelerate time to market, and provide significant advantages across the automotive landscape.
A Foundation for the Software-Defined Vehicle
AAOS SDV is engineered to address the core challenges of modern vehicle development. This new AAOS expansion provides a compact, performant and scalable software foundation based on a headless Android native stack, extending much deeper into the vehicle architecture to power software components throughout the vehicle such as the seat actuator, instrument cluster, climate control, lighting, cameras, mirrors, telemetry, and more.
AAOS SDV's core is a lightweight Android-based operating system incorporating low-level automotive specific frameworks for communications, diagnostics, software updates, and more. This enables AAOS SDV to power many different vehicle controllers, tackling Core Compute, Body Controls, and Cluster domains.
In addition, the AAOS SDV platform includes a new framework, Display Safety, for implementing instrument cluster applications including audible chimes, regulatory camera, and sophisticated graphics that blend seamlessly with AAOS IVI content. Display Safety includes a safety design toolchain and a reference safety monitor, allowing OEMs to meet functional safety requirements leveraging the diverse platform safety mechanisms of Automotive SoCs.
Flexible Deployment for AAOS SDV
Engineered for flexibility, the AAOS SDV framework can utilize hypervisor-backed virtualization with virtio support to separate software domains, or it can be deployed on bare metal for optimal low-latency performance.
Transforming the Developer Experience
AAOS SDV is designed to power modern vehicles, but it was also designed to change how modern vehicle software is developed, tested and delivered with the goals to reduce development time and cost while increasing innovation and agility. With its optimized development workflows, our open-source SDV platform provides a wide range of benefits across the automotive industry:
Accelerated Time-to-Market: AAOS SDV ships production-ready software for many vehicle components, which can be adopted as-is or further modified, accelerating development.
Standard Signal Catalog: A new standard signal catalog to bring OEMs and automotive suppliers onto the same page eliminates redundant engineering efforts and significantly reduces platform development costs.
Optimized for virtual cloud development: AAOS SDV was designed from the ground up to support virtual cloud development, enabling partners to design, test, and validate in-car components well ahead of hardware availability. AAOS SDV already runs on Android Virtual Device (Cuttlefish), and works well with existing Google Cloud integrations such as Google Cloud Horizon, enabling a digital twin solution at scale.
A Service-Oriented Architecture: Vehicle functions are developed as topology-agnostic services which are reusable across different architectures. The platform treats the vehicle as a dynamic, connected system. This allows for granular, service-level updates with built-in dependency handling, enabling you to deploy new features over-the-air and create continuous improvement loops.
Future-Ready for new services: The platform is designed to simplify the development of telemetry, AI training feedback loops, accelerating the deployment of advanced features for both enterprise fleets and consumer vehicles.
Production Ready: Partnering with Renault
We are proud to highlight our deep partnership with Renault to underscore the production readiness of the AAOS SDV platform. Renault is currently leveraging the Android Automotive OS SDV platform for its upcoming Renault Trafic e-Tech, with "[...] production set to begin in late 2026". The Renault Trafic e-Tech validates the platform's ability to accelerate development and enable a new generation of software-defined commercial vehicles.
Scaling Ready: Partnering with Qualcomm
Qualcomm is scaling the Android Automotive OS SDV platform through our strategic partnership. At CES 2026, Qualcomm introduced Snapdragon vSoC on Google Cloud and announced a scaling collaboration to deliver a turnkey, pre-integrated AAOS SDV stack on Snapdragon Digital Chassis platforms.
Building an Open AAOS Ecosystem
The power of AAOS comes from its vibrant ecosystem. To prepare for the open source release later this year, we are proactively working with leading industry carmakers, suppliers, silicon platforms, and software vendors to ensure that the AAOS SDV platform is well supported and robustly integrated within the automotive ecosystem. We look forward to sharing more updates with our partners in the months ahead.
Posted by Matthew Forsythe, Director Product Management, Android App Safety
Android proves you don't have to choose between an open ecosystem and a secure one. Since announcing updated verification requirements, we've worked with the community to ensure these protections are robust yet respectful of platform freedom. We've heard from power users that they want to take educated risks to install software from unverified developers. Today, we're sharing details on a new advanced flow that provides this option.
Advanced flow safeguards against coercion
Android is built on choice. That is why we've developed the advanced flow - an approach that allows power users to maintain the ability to sideload apps from unverified developers.
This flow is a one-time process for power users - but it was designed carefully to prevent those in the midst of a scam attempt from being coerced by high pressure tactics to install malicious software. In these scenarios, scammers exploit fear - using threats of financial ruin, legal trouble, or harm to a loved one - to create a sense of extreme urgency. They stay on the phone with victims, coaching them to bypass security warnings and disable security settings before the victim has a chance to think or seek help. According to a 2025 report from the Global Anti-Scam Alliance (GASA), 57% of surveyed adults experienced a scam in the past year, resulting in a global consumer loss of $442 billion. Because the consequences of these scams that use sophisticated social engineering tactics are so severe, we have carefully engineered the advanced flow to provide the critical time and space needed to break the cycle of coercion.
How the advanced flow works for users
Enable developer mode in system settings: Activating this is simple, but it is a deliberate step. This prevents accidental triggers or "one-tap" bypasses often used in high-pressure scams.
Confirm you aren't being coached: There is a quick check to make sure that no one is talking you into turning off your security. While power users know how to vet apps, scammers often pressure victims into disabling protections.
Restart your phone and reauthenticate: This cuts off any remote access or active phone calls a scammer might be using to watch what you're doing.
Come back after the protective waiting period and verify: There is a one-time, one-day wait, after which you can confirm that it's really you making this change using biometric authentication (fingerprint or face unlock) or your device PIN. Scammers rely on manufactured urgency, so this breaks their spell and gives you time to think.
Install apps: Once you confirm you understand the risks, you're all set to install apps from unverified developers, with the option of enabling this for 7 days or indefinitely. For safety, you'll still see a warning that the app is from an unverified developer, but you can just tap "Install Anyway."
A secure Android for every developer
We know a "one size fits all" approach doesn't work for our diverse ecosystem. We want to ensure that identity verification isn't a barrier to entry, so we're providing different paths to fit your specific needs.
In addition to the advanced flow we're building free, limited distribution accounts for students and hobbyists. This allows you to share apps with a small group (up to 20 devices) without needing to provide a government-issued ID or pay a registration fee. This ensures Android remains an open platform for learning and experimentation while maintaining robust protections for the broader community.
Limited distribution accounts and advanced flow for users will be available in August before the new developer verification requirements take effect.
Visit our website for more details. We look forward to sharing more in the coming days and weeks.
Posted by Ivy Knight, Senior Design Advocate, Android
We're thrilled to announce major updates to our design resources, giving you the comprehensive guidance you need to create polished, adaptive Android apps across all form factors! We now have Desktop Experience guidance and a refreshed Android Design Gallery.
New Desktop Experience Design Guidance
Your users are engaging with Android apps on more diverse devices than ever before, from phones and foldables to laptops and external monitors. A "desktop experience" occurs anytime your app is in a desktop-like mode, typically involving a non-touch input device like a keyboard or mouse, or another display such as a monitor (read more in the connected display announcement). This means designing for larger screens and accommodating additional input states. These new design experiences are meant to maximize productivity for your users with higher information density and multi-tasking capabilities.
Dive into desktop experience guidance to help optimize your app with desktop design principles, input interaction guidance, and system UI considerations.
The new guidance includes foundational guides where you can learn the design principles that make desktop experiences unique, such as how multitasking sits at the core of the desktop experience.
When your app is in a desktop experience, keep crucial interaction details in mind, such as how best to design around unique input interactions, like choosing from the system-provided cursors.
For specialized actions not covered by system icons, consider creating a custom cursor icon, while ensuring it remains easy for users to find on the page.
A desktop experience brings more multitasking features, like windowing, so expect your app to take on a variety of dimensions, complete with a header bar.
Desktops have much larger screens than mobile, and users typically interact using a mouse which has finer precision than a finger on a touch screen. This means you can present a UI with higher information density so your users can be more productive!
Want to get started quickly? Check out the walkthrough to go from mobile to desktop, and design along with the updated Adaptive Design lab.
Looking for inspiration? We've launched the Android Design Gallery! This new resource is a living catalog of inspirational examples across multiple verticals, form factors, and UX patterns. We'll be continually adding new inspirational examples, so check back often to see the latest and greatest in Android design.
Posted by Daniel Santiago Rivera, Software Engineer
The first alpha of Room 3.0 has been released! Room 3.0 is a major breaking version of the library that focuses on Kotlin Multiplatform (KMP) and adds support for JavaScript and WebAssembly (WASM) on top of the existing Android, iOS and JVM desktop support.
In this blog, we outline the breaking changes, the reasoning behind Room 3.0, and the steps you can take to migrate from Room 2.x.
Breaking changes
Room 3.0 includes the following breaking API changes:
Dropping SupportSQLite APIs: Room 3.0 is fully backed by the androidx.sqlite driver APIs. The SQLiteDriver APIs are KMP-compatible, and removing Room's dependency on Android's APIs simplifies the API surface for Android since it avoids having two possible backends.
No more Java code generation: Room 3.0 exclusively generates Kotlin code. This aligns with the evolving Kotlin-first paradigm but also simplifies the codebase and development process, enabling faster iterations.
Focus on KSP: We are also dropping support for Java Annotation Processing (AP) and KAPT. Room 3.0 is solely a KSP (Kotlin Symbol Processing) processor, allowing for better processing of Kotlin codebases without being limited by the Java language.
Coroutines first: Room 3.0 embraces Kotlin coroutines, making its APIs coroutine-first. Coroutines are the KMP-compatible asynchronous framework, and making Room asynchronous by nature is a critical requirement for supporting web platforms.
A new package
To prevent compatibility issues with existing Room 2.x implementations and for libraries with transitive dependencies on Room (for example, WorkManager), Room 3.0 resides in a new package, which means it also has a new Maven group and artifact IDs. For example, androidx.room:room-runtime has become androidx.room3:room3-runtime, and classes such as androidx.room.RoomDatabase will now be located at androidx.room3.RoomDatabase.
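In practice, the move mostly amounts to updating Gradle coordinates and imports. A minimal sketch, assuming a KSP setup (the version numbers and the room3-compiler artifact ID here are illustrative assumptions):

```kotlin
// build.gradle.kts

// Room 2.x coordinates:
// implementation("androidx.room:room-runtime:2.8.0")
// ksp("androidx.room:room-compiler:2.8.0")

// Room 3.0 coordinates (new Maven group and artifact IDs):
implementation("androidx.room3:room3-runtime:3.0.0-alpha01")
ksp("androidx.room3:room3-compiler:3.0.0-alpha01")
```

In source files, only the imports change, e.g. `import androidx.room3.RoomDatabase` instead of `import androidx.room.RoomDatabase`.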
Kotlin and Coroutines First
With no more Java code generation, Room 3.0 also requires KSP and the Kotlin compiler even if the codebase interacting with Room is in Java. It is recommended to have a multi-module project where Room usage is concentrated and the Kotlin Gradle Plugin and KSP can be applied without affecting the rest of the codebase.
Room 3.0 also requires coroutines; more specifically, DAO functions have to be suspending unless they return a reactive type, such as a Flow. Room 3.0 disallows blocking DAO functions. See the Coroutines on Android documentation to get started integrating coroutines into your application.
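As a sketch of what this looks like in practice (the entity and queries here are hypothetical), a Room 3.0-compatible DAO keeps one-shot operations suspending and reserves non-suspending functions for reactive return types:

```kotlin
import androidx.room3.Dao
import androidx.room3.Insert
import androidx.room3.Query
import kotlinx.coroutines.flow.Flow

@Dao
interface UserDao {
    // One-shot operations must be suspending; blocking DAO
    // functions are disallowed in Room 3.0.
    @Insert
    suspend fun insert(user: User)

    @Query("SELECT * FROM User WHERE id = :id")
    suspend fun getById(id: Long): User?

    // Non-suspending functions are allowed only when they return
    // a reactive type such as Flow.
    @Query("SELECT * FROM User")
    fun observeAll(): Flow<List<User>>
}
```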
Migration to SQLiteDriver APIs
With the shift away from SupportSQLite, apps will need to migrate to the SQLiteDriver APIs. This migration is essential to leveraging the full benefits of Room 3.0, including allowing the use of the bundled SQLite library via the BundledSQLiteDriver. You can start migrating to the driver APIs today with Room 2.7.0+. We strongly encourage you to avoid any further usage of SupportSQLite. If you migrate your Room integrations to SQLiteDriver APIs, then the transition to Room 3.0 is easier since the package change mostly involves updating symbol references (imports) and might require minimal changes to call-sites.
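As a sketch of that first step on Room 2.7.0+, installing a driver on the builder opts the database out of SupportSQLite (the AppDatabase class and file name are hypothetical):

```kotlin
import android.content.Context
import androidx.room.Room
import androidx.sqlite.driver.bundled.BundledSQLiteDriver
import kotlinx.coroutines.Dispatchers

fun buildDatabase(context: Context): AppDatabase =
    Room.databaseBuilder(context, AppDatabase::class.java, "app.db")
        // Replace the Android framework SQLite stack with the
        // KMP-compatible bundled driver.
        .setDriver(BundledSQLiteDriver())
        // Driver-based Room runs queries on a coroutine context.
        .setQueryCoroutineContext(Dispatchers.IO)
        .build()
```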
We understand completely removing SupportSQLite might not be immediately feasible for all projects. To ease this transition, Room 2.8.0, the latest version of the Room 2.0 series, introduced a new artifact called androidx.room:room-sqlite-wrapper. This artifact offers a compatibility API that allows you to convert a RoomDatabase into a SupportSQLiteDatabase, even if the SupportSQLite APIs in the database have been disabled due to a SQLiteDriver being installed. This provides a temporary bridge for developers who need more time to fully migrate their codebase. This artifact continues to exist in Room 3.0 as androidx.room3:room3-sqlite-wrapper to enable the migration to Room 3.0 while still supporting critical SupportSQLite usage.
For example, invocations of Database.openHelper.writableDatabase can be replaced by roomDatabase.getSupportWrapper() and a wrapper would be provided even if setDriver() is called on Room's builder.
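Based on the description above, the bridge might be used like this sketch (the import location of the extension and the AppDatabase class are assumptions):

```kotlin
import androidx.room3.getSupportWrapper // assumed import path

fun legacySqlStep(db: AppDatabase) {
    // Previously: db.openHelper.writableDatabase
    // The wrapper yields a SupportSQLiteDatabase-compatible object
    // even when setDriver() was called on Room's builder.
    val supportDb = db.getSupportWrapper()
    supportDb.execSQL("PRAGMA foreign_keys = ON")
}
```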
Room for the Web
Support for the Kotlin Multiplatform targets JS and WasmJS brings some of the most significant API changes. Specifically, many APIs in Room 3.0 are suspend functions since proper support for web storage is asynchronous. The SQLiteDriver APIs have also been updated to support the Web, and a new asynchronous web driver is available in androidx.sqlite:sqlite-web. It is a Web Worker-based driver that enables persisting the database in the Origin Private File System (OPFS).
For more details on how to set up Room for the Web check out the Room 3.0 release notes.
Custom DAO Return Types
Room 3.0 introduces the ability to add custom integrations to Room, similar to RxJava and Paging. Through a new annotation API called @DaoReturnTypeConverter, you can create your own integration so that Room's generated code becomes accessible at runtime. This enables @Dao functions to have custom return types without having to wait for the Room team to add support. Existing integrations have been migrated to use this functionality, so those who rely on them will now need to add the converters to their @Database or @Dao definitions.
For example, the Paging converter is located in the androidx.room3:room3-paging artifact and is called PagingSourceDaoReturnTypeConverter, while the LiveData converter is in androidx.room3:room3-livedata and is called LiveDataReturnTypeConverter.
For more details check out the DAO Return Type Converters section in the Room 3.0 release notes.
Maintenance mode of Room 2.x
Since the development of Room will be focused on Room 3, the current Room 2.x version enters maintenance mode. This means that no major features will be developed but patch releases (2.8.1, 2.8.2, etc.) will still occur with bug fixes and dependency updates. The team is committed to this work until Room 3 becomes stable.
Final thoughts
We are incredibly excited about the potential of Room 3.0 and the opportunities it unlocks for the Kotlin ecosystem. Stay tuned for more updates as we continue this journey!