23 Aug 2025

Feed: TalkAndroid

This epic series rivals Game of Thrones and dominates in 60 countries

Fans of sprawling fantasy sagas have been searching for the next big thing ever since Game of Thrones…

23 Aug 2025 6:30am GMT

22 Aug 2025

Feed: Android Developers Blog

The latest Gemini Nano with on-device ML Kit GenAI APIs

Posted by Caren Chang - Developer Relations Engineer, Joanna (Qiong) Huang - Software Engineer, and Chengji Yan - Software Engineer

The latest version of Gemini Nano, our most powerful multi-modal on-device model, just launched on the Pixel 10 device series and is now accessible through the ML Kit GenAI APIs. Integrate capabilities such as summarization, proofreading, rewriting, and image description directly into your apps.

With GenAI APIs, we're focused on giving you access to the latest version of Gemini Nano while providing consistent quality across devices and model upgrades. Here's a sneak peek behind the scenes at some of the things we've done to achieve this.

Adapting GenAI APIs for the latest Gemini Nano

We want to make it as easy as possible for you to build AI-powered features using the most powerful models. To ensure GenAI APIs provide consistent quality across different model versions, we make many behind-the-scenes improvements, including rigorous evals and adapter training.

  1. Evaluation pipeline: For each supported language, we prepare an evaluation dataset. We then benchmark against it using a combination of LLM-based raters, statistical metrics, and human raters.
  2. Adapter training: With results from the evaluation pipeline, we then determine if we need to train feature-specific LoRA adapters to be deployed on top of the Gemini Nano base model. By shipping GenAI APIs with LoRA adapters, we ensure each API meets our quality bar regardless of the version of Gemini Nano running on a device.

The latest Gemini Nano performance

One area we're excited about is how this updated version of Gemini Nano pushes performance even higher, especially prefix speed, that is, how fast the model processes input.

For example, here are results when running text-to-text and image-to-text benchmarks on a Pixel 10 Pro.

Prefix speed benchmarks:

  • Text-to-text: 510 tokens/second with Gemini nano-v2 on Pixel 9 Pro; 610 tokens/second with Gemini nano-v2* on Pixel 10 Pro; 940 tokens/second with Gemini nano-v3 on Pixel 10 Pro.
  • Image-to-text: 510 tokens/second plus 0.8 seconds for image encoding with Gemini nano-v2 on Pixel 9 Pro; 610 tokens/second plus 0.7 seconds with Gemini nano-v2* on Pixel 10 Pro; 940 tokens/second plus 0.6 seconds with Gemini nano-v3 on Pixel 10 Pro.
*Experimentation with Gemini nano-v2 on Pixel 10 Pro for benchmarking purposes. All Pixel 10 Pros launched with Gemini nano-v3.
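
To put these figures in context: at 940 tokens/second, a 1,000-token prompt is processed in roughly one second before generation begins, compared to about two seconds at 510 tokens/second.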

The future of Gemini Nano with GenAI APIs

As we continue to improve the Gemini Nano model, the team is committed to using the same process to ensure consistent, high-quality results from GenAI APIs.

We hope this will significantly reduce the effort to integrate Gemini Nano in your Android apps while still allowing you to take full advantage of new versions and their improved capabilities.

Learn more about GenAI APIs

Start implementing GenAI APIs in your Android apps today with guidance from our official documentation and samples: GenAI API Catalog and ML Kit GenAI APIs quickstart samples.
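
To make that concrete, here is a minimal sketch of on-device summarization with the ML Kit GenAI Summarization API. The class and method names (SummarizerOptions, Summarization.getClient, runInference) are taken from the ML Kit documentation and may shift while the API is in beta, so treat them as assumptions and verify against the GenAI API Catalog linked above.

import android.content.Context
import com.google.mlkit.genai.summarization.Summarization
import com.google.mlkit.genai.summarization.SummarizationRequest
import com.google.mlkit.genai.summarization.SummarizerOptions

// Sketch only: names follow the current ML Kit GenAI docs and may change in beta
fun summarizeArticle(context: Context, articleText: String): String {
    val options = SummarizerOptions.builder(context)
        .setInputType(SummarizerOptions.InputType.ARTICLE)
        .setOutputType(SummarizerOptions.OutputType.ONE_BULLET)
        .setLanguage(SummarizerOptions.Language.ENGLISH)
        .build()
    val summarizer = Summarization.getClient(options)

    // Gemini Nano and any feature-specific LoRA adapter may need a one-time
    // download; check feature status before the first inference in real code.

    val request = SummarizationRequest.builder(articleText).build()
    val result = summarizer.runInference(request).get() // blocking here for brevity
    return result.summary
}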

22 Aug 2025 4:00pm GMT

Feed: TalkAndroid

Board Kings Free Rolls – Updated Every Day!

Run out of rolls for Board Kings? Find links for free rolls right here, updated daily!

22 Aug 2025 3:50pm GMT

Coin Tales Free Spins – Updated Every Day!

Tired of running out of Coin Tales Free Spins? We update our links daily, so you won't have that problem again!

22 Aug 2025 3:49pm GMT

Coin Master Free Spins & Coins Links

Find all the latest Coin Master free spins right here! We update daily, so be sure to check in daily!

22 Aug 2025 3:47pm GMT

Monopoly Go – Free Dice Links Today (Updated Daily)

If you keep on running out of dice, we have just the solution! Find all the latest Monopoly Go free dice links right here!

22 Aug 2025 3:45pm GMT

Family Island Free Energy Links (Updated Daily)

Tired of running out of energy on Family Island? We have all the latest Family Island Free Energy links right here, and we update these daily!

22 Aug 2025 3:43pm GMT

Crazy Fox Free Spins & Coins (Updated Daily)

If you need free coins and spins in Crazy Fox, look no further! We update our links daily to bring you the newest working links!

22 Aug 2025 3:40pm GMT

Match Masters Free Gifts, Coins, And Boosters (Updated Daily)

Tired of running out of boosters for Match Masters? Find new Match Masters free gifts, coins, and booster links right here! Updated Daily!

22 Aug 2025 3:34pm GMT

Solitaire Grand Harvest – Free Coins (Updated Daily)

Get Solitaire Grand Harvest free coins now, new links added daily. Only tested and working links, complete with a guide on how to redeem the links.

22 Aug 2025 3:30pm GMT

Android 16: Google makes switching and restoring smartphones easier

Changing smartphones has never been particularly fun. Between factory resets, data transfers, and the dreaded set-up screens, the…

22 Aug 2025 3:30pm GMT

Dice Dreams Free Rolls – Updated Daily

Get the latest Dice Dreams free rolls links, updated daily! Complete with a guide on how to redeem the links.

22 Aug 2025 3:29pm GMT

Monopoly Go Events Schedule Today – Updated Daily

Current active events are Main Event - Monopoly Motel, Tournament - Cactus Circuit, and Special Event - Tycoon Racers

22 Aug 2025 3:26pm GMT

Android: the foolproof trick to block unwanted calls and messages

Sick of your phone buzzing at all hours with spam calls or shady texts? You're not alone. Millions…

22 Aug 2025 6:30am GMT

RAID Shadow Legends Free Promo Codes

Look no further! Find the latest RAID Shadow Legends Free Promo Codes right here!

22 Aug 2025 3:22am GMT

Ultimate Tower Defense Simulator Free Codes

Look no further! All the latest Ultimate Tower Defense Simulator Free Codes are right here!

22 Aug 2025 3:19am GMT

Fruit Battlegrounds Free Redeem Codes

Look no further for all the latest Fruit Battlegrounds Free Redeem codes. We have you covered in this article!

22 Aug 2025 3:17am GMT

21 Aug 2025

Feed: Android Developers Blog

64-bit app compatibility for Google TV and Android TV

Posted by Fahad Durrani Product Management, Google TV

Google TV and Android TV will require 64-bit app compatibility to support upcoming 64-bit TV devices starting August 2026.

Following other Android form factors, Google TV and Android TV devices will soon support 64-bit app compatibility. 64-bit apps will offer improved performance, shorter start times, and new viewing experiences on upcoming 64-bit Google TV and Android TV devices.

Starting August 1st, 2026:

  • Any new app or app update that includes native code is required to provide 64-bit (arm64) versions in addition to 32-bit (armeabi-v7a) versions when submitted to Google Play. You can mitigate the size increase of your App Bundle. For more details, see Support 64-bit architectures.

We're not making any changes to 32-bit support, and Google Play will continue to deliver apps to 32-bit devices. The 64-bit requirement means that apps with 32-bit native code will need a 64-bit version as well. You should continue to provide 32-bit binaries alongside 64-bit binaries by using ABI splits in App Bundles.
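
As an illustration, here is a minimal Gradle (Kotlin DSL) sketch of that setup for a typical app module with native code; per-ABI splitting of App Bundles is Google Play's default behavior and is shown explicitly here for clarity.

// app/build.gradle.kts (sketch): build both ARM ABIs and keep per-ABI delivery enabled
android {
    defaultConfig {
        ndk {
            // Package 32-bit and 64-bit ARM native libraries
            abiFilters += listOf("armeabi-v7a", "arm64-v8a")
        }
    }
    bundle {
        abi {
            // Google Play then delivers only the ABI each device needs
            enableSplit = true
        }
    }
}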

How to transition

This requirement only impacts apps that utilize native code. You can check if your app has native code (.so files) with the APK Analyzer. For ARM devices, you can find native libraries in lib/armeabi-v7a (32-bit) or lib/arm64-v8a (64-bit).

For detailed guidance on transitioning to 64-bit, see Support 64-bit architectures.

How to test

  • The Google TV emulator image for macOS devices with Apple Silicon is configured for a 64-bit userspace and may be used for app testing and verification.
  • The NVIDIA Shield models P2571 and P2897 have both 32-bit and 64-bit userspace compatibility and may be used for testing on physical hardware. If your app contains 64-bit libraries, they will be used automatically.
  • 64-bit TV apps may be sideloaded to Pixel (7 or newer) phones after constraining the view window to TV resolution and DPI:

    adb shell wm size 1080x1920
    adb shell wm density 231  # tvdpi
    adb install <package.apk>

Next steps

Prepare your TV apps to comply with 64-bit requirements by August 1st, 2026:

  1. Use the APK Analyzer to check if your app has native code.
  2. Update your native code to support 64-bit and 16 KB memory page size.
  3. Test and verify that your changes work as intended.
  4. Submit your app update to Google Play.

21 Aug 2025 9:30pm GMT

Build your app to meet users in every moment on the newest Pixel devices, from wearables to foldables, and more

Posted by Fahd Imtiaz - Senior Product Manager and Kseniia Shumelchyk - Engineering Manager, Developer Relations

This week at Made by Google, we introduced the new suite of Pixel devices, including the Pixel 10 Pro Fold and Pixel Watch 4. These devices are more than just an evolution in hardware; they are built to showcase the latest updates in Android, creating new possibilities for you to build experiences that are more helpful, personal, and adaptive than before.

Let's explore what this moment means for your apps and how you can start building today.

Give your app more room to shine on foldable and large screens

Pixel 10 pro fold open on the left and back view, closed, on the right

The new Pixel 10 Pro Fold represents the next step in mobile computing, inviting you to think beyond a single screen. With a stunning 8-inch inner display that unfolds to create an immersive, large screen experience and a fully-capable 6.4-inch outer display, your apps have a powerful and flexible stage to shine. Its advanced durability and all-day battery life make this form factor ready for everyday use, raising user expectations for premium app experiences.

Building a truly adaptive app is how you unlock the full potential of this hardware. On the new Pixel 10 Pro Fold, users will multitask with enhanced Split Screen and drag-and-drop, or use hands-free tabletop modes for entertainment. Your app must support resizability and both portrait and landscape orientations to deliver the seamless, dynamic layouts these new experiences demand. Following the best practices on adaptive development is the key to providing an optimal experience on every screen and in every posture.

woman wearing a blue sweater and blue ombre skirt uses a Pixel 10 pro fold

To help you build these adaptive experiences, we offer a suite of powerful tools. You can use existing tools like Jetpack Window Manager and the Compose Adaptive Layouts libraries today. And coming soon to beta, Compose Adaptive Layout library 1.2 will introduce new adaptation strategies like Levitate and Reflow, plus support for Large and Extra-Large width window size classes.

The goal is to not be confined to a single screen, but build one app that works great everywhere, from phones and foldables to tablets and other large screens. This is your opportunity to expand your app's reach and deliver the dynamic experiences users now expect. With the tools at your fingertips, you can start building for every screen today. Learn how you can unlock your app's full potential with adaptive development at developer.android.com/adaptive-apps.
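
As a rough sketch of what this looks like in code, the window size class APIs from the adaptive libraries let you branch your layout on the available width; FeedListDetail and FeedList below are hypothetical composables standing in for your own UI.

import androidx.compose.material3.adaptive.currentWindowAdaptiveInfo
import androidx.compose.runtime.Composable
import androidx.window.core.layout.WindowWidthSizeClass

// Sketch: choose a one- or two-pane layout from the current window size class
@Composable
fun AdaptiveFeed() {
    val sizeClass = currentWindowAdaptiveInfo().windowSizeClass
    when (sizeClass.windowWidthSizeClass) {
        WindowWidthSizeClass.EXPANDED -> FeedListDetail() // unfolded or tablet width
        else -> FeedList() // phone or folded cover display
    }
}

// Hypothetical placeholders for your own UI
@Composable
fun FeedListDetail() { /* two-pane list-detail UI */ }

@Composable
fun FeedList() { /* single-pane list UI */ }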

Bring your most expressive apps to the wrist

a Google Pixel Watch 4 on a user's wrist

The new Pixel Watch 4 is here, and it's the first smartwatch built to showcase the full power of Material 3 Expressive on Wear OS 6. This is where the vision for the platform truly comes to life, allowing you to build stunning, modern apps and tiles without compromising on performance. With this release, you no longer have to choose between beautiful animations and battery life; with Wear OS 6, you can build experiences that are beautiful, helpful, and powerful, all at once.

To get that modern look, you can use the new Material 3 Expressive libraries for Compose on Wear OS, which provide powerful components like the TransformingLazyColumn for fluid lists and the EdgeButton to create UIs that feel natively built for the wrist.

moving image of Material 3 Expressive libraries for Compose on Wear OS demo

This focus on design naturally extends to the centerpiece of the user's experience, the watch face itself. To give you more creative control, we've introduced version 4 of the Watch Face Format, which unlocks possibilities like fluid, animated state transitions and lets users select their own photos for the background. And to help developers create their own watch face marketplaces, we've introduced the Watch Face Push API. We've partnered with well-known watch face developers - including Facer, TIMEFLIK, WatchMaker, and Pujie - who are bringing their unique watch face experiences to the new devices that users can already get today.

All of this is built on a more reliable and efficient foundation, with watches updating to Wear OS 6 seeing up to a 10% improvement in battery life and quicker app launches. This gives you the confidence to use these new creative tools, knowing your app will perform beautifully. Start building apps for the wrist using the resources and guidance at developer.android.com/wear.

Ready to build for every screen today?

open Pixel 10 Fold on the left and Pixel Watch 4 on the right

The opportunities for your app are bigger than ever, and you can start today. See how your app performs across screen sizes by using the resizable emulator in Android Studio, and explore our large-screen design gallery for inspiration.

For your wearables, the best way to begin is by upgrading your UI with the new Material 3 Expressive libraries for Compose on Wear OS and exploring the engaging experiences you can build with the Watch Face Push API. Finally, use the Wear OS 6 emulator to test and verify your app's experience.

You can find all the resources you need, including documentation, samples, and guides at developer.android.com/adaptive-apps and developer.android.com/wear.

We can't wait to see what you develop next!

21 Aug 2025 4:00pm GMT

#WeArePlay: How Maliyo Games is turning local culture into global hits

Posted by Robbie McLachlan - Developer Marketing


In our latest #WeArePlay film, which celebrates the people behind apps and games on Google Play, we meet Hugo, the founder of Maliyo Games. He is on a mission to put African stories and talent on the global gaming map by creating vibrant games inspired by local life and culture. Discover how he is building not just games, but an entire ecosystem for game development on the continent.


You went from a career in finance to becoming a pioneer in Africa's games industry. What inspired that leap?

I've always had a passion for economics and business, but after some years in finance, I wanted to move back to Nigeria and help build something new. I noticed a problem on our continent: we were huge consumers of digital content but not creators. Seeing the passion for our local music and film, I knew we could bring that same energy to gaming. My mission became clear: to shift us from being 'net consumers' to 'net creators,' using games as a medium to take our unique stories and culture to the world.

Hugo, founder of Maliyo Games, Nigeria

Your games are bursting with African culture. Why is it so important to you to tell these specific stories?

Because these are our stories to tell! I think of us as storytellers first, and games are our medium. We create games based on shared experiences from our childhoods and feature things everyone can relate to, like our love for property development and redesign in Safari City.

When people in Africa play our games, the reaction is pure joy and surprise. They say, "I can't believe you guys built this!" because they see themselves and their lives reflected. It's not just about enjoyment; it's about a deeper, emotional connection.

a user holds a mobile device with a Maliyo game on screen

Building a game studio where there was no established industry must have been tough. How did you tackle the challenge of finding and nurturing talent on the continent?

It was definitely hard. When we started in 2012, the biggest challenge wasn't just finding skills, but finding the right mindset; so we started GameUp Africa, a free, pan-African training program. It was a longer road, for sure, but it has transformed everything. Today, 90% of our team came through that program. We're now a team of about 30 people from Nigeria, Ghana, Kenya, and more. Seeing these young, brilliant creators, some as young as 17, building their careers with us is the most satisfying part of what I do.

a Game Up Africa cohort along with Hugo surrounds a student to observe their screen

How has Google Play helped you achieve global reach from your base in Lagos?

For us, Google Play was a no-brainer. Africa is primarily an Android market, so it's our primary distribution platform to reach our audience here and in the diaspora. But it's more than just a storefront; it's the entire infrastructure. We use the full suite of tools: Firebase's analytics give us incredible insights into player behaviour, Google AdMob helps us monetize, and the testing tools let us experiment with new features. Google Play makes it possible for a studio in Lagos to build, scale, and operate as a truly global company without having to build that foundation ourselves.

What is next for Maliyo Games?

Now, our sights are set on growth. The next big goal is to get one of our games to one million monthly active users, as we build towards our long-term ambition of reaching an engaged community of 500 million by 2030.


Discover other inspiring app and game founders featured in #WeArePlay.

21 Aug 2025 1:00pm GMT

20 Aug 2025

Feed: Android Developers Blog

Android 16 QPR2 Beta 1 is here

Posted by Matthew McCullough - VP of Product Management, Android Developer


Today we're releasing Android 16 quarterly platform release 2 (QPR2) Beta 1, providing you with an early opportunity to try out the APIs and features that are moving Android forward. This beta focuses on several key improvements:

  • Enhanced User Experience: A better experience across all form factors, from phones to foldables and tablets.
  • Enabling Richer Apps: New APIs for creative expression, productivity, media, and connectivity.
  • Developer Productivity: New platform features to help you debug and test your apps.

A minor SDK version

This release marks the first Android beta with a minor SDK version, allowing us to innovate more rapidly with new platform APIs delivered outside of our usual once-yearly timeline. Unlike the major platform release in Q2, which included behavior changes that impact app compatibility, the changes in this release are largely additive and designed to minimize the need for additional app testing.

Android 16 SDK release cadence

Your app can safely call the new APIs on devices where they are available by using SDK_INT_FULL and the respective value from the VERSION_CODES_FULL enumeration.

if (Build.VERSION.SDK_INT_FULL >= Build.VERSION_CODES_FULL.BAKLAVA_1) {
    // Call new APIs from the Android 16 QPR2 release
}

You can also use the Build.getMinorSdkVersion() method to get just the minor SDK version number.

val minorSdkVersion = Build.getMinorSdkVersion(VERSION_CODES_FULL.BAKLAVA)

The original VERSION_CODES enumeration can still be used to compare against SDK_INT for APIs declared in major (non-minor) releases.

if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.BAKLAVA) {
    // Call new APIs from the Android 16 release
}

Since minor releases aren't intended to have breaking behavior changes, they cannot be used in the uses-sdk manifest attributes.

UI, system experience, and accessibility

This release introduces refinements to the system UI, user experience, and accessibility, from theming changes to input handling to new APIs for adaptive apps.

Dark theme's new expanded option

To create a more consistent experience for users who have low vision or photosensitivity, or who simply prefer a dark system-wide appearance, an expanded option under dark theme is being introduced. When enabled by a user, the system will intelligently invert the UI of apps that appear light despite the user having selected the dark theme.

A split image showing standard light theme on the left and expanded dark theme on the right on a pixel device

The system uses your app's isLightTheme theme attribute to determine whether to apply inversion. If your app inherits from one of the standard DayNight themes, this is done automatically for you; otherwise, make sure to declare isLightTheme="false" in your dark theme so your app is not inadvertently inverted. Standard Android Views, Composables, and WebViews will be inverted, while custom rendering engines like Flutter will not. The system also automatically darkens your app's splash screen and adjusts the status bar color for contrast.

This is largely intended as an accessibility feature. We strongly recommend implementing a native dark theme, which gives you full control over your app's appearance; you can protect your brand's identity, ensure text is always readable, and prevent any visual glitches from happening when your UI is automatically inverted, guaranteeing a polished, reliable experience for your users.

Auto-themed app icons

We recommend that apps control the design of their themed app icon by including a monochrome layer within their adaptive icon. You can preview the themed version of your app icon using Android Studio.

Android 16 QPR2 can automatically generate a themed icon for your app if you don't provide a dedicated one. The system applies a color filtering algorithm to your existing launcher icon to render it in a monochrome style, allowing it to integrate with the user's chosen theme.


Interactive chooser sessions

This new capability allows your app's UI to remain fully interactive when the system sharesheet is open. You can display custom UI, dynamically update the content or targets in the Chooser, and programmatically control its state. You'll use the new ChooserManager to start an interactive session and the ChooserSession object to manage it.

Smoother Android migrations

A new 3rd-party Data Transfer API is being introduced to enable more reliable and secure data migration between Android and iOS devices. Your app can now opt-in to participate in cross-platform data transfers. This requires updating your app's data extraction rules XML with a new <cross-platform-transfer> tag and implementing custom logic in the BackupAgent to export and import app data to and from other platforms. New methods are also being added to BackupAgent, such as onMeasureFullBackup, to give you more control over the backup process for large datasets.

PDF document editing

The android.graphics.pdf package has been significantly expanded to support annotating and editing PDF documents. This package provides core APIs for apps that wish to create their own PDF user experience, and is the foundation for the Jetpack PDF library, which also provides the UI for an embedded PDF viewer. The PdfRenderer.Page class has been extended with new annotation and editing capabilities.

With these new APIs, your apps can support use cases such as form filling, document signing, document review/collaboration, interactive study/note taking, and more. We're also working to bring these annotation and editing capabilities to the Jetpack PDF library to further simplify the integration of these features.

Display Topology API

To support advanced multi-display experiences, the new Display Topology API provides your app with information about how multiple displays are arranged - their relative positions and absolute bounds. A new Display.isInternal() method helps distinguish between built-in and other screens. You can also register a TopologyListener to receive real-time updates when the display setup changes.

Device-aware ViewConfiguration

ViewConfiguration values (e.g., touch slop, long press timeout) can now be tailored to individual virtual devices. This means that an app running on a virtual device will now use configuration values appropriate for that device's characteristics, not the host device's.

To ensure your app behaves correctly in multi-display scenarios (e.g., an activity on the phone and another on a connected smart display), you should migrate from static ViewConfiguration methods to instance-based methods by calling ViewConfiguration.get(context).

// Instead of this:
// val longPressTimeout = ViewConfiguration.getLongPressTimeout()

// Do this, using the specific Activity's context:
val vc = ViewConfiguration.get(myActivityContext)
val longPressTimeout = vc.longPressTimeout

More granular haptic feedback control

A new API allows you to specify the usage in terms of VibrationAttributes (e.g., USAGE_TOUCH) when triggering haptic feedback. This ensures your app's vibrations align more precisely with user-defined intensity settings for different contexts, like touch vs. accessibility.

Use the new View.performHapticFeedback(HapticFeedbackRequest) method to pass a request that specifies both the HapticFeedbackConstant and the desired Usage. Existing calls will continue to work as before.

Quick Settings Tile categories

To improve the discoverability of your app's Quick Settings tiles, you can now optionally assign them to a predefined category. By adding a <meta-data> tag to your TileService declaration in the AndroidManifest.xml, your tile can be grouped with similar system tiles in the Quick Settings Edit mode.

Example for a connectivity-related tile:

<service
    android:name=".MyConnectivityTileService"
    ... >
    <intent-filter>
        <action android:name="android.service.quicksettings.action.QS_TILE" />
    </intent-filter>
    <meta-data
        android:name="android.service.quicksettings.TILE_CATEGORY"
        android:value="android.service.quicksettings.CATEGORY_CONNECTIVITY" />
</service>
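
For completeness, the service the manifest entry points at is an ordinary TileService; here is a minimal Kotlin sketch of MyConnectivityTileService matching the declaration above.

import android.service.quicksettings.Tile
import android.service.quicksettings.TileService

// Minimal sketch of the TileService referenced by the manifest entry above
class MyConnectivityTileService : TileService() {
    override fun onClick() {
        // Toggle your feature, then reflect the new state on the tile
        qsTile?.apply {
            state = if (state == Tile.STATE_ACTIVE) Tile.STATE_INACTIVE else Tile.STATE_ACTIVE
            updateTile()
        }
    }
}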

Additional UI and System Experience updates

  • Controlled Mouse Scrolling: A new mouse system setting allows users to enable "Controlled Scrolling" for external mice, which makes scrolling speed directly proportional to the physical wheel movement.
  • Picture-in-Picture (PiP) Refactoring: The underlying mechanics of PiP transitions have been refactored, resulting in smoother and more reliable animations.
  • Public System Update Intent: The android.settings.ACTION_SYSTEM_UPDATE_SETTINGS intent action is now a public API, providing a standardized way for apps to direct users to their device's system update page. See the documentation for how to launch this intent securely.
  • Time Zone Notifications: The system will now notify users when their time zone is automatically changed.
  • Files Desktop UX: The DocumentsUI file manager is receiving a Material 3 Expressive design refresh and will show "Visual Signals" for file operations.
  • Printer Info Screen: The Android Default Print Service now displays a more comprehensive printer information screen, including status and supply levels.

Media and Audio

This release brings support for new audio formats, provides more granular control over audio playback, and enhances the volume experience for voice interactions.

IAMF decoding support

Android 16 QPR2 adds software decoding for Immersive Audio Model and Formats (IAMF) audio. IAMF is a new open-source spatial audio format, available under a royalty free license from Alliance for Open Media. The IAMF decoder supports Opus, PCM, AAC and FLAC audio within IAMF files, in full compliance with the IAMF specification. You can leverage IAMF to deliver rich, immersive audio experiences in your Android apps.

ExoPlayer will automatically use the framework IAMF decoder when available. For backwards compatibility, the IAMF ExoPlayer Extension can also be used to decode IAMF.

Personal Audio Sharing in Output Switcher

Personal Audio Sharing for Bluetooth Low Energy (LE) Audio devices is now integrated directly into the system's Output Switcher. This system-level UI enhancement provides a more intuitive and consistent way for users to manage and share audio from your app to multiple LE Audio devices without requiring any changes to your existing audio playback code.

New AAudio APIs for performance and control

The native AAudio library for high-performance audio has been updated. These new APIs provide more control and better performance for demanding audio applications that rely on the NDK, especially those focused on power-efficient, high-quality playback.

  • Partial Buffer Processing in Callbacks: A new data callback, AAudioStream_partialDataCallback, allows your app to specify exactly how many frames it has processed. This gives you more flexibility when working with large data buffers (like in compressed offload scenarios), as you no longer need to provide the entire requested buffer at once.
  • PCM Offload over MMAP: To improve power efficiency, AAudio now supports PCM offload over the MMAP path. You can request this by setting the performance mode to AAUDIO_PERFORMANCE_MODE_POWER_SAVING_OFFLOADED. A new API, AAudioStream_flushFromFrame, is also available for MMAP offload streams to reset the playback position when a user seeks or skips a track.

Additional Media and Audio updates

  • HDR/SDR Brightness Slider: A new system-level slider allows users to adjust the perceived brightness of HDR content. Your app's HDR content will automatically adapt to this user preference without any required code changes.

Connectivity

New APIs are available to support emerging connectivity standards, enhance device management, and give users more control over network privacy.

Companion Device Management enhancements

The Companion Device Manager (CDM) is receiving several updates to improve cross-app interactions and user control in system Settings.

  • Custom device icons: Your app can now provide a custom icon for self-managed device associations by supplying a Bitmap using the new setDeviceIcon() method on the AssociationRequest.Builder. The icon will be displayed in system dialogs and settings, creating a more recognizable and trusted user experience. You can also retrieve the icon for an existing association using AssociationInfo.getDeviceIcon(). See the sketch after this list.
  • Association removal notifications: Your app can now listen for the EVENT_ASSOCIATION_REMOVED callback via startObservingDevicePresence. This event fires when a user "forgets" a device in system Settings or when your app's data is cleared, allowing your app to maintain an accurate connection state.
  • Cross-App verification: System apps can now verify if your companion app has a legitimate association with a device and monitor the presence of devices managed by your app using the DeviceId created during association with the new createAndSetDeviceId API.
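
As a minimal sketch of the custom icon API from the first item above, assuming a self-managed association (setDeviceIcon() is the new method described in this post; the display name and Bitmap are your own):

import android.companion.AssociationRequest
import android.graphics.Bitmap

// Sketch: attach a custom icon to a self-managed association
fun buildAssociationRequest(deviceIcon: Bitmap): AssociationRequest =
    AssociationRequest.Builder()
        .setSelfManaged(true)
        .setDisplayName("My Companion Device")
        .setDeviceIcon(deviceIcon) // new API: shown in system dialogs and Settings
        .build()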

Additional connectivity updates

MediaRouter Network Privacy improvements

To support casting to devices over new mediums like Bluetooth and UWB, the MediaRouter framework is evolving. Your app can now cast to a wider array of devices, including in-car displays and gym equipment, while contributing to a more privacy-preserving discovery model.

The recommended approach is to use the system Output Switcher, which handles discovery over sensitive mediums without requiring your app to hold extra permissions. If your app uses a custom in-app picker and you want to discover devices over these new mediums, you will need to request permissions from the NEARBY_DEVICES permission group (e.g., BLUETOOTH_SCAN). New MediaRoute2Info.Builder methods are available for route providers to declare required permissions.

Privacy and Security

This release continues to enhance user privacy and device security with new features for locking devices and managing sensitive data.

Secure Lock Device

A new system-level security state, Secure Lock Device, is being introduced. When enabled (e.g., remotely via "Find My Device"), the device locks immediately and requires the primary PIN, pattern, or password to unlock, heightening security. When active, notifications and quick affordances on the lock screen will be hidden, and biometric unlock may be temporarily disabled.

Phone Theft Protection toggle

A user-facing toggle is being added to Theft Protection Settings, allowing users to enable or disable the "Failed Authentication Lock" security feature (introduced in Android 15) that automatically locks down the device after multiple failed login attempts.

Additional Security updates

  • KeyInfo improvement: The KeyInfo class now provides an isUnlockedDeviceRequired() method, which checks whether the key can be used only when the device is unlocked.

Developer productivity

New features and APIs are available to streamline debugging, testing, and profiling.

Widget engagement metrics

New AppWidgetManager APIs allow you to query for user interaction events with your widgets within a given time range, including clicks, scrolls, and impressions, providing data you can use to help you improve your widget's design.

Early warnings for 16KB page size compatibility

To help you prepare for the future requirement that all apps are 16 KB page-aligned, Android will now show alignment warnings on 4 KB production devices for debuggable apps installed via ADB. If your app is not 16 KB-aligned, a dialog will appear at launch, listing the specific native libraries that need to be fixed - allowing you to address them ahead of the Play Store deadline.


Enhanced profiling with new system triggers

The ProfilingManager has added support for new system-initiated profiling triggers, including when your app is killed by the user from the Recents screen, Force Stop, or the task manager. You can also now request the currently running background system trace using ProfilingManager.requestRunningSystemTrace(), allowing you to capture profiling data from before the request takes place. Note that the background trace runs intermittently and will not be available all the time.

Debug printing with a new developer toggle

A new "Verbose print logging" toggle is now available in Developer Options. When enabled, the Android Print Framework and associated services will output additional debug information to logcat, which can help you troubleshoot printing-related issues in your apps.

More robust testing for desktop and multi-display experiences

To facilitate more robust testing of your apps on connected displays, new public APIs are available in UiAutomation to programmatically capture screenshots on non-default displays. Additionally, the AccessibilityWindowInfo.refresh() method is now public, allowing accessibility services to ensure they are working with the most up-to-date window information.

You can integrate these new UiAutomation capabilities into your test suites to expand coverage for your app's desktop mode or external monitor use cases. For accessibility service developers, calling refresh() can improve the reliability of your service.

  • API for Backported Fixes: Android 16 QPR2 contains support for the upcoming androidx.core:core-backported-fixes library, which will allow your app to programmatically query if a specific critical bug has been fixed on a device, enabling you to roll out features that depend on the fix much faster, without waiting for an OS release.
  • GUI Apps in Linux Terminal: The Linux terminal feature is being expanded to support running Linux GUI applications directly within the terminal environment virtual machine.
  • RootView Changed Listener: The WindowInspector class now includes addGlobalWindowViewsListener(), which allows your app or testing framework to be notified in real-time when root views (like Toasts) are added or removed, improving telemetry and test efficiency.

Program timeline

The Android 16 QPR2 beta program runs from August 2025 until the final public release in Q4. At key development milestones, we'll deliver updates for your development and testing environments. Each update includes SDK tools, system images, emulators, API reference, and API diffs. We'll highlight new APIs and features for you to try out as they are ready to test in blogs and on the Android 16 developer website.

Android 16 program timeline highlighting beta releases in August

We're targeting October of 2025 for our Platform Stability milestone. At this milestone, we'll deliver final SDK/NDK APIs. From that time you'll have several months before the final release to complete any integrations. Check out the release timeline details for milestones and updates.

Get started with the Android 16 QPR2 beta

You can enroll any supported Pixel device to get this and future Android Beta updates over-the-air. If you don't have a Pixel device, you can use the 64-bit system images with the Android Emulator in Android Studio. If you are already in the Android Beta program, you will be offered an over-the-air update to Beta 1. We'll update the system images and SDK regularly throughout the Android 16 QPR2 release cycle.

If you are in the Canary program and would like to enter the Beta program, you will need to wipe your device and manually flash it to the beta release.

For the best development experience with Android 16 QPR2, we recommend that you use the latest Canary build of the Android Studio Narwhal feature drop.

We're looking for your feedback so please report issues and submit feature requests on the feedback page. The earlier we get your feedback, the more we can include in our work on the final release.

Thank you for helping to shape the future of the Android platform.

20 Aug 2025 6:28pm GMT

14 Aug 2025

Feed: Android Developers Blog

Accelerating development with monthly releases for Android Studio - releasing 2X more often than before

Posted by Xavier Ducrohet - Tech Lead, Android Studio and Adarsh Fernando - Group Product Manager, Android Studio

Last year, we doubled our release frequency for Android Studio with the introduction of Feature Drops, a change designed to bring you new features and improvements more quickly. Today, we're excited to announce the next evolution in our release schedule: we're moving to monthly stable releases of Android Studio.

This new cadence means you'll be able to get your hands on the latest features and critical improvements, faster than ever before. Here's what you can expect: every few months, we'll introduce a version that contains the latest IntelliJ platform version, such as Android Studio Narwhal based on IntelliJ 2025.1. You'll then see Feature Drops each month that include important bug fixes and new functionality you'll want to try out, until it's time to release Android Studio with the next platform version of IntelliJ.

Android Studio Narwhal release cadence 2025

You've actually already experienced this new release cadence with Android Studio Narwhal! In the last Feature Drop release we were able to take features such as Agent Mode from Canary to the Stable channel faster than ever before, making it possible for you to try out new features, faster!

Why Monthly Releases?

You told us waiting for the next major release to get a critical bug fix or a quality-of-life improvement can be frustrating. With the move to monthly releases, we can deliver these updates to you without the long delays. This means you'll have access to the features you want and the fixes you need, right when you need them. It's important to note that the Android Emulator and the Android Gradle Plugin will continue to be updated separately from Android Studio at a pace of every two months. And, as always, you don't need to update these components to download and use the latest stable version of the IDE each month.

Our Commitment to Quality

A faster release cadence doesn't mean a compromise on quality. In fact, our ability to release more frequently is a direct result of our long-term investment in our testing infrastructure. This effort began with Project Marble, a concerted effort to improve the quality and testing of Android Studio. Since then, we've been continuously improving and tooling our testing strategy to be more reliable, and to get feedback from those tests faster. Last year, we reached a point where we could confidently double our releases, and now, we're ready to take the next step with monthly updates. This means you'll see releases 2X more often than before!

We're also continuing to provide early access to stable-ready releases. Previously, we've provided these opportunities first through Beta releases. With our investments in detecting and addressing issues earlier, we're able to take a release from Canary directly into our Release Candidate (RC) channel with a stable-ready level of quality and polish. This gives you a chance to try out the latest features and improvements before they're released to the stable channel and to provide us with valuable feedback.

Update Monthly and Help Us Improve

We encourage you to update to the latest stable version of Android Studio each month to take advantage of the latest features and improvements. Your feedback is essential to helping us make Android Studio the best it can be.

Here's how you can get involved:

  • Download Android Studio Narwhal 3 Feature Drop: It's currently available in the Canary channel and is the best way to get early access to new features and to provide us with feedback before a release is finalized. If you want a more stable build, download this version as soon as the Release Candidate becomes available.
  • Report a bug: If you encounter an issue, please let us know by reporting a bug. This helps us to identify and fix issues more quickly.

We're excited about this new chapter for Android Studio and we're confident that it will help you to build better apps, faster. As always, you can be part of our vibrant Android developer community on LinkedIn, Medium, YouTube, or X.

14 Aug 2025 5:01pm GMT

Test on a fleet of physical devices with Android Device Streaming, now with Android Partner Device Labs

Posted by Adarsh Fernando - Group Product Manager, Android Studio, and Grant Yang - Sr. Product Manager, Omnilab

Today, we're excited to give you an update on Android Device Streaming and announce that Android Partner Device Labs are now stable and available in the latest stable release of Android Studio Narwhal Feature Drop!

moving image of Android Device Streaming functionality in Android Studio Narwhal Feature Drop

Streamline your testing with Android Device Streaming

Android Device Streaming, powered by Firebase, allows you to securely connect to remote physical Android devices hosted in Google's secure data centers. This means you can test your app on a wide variety of devices and Android versions without leaving Android Studio, or having to purchase every device you want to test. This helps you:

  • Test on the latest hardware: Get your hands on the newest devices, including those not yet available to the public.
  • Cover a wide range of devices: Test on various form factors, from phones to foldables, from a multitude of manufacturers.
  • Improve developer productivity: Save time and resources by testing on real devices directly from your development environment.

At Google I/O 2025 earlier this year, we announced that Android Device Streaming graduated to stable. With that, we added the latest Google Pixel devices to the catalog, including the Pixel 9, Pixel 9 Pro, Pixel 9 Pro XL, and Pixel 9 Pro Fold. We are also working to add the upcoming Pixel 10 devices to the catalog soon.

We also announced that we are partnering with leading OEMs to bring you an even greater selection of devices through Android Partner Device Labs. And best of all, with our monthly quota of minutes, you can start testing Android Device Streaming with a wide range of devices at no cost. Usage beyond the monthly quota of minutes may incur a charge, as described on the Firebase pricing page.

Introducing Android Partner Device Labs

Android Partner Device Labs are now stable and available in Android Studio Narwhal Feature Drop. Android Partner Device Labs give you access to a fleet of physical devices from our OEM partners, including Samsung, Xiaomi, OPPO, OnePlus, vivo, and more. This gives you the ability to test your apps on the specific devices your users have, ensuring a better user experience for everyone.

Here are some of the devices available today through our partners:

Android Partner Device Labs featuring a fleet of physical devices from our OEM partners, including OPPO, OnePlus, Samsung, Xiaomi, and vivo, in Android Studio Narwhal Feature Drop

Get started with partner devices

To start using devices from Android Partner Device Labs, follow these steps in Android Studio:

  1. Open the Device Manager by navigating to View > Tool Windows > Device Manager.
  2. Click on the Firebase icon and log in to your Google Developer account.
  3. Select a Firebase project with billing enabled.
  4. You will now see the list of available devices, including those from our partners.

For team development, a Firebase project administrator (Owner or Editor) will need to enable access to each OEM's lab of devices. To do so, the admin should go to the Google Cloud project page, ensure the correct project is selected, and then enable the desired device lab by using the toggle and following the on-screen prompts. Once enabled, the entire team will have access to those devices in Android Studio.

Android Device Streaming Pricing

You can learn more about Android Device Streaming quota and pricing on the Firebase pricing page. Devices in the Android Partner Device Labs are available at the same pricing, and with the same no-cost monthly quota of minutes, as all other devices in the Android Device Streaming catalog (unless otherwise specified).

Wrapping it up

Android Device Streaming, now with the addition of Android Partner Device Labs, gives you an unparalleled selection of physical devices to test your apps. With a growing catalog of devices from Google and our OEM device partners, you can ensure your app works flawlessly for all your users.

We invite you to download the latest stable release of Android Studio and try out Android Device Streaming and the new Android Partner Device Labs. With our generous monthly quota, you can start testing on a wide range of devices at no cost. We are constantly updating the device catalog, so be sure to check back often for new additions.

Happy streaming!

14 Aug 2025 5:00pm GMT

13 Aug 2025

Feed: Android Developers Blog

What’s new in the Jetpack Compose August ’25 release

Posted by Meghan Mehta - Developer Relations Engineer and Nick Butcher - Product Manager

Today, the Jetpack Compose August '25 release is stable. This release contains version 1.9 of the core Compose modules (see the full BOM mapping), introducing new APIs for rendering shadows, 2D scrolling, rich styling of text transformations, improved list performance, and more!

To use today's release, upgrade your Compose BOM version to 2025.08.00:

implementation(platform("androidx.compose:compose-bom:2025.08.00"))

Shadows

We're happy to introduce two highly requested modifiers, Modifier.dropShadow() and Modifier.innerShadow(), which let you render box-shadow effects (in contrast to the existing Modifier.shadow(), which renders elevation-based shadows using a lighting model).

Modifier.dropShadow()

The dropShadow() modifier draws a shadow behind your content. You can add it to your composable chain and specify the radius, color, and spread. Remember, content that should appear on top of the shadow (like a background) should be drawn after the dropShadow() modifier.

@Composable
@Preview(showBackground = true)
fun SimpleDropShadowUsage() {
    val pinkColor = Color(0xFFe91e63)
    val purpleColor = Color(0xFF9c27b0)
    Box(Modifier.fillMaxSize()) {
        Box(
            Modifier
                .size(200.dp)
                .align(Alignment.Center)
                .dropShadow(
                    RoundedCornerShape(20.dp),
                    dropShadow = DropShadow(
                        15.dp,
                        color = pinkColor,
                        spread = 10.dp,
                        alpha = 0.5f
                    )
                )
                .background(
                    purpleColor,
                    shape = RoundedCornerShape(20.dp)
                )
        )
    }
}
Figure 1. Drop shadow drawn all around shape


Modifier.innerShadow()

The Modifier.innerShadow() draws shadows on the inset of the provided shape:

@Composable
@Preview(showBackground = true)
fun SimpleInnerShadowUsage() {
    val pinkColor = Color(0xFFe91e63)
    val purpleColor = Color(0xFF9c27b0)
    Box(Modifier.fillMaxSize()) {
        Box(
            Modifier
                .size(200.dp)
                .align(Alignment.Center)
                .background(
                    purpleColor,
                    shape = RoundedCornerShape(20.dp)
                )
                .innerShadow(
                    RoundedCornerShape(20.dp),
                    innerShadow = InnerShadow(
                        15.dp,
                        color = Color.Black,
                        spread = 10.dp,
                        alpha = 0.5f
                    )
                )
        )
    }
}
Figure 2. Modifier.innerShadow() applied to a shape


The order for inner shadows is very important. The inner shadow draws on top of the content, so for the example above, we needed to move the inner shadow modifier after the background modifier. We'd need to do something similar when using it on top of something like an Image. In this example, we've placed a separate Box to render the shadow in the layer above the image:

@Composable
@Preview(showBackground = true)
fun PhotoInnerShadowExample() {
    Box(Modifier.fillMaxSize()) {
        val shape = RoundedCornerShape(20.dp)
        Box(
            Modifier
                .size(200.dp)
                .align(Alignment.Center)
        ) {
            Image(
                painter = painterResource(id = R.drawable.cape_town),
                contentDescription = "Image with Inner Shadow",
                contentScale = ContentScale.Crop,
                modifier = Modifier.fillMaxSize()
                    .clip(shape)
            )
            Box(
                modifier = Modifier.fillMaxSize()
                    .innerShadow(
                        shape,
                        innerShadow = InnerShadow(15.dp,
                            spread = 15.dp)
                    )
            )
        }
    }
}
Figure 3. Inner shadow on top of an image


New Visibility modifiers

Compose UI 1.8 introduced onLayoutRectChanged, a new performant way to track the location of elements on screen. We're building on top of this API to support common use cases by introducing onVisibilityChanged and onFirstVisible. These APIs accept optional parameters for the minimum fraction or amount of time the item has been visible for before invoking your action.

Use onVisibilityChanged for UI changes or side effects that should happen based on visibility, like automatically playing and pausing videos or starting an animation:

LazyColumn {
  items(feedData) { video ->
    VideoRow(
        video,
        Modifier.onVisibilityChanged(minDurationMs = 500, minFractionVisible = 1f) {
          visible ->
            if (visible) video.play() else video.pause()
          },
    )
  }
}

Use onFirstVisible when you wish to react to an element first becoming visible on screen, for example to log impressions:

LazyColumn {
    items(100) {
        Box(
            Modifier
                // Log impressions when item has been visible for 500ms
                .onFirstVisible(minDurationMs = 500) { /* log impression */ }
                .clip(RoundedCornerShape(16.dp))
                .drawBehind { drawRect(backgroundColor) }
                .fillMaxWidth()
                .height(100.dp)
        )
    }
}

Rich styling in OutputTransformation

BasicTextField now supports applying styles like color and font weight from within an OutputTransformation.

The new TextFieldBuffer.addStyle() methods let you apply a SpanStyle or ParagraphStyle to change the appearance of text, without changing the underlying TextFieldState. This is useful for visually formatting input, like phone numbers or credit cards. This method can only be called inside an OutputTransformation.

// Format a phone number and color the punctuation
val phoneTransformation = OutputTransformation {
    // 1234567890 -> (123) 456-7890
    if (length == 10) {
        insert(0, "(")
        insert(4, ") ")
        insert(9, "-")

        // Color the added punctuation
        val gray = Color(0xFF666666)
        addStyle(SpanStyle(color = gray), 0, 1)
        addStyle(SpanStyle(color = gray), 4, 5)
        addStyle(SpanStyle(color = gray), 9, 10)
    }
}

BasicTextField(
    state = myTextFieldState,
    outputTransformation = phoneTransformation
)

LazyLayout

The building blocks of LazyLayout are all now stable! Check out LazyLayoutMeasurePolicy, LazyLayoutItemProvider, and LazyLayoutPrefetchState to build your own Lazy components.

Prefetch Improvements

There are now significant scroll performance improvements in Lazy List and Lazy Grid with the introduction of new prefetch behavior. You can now define a LazyLayoutCacheWindow to prefetch more content. By default, only one item is composed ahead of time in the direction of scrolling, and after something scrolls off screen it is discarded. You can now customize how many items to prefetch ahead and retain behind, specified as a fraction of the viewport or as a dp size. When you opt into using LazyLayoutCacheWindow, items in the ahead area begin prefetching straight away.

The configuration entry point for this is on LazyListState, which takes in the cache window size:

@OptIn(ExperimentalFoundationApi::class)
@Composable
private fun LazyColumnCacheWindowDemo() {
    // Prefetch items 150.dp ahead and retain items 100.dp behind the visible viewport
    val dpCacheWindow = LazyLayoutCacheWindow(ahead = 150.dp, behind = 100.dp)
    // Alternatively, prefetch/retain items as a fraction of the list size
    // val fractionCacheWindow = LazyLayoutCacheWindow(aheadFraction = 1f, behindFraction = 0.5f)
    val state = rememberLazyListState(cacheWindow = dpCacheWindow)
    LazyColumn(state = state) {
        items(1000) { Text(text = "$it", fontSize = 80.sp) }
    }
}
lazylayout in Compose 1.9 release

Note: Prefetch composes more items than are currently visible, and the new cache window API will likely increase prefetching. This means an item's LaunchedEffects and DisposableEffects may run earlier; do not use them as a signal for visibility, e.g. for impression tracking. Instead, we recommend using the new onFirstVisible and onVisibilityChanged APIs. Even if you're not manually customizing LazyLayoutCacheWindow now, avoid using composition effects as a signal of content visibility, as this new prefetch mechanism will be enabled by default in a future release.

Scroll

2D Scroll APIs

Following the release of Draggable2D, Scrollable2D is now available, bringing two-dimensional scrolling to Compose. While the existing Scrollable modifier handles single-orientation scrolling, Scrollable2D enables both scrolling and flinging in 2D. This allows you to create more complex layouts that move in all directions, such as spreadsheets or image viewers. Nested scrolling is also supported, accommodating 2D scenarios.

val offset = remember { mutableStateOf(Offset.Zero) }
Box(
    Modifier.size(150.dp)
        .scrollable2D(
            state =
                rememberScrollable2DState { delta ->
                    offset.value = offset.value + delta // update the state
                    delta // indicate that we consumed all the pixels available
                }
        )
        .background(Color.LightGray),
    contentAlignment = Alignment.Center,
) {
    Text(
        "X=${offset.value.x.roundToInt()} Y=${offset.value.y.roundToInt()}",
        style = TextStyle(fontSize = 32.sp),
    )
}
moving image of 2D scroll API demo

Scroll Interop Improvements

There are bug fixes and new features to improve scroll and nested scroll interop with Views, including the following:

  • Fixed the dispatching of incorrect velocities during fling animations between Compose and Views.
  • Compose now correctly invokes the View's nested scroll callbacks in the appropriate order.

Improve crash analysis by adding source info to stack traces

We have heard from you that it can be hard to debug Compose crashes when your own code does not appear in the stack trace. To address this, we're introducing a new opt-in API that adds richer crash location details, including composable names and source locations, enabling you to:

  • Efficiently identify and resolve crash sources.
  • More easily isolate crashes for reproducible samples.
  • Investigate crashes that previously only showed internal stack frames.

Note that we do not recommend using this API in release builds due to the performance impact of collecting this extra information; it also does not work in minified APKs.

To enable this feature, add the line below to the application entry point. Ideally, this configuration should be performed before any compositions are created to ensure that the stack trace information is collected:

class App : Application() {
    override fun onCreate() {
        super.onCreate()
        // Enable only for debug builds to avoid perf regressions in release
        Composer.setDiagnosticStackTraceEnabled(BuildConfig.DEBUG)
    }
}

New annotations and Lint checks

We are introducing a new runtime-annotation library that exposes annotations used by the compiler and tooling (such as lint checks). This allows non-Compose modules to use these annotations without a dependency on the Compose runtime library. The @Stable, @Immutable, and @StableMarker annotations have moved to runtime-annotation, allowing you to annotate classes and functions that do not depend on Compose.

Additionally, we have added two new annotations and corresponding lint checks:

  • @RememberInComposition: An annotation that can mark constructors, functions, and property getters to indicate that they must not be called directly inside composition without being remembered. Errors are raised by a corresponding lint check; see the sketch below.
  • @FrequentlyChangingValue: An annotation that can mark functions and property getters to indicate that they should not be called directly inside composition, as this may cause frequent recompositions (for example, scroll position values and animating values). Warnings are provided by a corresponding lint check.
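
As a rough illustration, here is a minimal sketch of @RememberInComposition in use; the import path is an assumption based on the new runtime-annotation artifact, and ScrollConnection is a hypothetical class:

// Sketch only: import path assumed from the new runtime-annotation artifact.
import androidx.compose.runtime.annotation.RememberInComposition

// Hypothetical class whose instances must be remembered in composition.
class ScrollConnection @RememberInComposition constructor()

@Composable
fun Screen() {
    // val connection = ScrollConnection()          // lint error: not remembered
    val connection = remember { ScrollConnection() } // OK: survives recomposition
}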

Additional updates

  • To simplify compatibility and improve stability for lint check support, Compose now requires Android Gradle Plugin (AGP) / Lint version 8.8.2 or higher. Check out this new documentation page to learn more.
  • Two new APIs have been added for context menus:

Get started

We appreciate all bug reports and feature requests submitted to our issue tracker. Your feedback allows us to build the APIs you need in your apps. Happy composing!

13 Aug 2025 6:00pm GMT

11 Aug 2025

feedAndroid Developers Blog

Media3 1.8.0 - What’s new?

Posted by Toni Heidenreich - Engineering Manager

This release includes several bug fixes, performance improvements, and new features. Read on to find out more, and as always please check out the full release notes for a comprehensive overview of changes in this release.

Scrubbing in ExoPlayer

This release introduces a scrubbing mode in ExoPlayer, designed to optimize performance for frequent, user-driven seeks, like dragging a seek bar handle. You can enable it with ExoPlayer.setScrubbingModeEnabled(true). We've also integrated this into PlayerControlView in the UI module where it can be enabled with either time_bar_scrubbing_enabled="true" in XML or the setTimeBarScrubbingEnabled(boolean) method. Media3 1.8.0 contains the first batch of scrubbing improvements, with more to come in 1.9.0!
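
Wiring this up in code is a one-liner on each surface; a minimal sketch, where context and playerControlView are placeholders for your own references:

// Enable scrubbing mode on the player for frequent, user-driven seeks.
val player = ExoPlayer.Builder(context).build()
player.setScrubbingModeEnabled(true)

// Alternatively, enable it on PlayerControlView from code instead of XML.
playerControlView.setTimeBarScrubbingEnabled(true)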

moving image showing repeated seeking while scrubbing with scrubbing mode off in ExoPlayer
Repeated seeking while scrubbing with scrubbing mode OFF


moving image showing repeated seeking while scrubbing with scrubbing mode on in ExoPlayer
Repeated seeking while scrubbing with scrubbing mode ON

Live streaming ads with HLS interstitials

Extending the initial support for VOD in Media3 1.6.0, HlsInterstitialsAdsLoader now supports live streams and asset lists for all your server-guided ad insertion (SGAI) needs. The Google Ads Manager team explains how SGAI works. Follow our documentation for how to integrate HLS interstitials into your app.

chart of HLS interstitials processing flow from content server to ads server to ExoPlayer
HLS interstitials processing flow

Duration retrieval without playback

MetadataRetriever has been significantly updated - it's now using an AutoCloseable pattern and lets you retrieve the duration of media items without playback. This means Media3 now offers the full functionality of the Android platform MediaMetadataRetriever but without having to worry about device specific quirks and cross-process communication (some parts like frame extraction are still experimental, but we'll integrate them properly in the future).

try {
  MetadataRetriever.Builder(context, mediaItem).build().use {
    val trackInfo = it.retrieveTrackGroups().await()
    val duration = it.retrieveDurationUs().await()
  }
} catch (e: IOException) {
  handleFailure(e)
}

Partial downloads, XR audio routing and more efficient playback

There were several other improvements and bug fixes across ExoPlayer and playback related components. To name just a few:

  • Downloader implementations now support partial downloads, with a new PreCacheHelper to organize manual caching of single items. This will be integrated into ExoPlayer's DefaultPreloadManager in Media3 1.9.0 for an even more seamless caching and preloading experience.
  • When created with a Context with a virtual device ID, ExoPlayer now automatically routes the audio to the virtual XR device for that ID.
  • We enabled more efficient interactions with Android's MediaCodec, for example skipping buffers that are not needed earlier in the pipeline.

Playback resumption in demo app and better notification defaults

The MediaSession module has a few changes and improvements for notification handling. It now keeps notifications around for longer by default, for example when playback is paused, stopped, or failed, so that users have more time to resume playback in your app. Notifications for live streams (in particular those with DVR windows) are also more useful now that the confusing DVR window duration and progress have been removed.

The media session demo app now also supports playback resumption to showcase how the feature can be integrated into your app. It allows the user to resume playback long after your app has been terminated, and even after a reboot.

Media resumption notification after device reboot
Media resumption notification after device reboot
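
Under the hood, playback resumption is driven by the session callback's onPlaybackResumption hook. A minimal sketch, where restorePlaylist() is a hypothetical helper that reloads the last played items and position from your app's own storage:

// Sketch: restorePlaylist() is a hypothetical helper that reloads the last
// played items and position from your app's own storage.
class SessionCallback : MediaSession.Callback {
    override fun onPlaybackResumption(
        mediaSession: MediaSession,
        controller: MediaSession.ControllerInfo
    ): ListenableFuture<MediaSession.MediaItemsWithStartPosition> =
        Futures.immediateFuture(restorePlaylist())
}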

Faster trim operations with edit list support

We are continuing to add optimizations for faster trim operations to Transformer APIs. In the new 1.8.0 release, we introduced support for trimming using MP4 edit lists. Call experimentalSetMp4EditListTrimEnabled(true) to make trim-only edits significantly faster.

val transformer = Transformer.Builder(requireContext())
        .addListener(transformerListener)
        .experimentalSetMp4EditListTrimEnabled(true)
        .build()

A standard trimming operation often requires fully re-transcoding the video, even for a simple trim. This means decoding and re-encoding the entire file, a time-consuming and resource-intensive process. With MP4 edit list support, Transformer can now perform trim-only edits much more efficiently. Instead of re-encoding, it leverages the existing encoded samples and defines a "pre-roll" within the edit list. This pre-roll tells the player where to start playback within an existing encoded sample, effectively skipping the unwanted beginning portion.

The following diagram illustrates how this works:

processing overview for faster trim optimizations
Processing overview for faster trim optimizations

As illustrated above, each file contains encoded samples, and each sample begins with a keyframe. The red line indicates the intended clip point in the original file, allowing us to safely discard the first two samples. The major difference in this approach lies in how we handle the third encoded sample: instead of running a transcoding operation, we transmux this sample and define a pre-roll for the video start position. This significantly accelerates the export operation; however, the optimization is only applicable if no other effects are applied. Player implementations may also ignore the pre-roll component of the final video and play from the start of the encoded sample.
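
Putting this together, here is a minimal sketch of a trim-only export; inputUri, outputPath, and context are placeholders:

// Sketch of a trim-only export: drop the first 5 seconds without re-encoding.
val clippedItem = MediaItem.Builder()
    .setUri(inputUri)
    .setClippingConfiguration(
        MediaItem.ClippingConfiguration.Builder()
            .setStartPositionMs(5_000)
            .build()
    )
    .build()

val transformer = Transformer.Builder(context)
    .experimentalSetMp4EditListTrimEnabled(true)
    .build()
transformer.start(clippedItem, outputPath)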

Chipset-specific optimizations with CodecDB Lite

CodecDB Lite optimizes two elements of encoder configuration on a chipset-by-chipset basis: codec selection and B-frames. Depending on the chipset, these parameters can have either a positive or an adverse impact on video quality. CodecDB Lite leverages benchmark data collected on production devices to recommend a configuration that achieves the maximum user-perceived quality for the developer's target bitrate. By enabling CodecDB Lite, developers can leverage advanced video codecs and features without worrying about whether they work on a given device.

To use CodecDbLite, simply call setEnableCodecDbLite(true) when building the encoder factory:

val transformer =
    Transformer.Builder()
        .setEncoderFactory(
            DefaultEncoderFactory.Builder()
                .setEnableCodecDbLite(true)
                .build()
        )
        .build()

New Composition demo

The Composition Demo app has been refreshed, and is now built entirely with Kotlin and Compose to showcase advanced multi-asset editing capabilities in Media3. Our team is actively extending the APIs, and future releases will introduce more advanced editing features, such as transitions between media items and other more advanced video compositing settings.

Adaptive-first: Editing flows can get complicated, so it helps to take advantage of as much screen real estate as possible. With the adaptive layouts provided by Jetpack Compose, such as the supporting pane layout, we can dynamically adapt the UI based on the device's screen size.

new Composition demo app
New Composition demo app

Multi-asset video compositor: We've added a custom video compositor that demonstrates how to arrange input media items into different layouts, such as a 2x2 grid or a picture-in-picture overlay. These compositor settings are applied to the Composition, and can be used both with CompositionPlayer for preview and Transformer for export.

picture-in-picture video overlay in the Composition demo app
Picture-in-picture video overlay in the Composition demo app


Get started with Media3 1.8.0

Please get in touch via the Media3 issue tracker if you run into any bugs, or if you have questions or feature requests. We look forward to hearing from you!

11 Aug 2025 7:00pm GMT

07 Aug 2025

feedAndroid Developers Blog

#WeArePlay: Meet the people coding a more sustainable world

Posted by Robbie McLachlan, Developer Marketing

How do you tackle the planet's biggest sustainability and environmental challenges? For 10 new founders we're spotlighting in #WeArePlay, it starts with coding. Their apps and games are helping to build a healthier planet by developing career paths for aspiring environmentalists, preserving indigenous knowledge, and turning nature education into an adventure for all.

Here are a few of our favourites:

Ariane, Flávia, Andréia, and Mayla's game BoRa turns a simple park visit into an immersive, gamified adventure.

Ariane, Flávia, Andréia, and Mayla, co-founders of Fubá Educação Ambiental, São Carlos, Brazil
Ariane, Flávia, Andréia, and Mayla, co-founders of Fubá Educação Ambiental
São Carlos, Brazil

Passionate about nature, co-founders Mayla, Flávia, Andréia, and Ariane met while researching environmental education. They wanted to foster more meaningful connections between people and Brazil's national parks. Their app, BoRa - Iguaçu National Park, transforms a visit into an immersive experience using interactive storytelling, gamified trails, and accessibility features like sign language, helping everyone connect more deeply with the natural world.

Louis and Justin's app, CyberTracker, turns the ancient knowledge of indigenous trackers into vital scientific data for modern conservation.

Louis, co-founder of CyberTracker Conservation, Cape Town, South Africa
Louis, co-founder of CyberTracker Conservation
Cape Town, South Africa

Louis knew that animal tracking was a science, but the expert knowledge of many indigenous trackers couldn't be recorded because they were unable to read or write. He partnered with Justin to create CyberTracker to solve this. Their app uses a simple icon-based interface, enabling non-literate trackers to record vital biodiversity data. This innovation preserves invaluable knowledge and supports conservation efforts worldwide.

Bharati and Saurabh's app, Earth5R, turns a passion for the planet into real-world experience and careers in the green economy.

Bharati and Saurabh, co-founders of Earth5R Environmental Services, Mumbai, India
Bharati and Saurabh, co-founders of Earth5R Environmental Services
Mumbai, India

After a life-changing cycling trip around the world, Saurabh was inspired by sustainable practices he saw in different communities. He and his wife, Bharati, brought those lessons home to Mumbai and launched Earth5R. Their app provides environmental education and career development, connecting people to internships and hands-on projects. By providing the skills and experience needed for the green economy, they're building the next generation of environmental leaders.


Discover more #WeArePlay stories from founders across the globe.



Google Play logo

07 Aug 2025 4:00pm GMT

06 Aug 2025

feedAndroid Developers Blog

What is HDR?

Posted by John Reck - Software Engineer

For Android developers, delivering exceptional visual experiences is a continuous goal. High Dynamic Range (HDR) unlocks new possibilities, offering the potential for more vibrant and immersive content. Technologies like UltraHDR on Android are particularly compelling, providing the benefits of HDR displays while maintaining crucial backwards compatibility with SDR displays. On Android you can use HDR for both video and images.

Over the years, the term HDR has been used to signify a number of related, but ultimately distinct, visual fidelity features. Users encounter it in the context of camera features (exposure fusion) or as a marketing term for TVs and monitors ("HDR capable"). This conflates distinct features like wider color gamuts, increased bit depth, or enhanced contrast with HDR itself.

From an Android Graphics perspective, HDR primarily signifies higher peak brightness capability that extends beyond the conventional Standard Dynamic Range. Other perceived benefits often derive from standards such as HDR10 or Dolby Vision which also include the usage of wider color spaces, higher bit depths, and specific transfer functions.

In this article, we'll establish the foundational color principles, then address common myths, clarify HDR's role in the rendering pipeline, and examine how Android's display technologies and APIs enable HDR experience.

The components of color

Understanding HDR begins with defining the three primary components that form the displayed volume of color: bit depth, transfer function, and color gamut. These describe the precision, scaling, and range of the color volume, respectively.

While a color model defines the format for encoding pixel values (e.g., RGB, YUV, HSL, CMYK, XYZ), RGB is typically assumed in a graphics context. The combination of a color model, a color gamut, and a transfer function constitutes a color space. Examples include sRGB, Display P3, Adobe RGB, BT.2020, or BT.2020 HLG. Numerous combinations of color gamut and transfer function are possible, leading to a variety of color spaces.

components of color include bit depth + transfer fn + color gamut + color model with the last three being within the color space
Components of color

Bit Depth

Bit depth defines the precision of color representation. A higher bit depth allows for finer gradation between color values. In modern graphics, bit depth typically refers to bits per channel (e.g., an 8-bit image uses 8 bits for each red, green, blue, and optionally alpha channel).

Crucially, bit depth does not determine the overall range of colors (minimum and maximum values) an image can represent; this is set by the color gamut and, in HDR, the transfer function. Instead, increasing bit depth provides more discrete steps within that defined range, resulting in smoother transitions and reduced visual artifacts such as banding in gradients.

5-bit

5-bit color gradient showing distinct transition between color values

8-bit

8-bit color gradient showing smoother transition between color values

Although 8-bit is one of the most common formats in widespread usage, it's not the only option. RAW images can be captured at 10, 12, 14, or 16 bits. PNG supports 16 bits. Games frequently use 16-bit floating point (FP16) instead of integer formats for intermediate render buffers. Modern GPU APIs like Vulkan even support 64-bit-per-channel RGBA formats, in both integer and floating point varieties, providing up to 256 bits per pixel.
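
To make the "steps, not range" distinction concrete, here is a trivial sketch of how many discrete levels each bit depth provides per channel:

// Bit depth sets the number of discrete steps per channel (2^bits),
// not the range those steps span.
fun levelsPerChannel(bits: Int): Int = 1 shl bits

fun main() {
    println(levelsPerChannel(5))  // 32   - coarse steps, visible banding
    println(levelsPerChannel(8))  // 256  - smooth for most content
    println(levelsPerChannel(10)) // 1024 - finer steps across the same range
}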

Transfer Function

A transfer function defines the mathematical relationship between a pixel's stored numerical value and its final displayed luminance or color. In other words, the transfer function describes how to interpret the increments in values between the minimum and maximum. This function is essential because the human visual system's response to light intensity is non-linear: we are more sensitive to changes in luminance at low light levels than at high light levels. A linear mapping from stored values to display luminance would therefore not make efficient use of the available bits; there would be more precision than necessary in the brighter region and too little in the darker region, relative to what is perceptible. The transfer function compensates for this non-linearity by adjusting the luminance values to match the human visual response.

While some transfer functions are linear, most employ complex curves or piecewise functions to optimize image quality for specific displays or viewing conditions. sRGB, Gamma 2.2, HLG, and PQ are common examples, each prioritizing bit allocation differently across the luminance range.
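
To make the non-linearity concrete, here is a toy encode/decode pair; note that sRGB itself uses a piecewise curve, so pure gamma 2.2 is only an approximation:

import kotlin.math.pow

// Approximate gamma 2.2 encode/decode pair. Encoding compresses linear light
// so that more code values are spent on the dark region, where human vision
// is most sensitive.
fun encodeGamma22(linear: Double): Double = linear.pow(1.0 / 2.2)
fun decodeGamma22(encoded: Double): Double = encoded.pow(2.2)

fun main() {
    // Half of the encoded code values cover only ~22% of the linear range:
    println(decodeGamma22(0.5)) // ~0.218
}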

Color Gamut

Color gamut refers to the entire range of colors that a particular color space or device can accurately reproduce. It is typically a subset of the visible color spectrum, which encompasses all the colors that the human eye can perceive. Each color space (e.g., sRGB, Display P3, BT2020) defines its own unique gamut, establishing the boundaries for color representation.

A wider gamut signifies that the color space can display a greater variety of colors, leading to richer and more vibrant images. However, simply having a larger gamut doesn't always guarantee better color accuracy or a more vibrant result. The device or medium used to display the colors must also be capable of reproducing the full range of the gamut. When a display encounters colors outside its reproducible gamut, the typical handling method is clipping. This ensures that in-gamut colors are properly preserved for accuracy; attempts to scale the color gamut instead may produce unpleasant results, particularly in regions where human vision is especially sensitive, such as skin tones.

HDR myths and realities

With an understanding of what forms the basic working color principles, it's now time to evaluate some of the common claims of HDR and how they apply in a general graphics context.

Claim: HDR offers more vibrant colors

This claim comes from HDR video typically using the BT2020 color space, which is indeed a wide color volume. However, there are several problems with this claim as a blanket statement.

The first is that images and graphics have been able to use wider color gamuts, such as Display P3 or Adobe RGB, for quite a long time now. This is not a unique advancement coupled to HDR. In JPEGs, for example, this is defined by the ICC profile, which dates back to the early 1990s, although widespread adoption of ICC profile handling is somewhat more recent. Similarly, on the graphics rendering side, the usage of wider color spaces is fully decoupled from whether or not HDR is being used.

The second is that not all HDR videos use such a wide gamut at all. Although HDR10 specifies the usage of BT2020, other HDR formats have since been created that do not use such a wide gamut.

The biggest issue, though, is one of capturing and displaying. Just because the format allows for the color gamut of BT2020 does not mean that the entire gamut is actually usable in practice. For example, current Dolby Vision mastering guidelines only require 99% coverage of the P3 gamut. This means that even for high-end professional content, authoring content beyond Display P3 is not expected. Similarly, the vast majority of consumer displays today are only capable of displaying either the sRGB or Display P3 color gamuts. Given that the typical recommendation for out-of-gamut colors is to clip them, this means that even though HDR10 allows for up to the BT2020 gamut, the widest gamut in practice is still going to be P3.

Thus this claim should really be considered something offered by HDR video profiles when compared to SDR video profiles specifically, although SDR videos could use wider gamuts if desired without using an HDR profile.

Claim: HDR offers more contrast / better black detail

One of the claimed benefits of HDR is darker blacks (e.g. Dolby Vision Demo #3 - Core Universe - 4K HDR, or "Dark scenes come alive with darker darks") or more detail in the dark regions. This is even reflected in BT.2390: "HDR also allows for lower black levels than traditional SDR, which was typically in the range between 0.1 and 1.0 cd/m2 for cathode ray tubes (CRTs) and is now in the range of 0.1 cd/m2 for most standard SDR liquid crystal displays (LCDs)." In reality, however, every display renders SDR black at the blackest black it is physically capable of. There is thus no difference between HDR and SDR in terms of how dark they can reach; both bottom out at the same level on the same display.

As for contrast ratio, since that is the ratio between the brightest white and the darkest black, it is overwhelmingly influenced by how dark a display can get. With the prevalence of OLED displays, particularly in the mobile space, SDR and HDR end up with the same contrast ratio, as both have essentially perfect black levels, giving them infinite contrast ratios.

The PQ transfer function does allocate more bits to the dark region, so in theory it can convey better black detail. However, this is a unique aspect of PQ rather than a feature of HDR generally. HLG is increasingly the more common HDR format, as it is preferred by mobile cameras as well as several high-end cameras. And even where PQ contains this detail, that doesn't mean the HDR display can necessarily display it, as discussed in Display Realities below.

Claim: HDR offers higher bit depth

This claim comes from HDR10 and some, but not all, Dolby Vision profiles using 10 or 12 bits for the video stream. As with more vibrant colors, this is really an aspect of particular video profiles rather than something HDR itself inherently provides. The usage of 10 bits or more is otherwise not uncommon in imaging, particularly in higher-end photography, with RAW and TIFF image formats capable of holding 10, 12, 14, or 16 bits. Similarly, PNG supports 16 bits, although that is rarely used.

Claim: HDR offers higher peak brightness

This, then, is all that HDR really is. But what does "higher peak brightness" really mean? After all, SDR displays had been pushing ever-increasing brightness levels before HDR was significant, particularly for sunlight viewing. And even without that, what is the difference between "HDR" and just "SDR with the brightness slider cranked up"? The answer is that we define "HDR" as having a brightness range bigger than SDR, and we think of SDR as the range driven by autobrightness to be comfortably readable in the current ambient conditions. Thus we define HDR in terms of things like "HDR headroom" or the "HDR/SDR ratio" to indicate that it's a floating region relative to SDR. This makes brightness policies easier to reason about. However, it does complicate the interaction with traditional HDR video, specifically HLG and PQ content.

PQ/HLG transfer functions

PQ and HLG are the two most common approaches to HDR video content. They are two transfer functions that embody different concepts of what "HDR" means. PQ, published as SMPTE ST 2084:2014, is defined in terms of absolute nits at the display: it encodes values from 0 to 10,000 nits and expects content to be mastered for a particular reference viewing environment. HLG takes a different approach, using a typical gamma curve for part of the range before switching to a logarithmic curve for the brighter portion. It has a claimed nominal peak brightness of 1,000 nits in the reference environment, although it is not defined in absolute luminance terms like PQ is.

Industry-wide specifications have recently formalized the brightness range of both PQ- and HLG-encoded content in relation to SDR. ITU-R BT.2408-8 defines the reference white level for graphics to be 203 nits. ISO/TS 22028-5 and ISO/PRF 21496-1 have followed suit; 21496-1 in particular defines HDR headroom in terms of nominal peak luminance relative to a diffuse white luminance of 203 nits.
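
In that framing, headroom is simply the ratio of peak luminance to 203-nit diffuse white, often quoted in stops; a back-of-the-envelope sketch, not a normative formula:

import kotlin.math.log2

// HDR headroom as the ratio of nominal peak luminance to diffuse white
// (203 nits), following the framing above; log2 of the ratio gives stops.
fun hdrHeadroom(peakNits: Double, diffuseWhiteNits: Double = 203.0): Double =
    peakNits / diffuseWhiteNits

fun main() {
    println(hdrHeadroom(1000.0))       // ~4.9x SDR white
    println(log2(hdrHeadroom(1000.0))) // ~2.3 stops of headroom
}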

The realities of modern displays, discussed below, as well as typical viewing environments mean that traditional HDR video is nearly never displayed as intended. A display's HDR headroom may evaporate under bright viewing conditions, demanding on-demand tonemapping into SDR. Traditional HDR video encodes a fixed headroom, while modern displays employ a dynamic headroom, resulting in vast differences in video quality even on the same display.

Display Realities

So far, most of the discussion around HDR has been from the perspective of the content. However, users consume content on a display, which has its own capabilities and, more importantly, limits. A high-end mobile display is likely to have characteristics such as gamma 2.2, P3 gamut, and a peak brightness of around 2000 nits. If we then consider something like HDR10, there are mismatches in bit usage prioritization:

  • PQ's increased bit allocation at the lower ranges ends up being wasted
  • The usage of BT2020 ends up spending bits on parts of a gamut that will never be displayed
  • Encoding up to 10,000 nits of brightness is similarly headroom that's not utilized

These mismatches are not inherently a problem, but they mean that as 10-bit displays become more common, the existing 10-bit HDR video profiles are unable to take full advantage of the display's capabilities. Thus HDR video profiles are simultaneously forward-looking and already unable to maximize a current 10-bit display's capabilities.

This is where technology such as Ultra HDR, or gainmaps in general, provides a compelling alternative. Despite sometimes using an 8-bit base image, the gain layer that transforms it to HDR is specialized to the content and its particular range needs, so it is more efficient with its bit usage, leading to results that still look stunning. And as that base image is upgraded to 10-bit with newer image formats such as AVIF, the effective bit usage is even better than that of typical HDR video codecs. These approaches therefore do not represent evolutionary stepping stones to "true HDR", but rather an improvement on HDR with better backwards compatibility. Similarly, the Android UI toolkit's usage of the extendedRangeBrightness API still primarily happens in 8-bit space: because the rendering is tailored to the specific display and current conditions, it is still possible to have a good HDR experience despite the usage of RGBA_8888.
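
On the platform side, this floating headroom is directly observable. A minimal sketch, assuming API level 34+, an activity context, and a mainExecutor reference:

// Sketch (API 34+): observe the display's current HDR/SDR ratio, which
// floats with ambient conditions rather than being a fixed property.
val display = activity.display
if (display.isHdrSdrRatioAvailable) {
    display.registerHdrSdrRatioChangedListener(mainExecutor) { d ->
        val headroom = d.hdrSdrRatio // 1.0 = no headroom; 4.0 = 4x SDR white
        // Adapt rendering to the headroom currently available.
    }
}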

Unlocking HDR on Android: Next steps

High Dynamic Range (HDR) offers a genuine advancement in visual fidelity for Android developers, moving beyond the traditional constraints of Standard Dynamic Range (SDR) by enabling higher peak brightness.

By understanding the core components of color - bit depth, transfer function, and color gamut - and debunking common myths, developers can leverage technologies like Ultra HDR to deliver truly immersive experiences that are both visually stunning and backward compatible.

In our next article, we'll delve into the nuances of HDR and user intent, exploring how to optimize your content for diverse display capabilities and viewing environments.

06 Aug 2025 4:00pm GMT

31 Jul 2025

feedAndroid Developers Blog

Android Studio Narwhal Feature Drop is stable - start using Agent Mode

Posted by Paris Hsu - Product Manager, Android Studio

The next wave of innovation is here with Android Studio Narwhal Feature Drop. We're thrilled to announce that Gemini in Android Studio's Agent Mode is now available in the stable release, ready to tackle your most complex coding challenges. This release also brings powerful new tools for XR development, continued quality improvements, and key updates to enhance your productivity and help you build high-quality apps.

Dive in to learn more about all the updates and new features designed to supercharge your workflow.

moving image of Gemini in Android Studio: Agent Mode
Gemini in Android Studio: Agent Mode

Develop with Gemini

Try out Agent Mode

Go beyond chat and assign tasks to Gemini. Gemini in Android Studio's Agent Mode is a powerful AI feature designed to handle complex, multi-stage development tasks. To use Agent Mode, click Gemini in the sidebar and then select the Agent tab. You can describe a high-level goal, like adding a new feature, generating comprehensive unit tests, or fixing a nuanced bug.

The agent analyzes your request, breaks it down into smaller steps, and formulates an execution plan that uses IDE tools, such as reading and writing files and performing Gradle tasks, and can span multiple files in your project. It then iteratively suggests code changes, and you're always in control: you can review, accept, or reject the proposed changes and ask the agent to iterate based on your feedback. Let the agent handle the heavy lifting while you focus on the bigger picture.

After releasing Agent Mode to Canary, we had positive feedback from the developers who tried it. We were so excited about the feature's potential that we moved it to the stable channel faster than ever before, so you can get your hands on it. Try it out and let us know what you build.

screen grab of Gemini's Agent Mode in Android Studio
Gemini in Android Studio: Agent Mode


Currently, the default model offered in the free tier in Android Studio has a shorter context length, which can limit the depth of responses for some agent questions and tasks. To get the best performance from Agent Mode, you can bring your own key for the public Gemini API. Once you add your Gemini API key with a paid GCP project, you'll be able to use the latest Gemini 2.5 Pro with a full 1M-token context window from Android Studio. Remember to pick "Gemini 2.5 Pro" from the model picker in the chat and agent input boxes.

screen grab of Gemini's model selector in Android Studio
Gemini in Android Studio: model selector

Rules in prompt library

Tailor the response from Gemini to fit your project's specific needs with Rules in the prompt library. You can define preferred coding styles, tech stacks, languages, or output formats to help Gemini understand your project standards for more accurate and personalized code assistance. You can set these preferences once, and they'll be automatically applied to all subsequent prompts sent to Gemini. For example, you can create a rule such as, "Always provide concise responses in Kotlin using Jetpack Compose." You can also set rules at the IDE level for personal use across projects, or at the project level, which can be shared with teammates by adding the .idea folder to your version control system.

screen grab of Rules in Prompt Library in Android Studio
Rules in prompt library

Transform UI with Gemini [Studio Labs]

You can now transform UI code within the Compose Preview environment using natural language, directly in the preview. This experimental feature, available through Studio Labs, speeds up UI development by letting you iterate with simple text commands. To use it, right-click in the Compose Preview and select Transform UI With Gemini. Then enter your natural language requests, such as "Center align these buttons," to guide Gemini in adjusting your layout or styling, or select specific UI elements in the preview for better context. Gemini will then edit your Compose UI code in place, which you can review and approve.

side by side screen captures of accessing the 'Transform UI with Gemini' menu on the left, and applying a natural language transformation to a Compose preview on the right in Android Studio

Immersive development

XR Android Emulator and template

Kickstart your extended reality development! Android Studio now includes:

  • XR Android Emulator: The XR Android Emulator now launches embedded within the IDE by default. You can deploy your Jetpack app, navigate the 3D space, and use the Embedded Layout Inspector directly inside Android Studio.
  • XR template: Get a head start on your next project with a new template specifically designed for Jetpack XR. This provides a solid foundation with boilerplate code to begin your immersive experience development journey right away.
XR Android Emulator in Android Studio
XR Android Emulator


XR Android Emulator in Android Studio
XR Android template in new project template

Embedded Layout Inspector for XR

The embedded Layout Inspector now supports XR applications, which lets you inspect and optimize your UI layouts within the XR environment. Get detailed insights into your app's component structure and identify potential layout issues to create more polished and performant experiences.

Embedded Layout Inspector for XR in Android Studio
Embedded Layout Inspector for XR

Android Partner Device Labs available with Android Device Streaming

Android Partner Device Labs are device labs operated by Google OEM partners, such as Samsung, Xiaomi, OPPO, OnePlus, vivo, and others, and expand the selection of devices available in Android Device Streaming. To learn more, see Connect to Android Partner Device Labs.

Android Partner Device Labs with Android Device Streaming in Android Studio
Android Device Streaming supports Android Partner Device Labs

Optimize and refine

Jetpack Compose preview quality improvements

We've made several enhancements to Compose previews to make UI iteration faster and more intuitive:

  • Improved code navigation: You can now click on a preview's name to instantly jump to its @Preview definition, or click an individual component within the preview to navigate directly to the function where it's defined. Hover states and improved keyboard arrow navigation make moving through multiple previews a breeze.
  • Preview picker: The new Compose preview picker is now available. You can click any @Preview annotation in your Compose code to access the picker and easily manage your previews.
improved code navigation in Compose preview in Android Studio
Compose preview: Improved code navigation


Compose preview picker in Android Studio
Compose preview picker

K2 mode by default

Android Studio now uses the K2 Kotlin compiler by default. This next-generation compiler brings significant performance improvements to the IDE and your builds. By enabling K2, we are paving the way for future Kotlin programming language features and an even faster, more robust development experience in Kotlin.

K2 mode setting in Android Studio
K2 mode setting

16 KB page size support

To help you prepare for the future of Android hardware, this release adds improved support for transitioning to 16 KB page sizes. Android Studio now offers proactive warnings when building apps that are incompatible with 16 KB devices. You can use the APK Analyzer to identify which specific libraries in your project are incompatible. Lint checks also highlight the native libraries which are not 16 KB aligned. To test your app in this new environment, a dedicated 16 KB emulator target is also available in the AVD Manager.

16 KB page size support: APK Analyzer indication
16 KB page size support: APK Analyzer indication


16 KB page size support: Lint checks in Android Studio
16 KB page size support: Lint checks

Services compatibility policy

Android Studio offers service integrations that help you and your team make faster progress as you develop, release, and maintain Android apps. Services are constantly evolving and may become incompatible with older versions of Android Studio. Therefore, we are introducing a policy where features that depend on a Google Cloud service are supported for approximately a year in each version of Android Studio. The IDE will notify you when the current version is within 30 days of becoming incompatible so you can update it.

Example notification for services compatibility policy in Android Studio
Example notification for services compatibility policy

Summary

To recap, Android Studio Narwhal Feature Drop includes the following enhancements and features:

Develop with Gemini

  • Gemini in Android Studio Agent Mode: Use Gemini to tackle complex, multi-step coding tasks.
  • Rules in Prompt Library: Customize Gemini's output for your project's standards.
  • Transform preview with Gemini [Studio Labs]: Use natural language to iterate on Compose UI.


Immersive development

  • Embedded XR Android Emulator: Test and debug XR apps directly within the IDE.
  • XR template: A new project template to kickstart XR development.
  • Embedded Layout Inspector for XR: Debug and optimize your UI in an XR environment.
  • Android Partner Device Labs available with Android Device Streaming: Access more Google OEM partner devices.


Optimize and refine

  • Compose preview improvements: Better navigation and a new picker for a smoother workflow.
  • K2 mode by default: Faster performance with the next-gen Kotlin compiler.
  • 16 KB page size support: Lint warnings, analysis, and an emulator to prepare for new devices.
  • Services compatibility policy: Stay up-to-date for access to integrated Google services.

Get started

Ready to accelerate your development? Download Android Studio Narwhal Feature Drop and start exploring these powerful new features today! As always, your feedback is crucial to us.

Check known issues, report bugs, suggest improvements, and be part of our vibrant community on LinkedIn, Medium, YouTube, or X. Let's build the future of Android apps together!

31 Jul 2025 5:30pm GMT

24 Jul 2025

feedAndroid Developers Blog

#WeArePlay: 10 million downloads and counting, meet app and game founders from across the U.S.

Posted by Robbie McLachlan, Developer Marketing

They saw a problem and built the answer. Meet 20 #WeArePlay founders from across the U.S. who started their entrepreneurial journey with a question like: what if reading was no longer a barrier for anyone? What if an app could connect neighbors to fight local hunger? What if fitness or self-care could feel as engaging as playing a game?

These new stories showcase how innovation often starts with finding the answer to a personal problem. Here are just a few of our favorites:

Cliff's app Speechify makes the written word accessible to all

Headshot of Cliff, founder of Speechify, Miami, Florida
Cliff, founder of Speechify
Miami, Florida


Growing up with dyslexia, Cliff always wished he could enjoy books but found reading them challenging. After moving to the U.S., the then college student turned that personal challenge into a solution for millions. His app, Speechify, empowers people by turning any text, from PDFs to web pages, into audio. By making the written word accessible to all, Cliff's innovation gives students, professionals, and auditory learners a new kind of independence.

Jenny's game Run Legends turns everyday fitness into a social adventure

Headshot of Jenny, founder of Talofa Games, San Francisco, California
Jenny, founder of Talofa Games
San Francisco, California


As a teen, Jenny funded her computer science studies by teaching herself to code and publishing over 100 games. A passionate cross-country runner, she wanted to combine her love for gaming and fitness to make exercise feel more like an adventure. The result is Run Legends, a multiplayer RPG where players battle monsters by moving in real life. Jenny's on a mission to blend all types of exercise with playful storytelling, turning everyday fitness into a fun, social, and heroic quest.

Nino and Stephanie's app Finch makes self-care a rewarding daily habit

Headshot of Nino and Stephanie, co-founders of Finch, Santa Clara, California
Nino and Stephanie, co-founders of Finch
Santa Clara, California


As engineers, Nino and Stephanie knew the power of technology but found the world of self-care apps overwhelming. Inspired by their own mental health journeys and a gamified app Stephanie built in college, they created Finch. The app introduces a fresh take on the virtual pet: by completing small, positive actions for yourself, like journaling or practicing breathing exercises, you care for your digital companion. With over 10 million downloads, Finch has helped people around the world build healthier habits. With seasonal events every month and growing personalization, the app continues to evolve to make self-care more fun and rewarding.

John's app The HungreeApp connects communities to fight hunger

Headshot of John, founder of The HungreeApp, Denver, Colorado
John, founder of The HungreeApp
Denver, Colorado


John began coding as a nine-year-old in Nigeria, sometimes with just a pen and paper. After moving to the U.S., he was struck by how much food from events was wasted while people nearby went hungry. That spark led him to create The HungreeApp, a platform that connects communities with free, surplus food from businesses and restaurants. John's ingenuity turns waste into opportunity, creating a more connected and resourceful nation, one meal at a time.

Anthony's game studio Tech Tree Games turns a passion for idle games into cosmic adventures for aspiring tycoons

Headshot of Anthony, founder of Tech Tree Games, Austin, Texas
Anthony, founder of Tech Tree Games
Austin, Texas


While working as a chemical engineer, Anthony dreamed of creating an idle game like the ones he loved to play, leading him to teach himself how to code from scratch. This passion project turned into his studio Tech Tree Games and the hit title Idle Planet Miner, where players grow a space mining empire filled with mystical planets and alluring gems. After releasing a 2.0 update with enhanced visuals for the game, Anthony is back in prototyping mode with new titles in the pipeline.


Discover more #WeArePlay stories from the US and stories from across the globe.



Google Play logo

24 Jul 2025 4:00pm GMT

17 Jul 2025

feedAndroid Developers Blog

#WeArePlay: With over 3 billion downloads, meet the people behind Amanotes

Posted by Robbie McLachlan - Developer Marketing


In our latest #WeArePlay film, which celebrates the people behind apps and games on Google Play, we meet Bill and Silver - the duo behind Amanotes. Their game company has reached over 3 billion downloads with their mission 'everyone can music'. Their titles, including the global hit Magic Tiles 3, turn playing musical instruments into a fun, easy, and interactive experience, with no musical background needed. Discover how Amanotes blends creativity and technology to bring joy and connection to billions of players around the world.

What inspired you to create Amanotes?

Bill: It all began with a question I'd pursued for over 20 years - how can technology make music even more beautiful? I grew up in a musical family, surrounded by instruments, but I also loved building things with tech. Amanotes became the space where I could bring those two passions together.

Silver: Honestly, I wasn't planning to start a company. I had just finished studying entrepreneurship and was looking to join a startup, not launch one. I dropped a message in an online group saying I wanted to find a team to work with, and Bill reached out. We met for coffee, talked for about an hour, and by the end, we just said, why not give it a shot? That one meeting turned into ten years of building Amanotes.

Do you remember the first time you realized your game was more than just a game and that it could change someone's life?

Silver: There's one moment I'll never forget. A woman in the U.S. left a review saying she used to be a pianist, but after an accident, she lost use of some of her fingers and couldn't play anymore. Then she found Magic Tiles. She said the game gave her that feeling of playing again, even without full movement. That's when it hit me. We weren't just building a game. We were helping people reconnect with something they thought they'd lost.

Amanotes founders, Bill Vo and Silver Nguyen

How has Google Play helped your journey?

Silver: Google Play has been a huge part of our story. It was actually the first platform we ever published on. The audience was global from day one, which gave us the reach we needed to grow fast. We made great use of tools such as Firebase for A/B testing. We also relied on the Play Console for analytics and set custom pricing by country. Without Google Play, Amanotes wouldn't be where it is today.

A user plays Amanotes on their mobile device

What's next for Amanotes?

Silver: Music will always be the soul of what we do, but now we're building games with more depth. We want to go beyond just tapping to songs. We're adding stories, challenges, and richer gameplay on top of the music. We've got a whole lineup of new games in the works. Each one is a chance to push the boundaries of what music games can be.


Discover other inspiring app and game founders featured in #WeArePlay.



Google Play logo

17 Jul 2025 4:00pm GMT

15 Jul 2025

feedAndroid Developers Blog

New tools to help drive success for one-time products

Posted by Laura Nechita - Product Manager, Google Play and Rejane França - Group Product Manager, Google Play

Starting today, Google Play is revamping the way developers can manage one-time products, providing greater flexibility and new ways to sell. Play has continually enhanced the ways developers can reach buyers by helping you diversify how you sell products.

Starting in 2022, we created more flexibility for subscriptions and a new Console interface. Now we are bringing the same flexibility to one-time products and aligning their taxonomy with subscriptions. Previously known as in-app products, one-time product purchases are a vital way for developers to monetize on Google Play. As this business model continues to evolve, we've heard from many of you that you need more flexibility and less complexity in how you offer these digital products.

To address these needs, we're launching new capabilities and a new way of thinking about your products that can help you grow your business. At its core, we've separated what the product is from how you sell it. For each one-time product, you can now configure multiple purchase options and offers. This allows you to sell the same product in multiple ways, reducing operational costs by removing the need to create and manage an ever-increasing number of catalog items.

You might have already noticed some changes as we introduce this new model, which provides a more structured way to define and manage your one-time product offerings.

Introducing the new model

flow chart showing the new model hierarchy with one time product at the top, purchase options in the middle, and offers at the bottom


We're introducing a new three-level hierarchy for defining and managing one-time products. This new structure builds upon concepts already familiar from our subscription model and aligns the taxonomy for all of your in-app product offerings on Play.

  • One-time product: This object defines what the user is buying. Think of it as the core item in your catalog, such as a "Diamond sword", "Coins" or "No ads".
  • Purchase option: This defines how the entitlement is granted to the user, its price, and where the product will be available. A single one-time product can have multiple purchase options representing different ways to acquire it, such as buying it or renting it for a set period of time. Purchase options now have two distinct types: buy and rent.
  • Offer: Offers further modify a purchase option and can be used to model discounts or pre-orders. A single purchase option can have multiple offers associated with it.

This allows for a more organized and efficient way to manage your catalog. For instance, you can have one "Diamond sword" product and offer it with a "Buy" purchase option in the US for $10 and a "Rent" purchase option in the UK for £5. This new taxonomy will also allow Play to better understand your catalog, helping you further amplify your impact across Play surfaces.

More flexibility to reach more users

The new model unlocks significant flexibility to help you reach a wider audience and cater to different user preferences.

  • Sell in multiple ways: Once you've migrated to PBL 8, you can set up different ways of selling the same product. This reduces the complexity of managing numerous individual products for slightly different scenarios.
  • Introducing rentals: We're introducing the ability to configure items that are sold as rentals. Users have access to the item for a set duration of time. You can define the rental period, which is the amount of time a user has the entitlement after completing the purchase, and an optional expiration period, which is the time after starting consumption before the entitlement is revoked.
  • Pre-order capabilities: You can now set up one-time products to be bought before their release through pre-order offers. You can configure the start date, end date, and the release date for these offers, and even include a discount. Users who pre-order agree to pay on the release date unless they cancel beforehand.
  • No default price: We are removing the concept of a default price for a product. You can now set and manage prices in bulk, or individually for each region.
  • Regional pricing and availability: Price changes can now be applied to purchase options and offers, allowing you to set different prices in different regions. Furthermore, you can also configure the regional availability for both purchase options and offers. This functionality is available for paid apps in addition to one-time products.
  • Offers for promotions: Leverage offers to create various promotions, such as discounts on your base purchase price or special conditions for early access through pre-orders.

To use these new features, you first need to upgrade to PBL 8.0. Then you'll need to use the new monetization.onetimeproducts service of the Play Developer API, or the Play Developer Console, to manage your catalog. You'll also need to integrate with the queryProductDetailsAsync API to take advantage of these new capabilities; a sketch follows below. And while querySkuDetailsAsync and the inappproducts service are not supported with the new model, they will continue to be supported for as long as PBL 7 is supported.
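
For orientation, here is roughly what that query looks like; "diamond_sword" is a hypothetical product ID, and the callback shape follows the PBL 7-style listener, so check the PBL 8 release notes for the exact result type in your version:

// Sketch: "diamond_sword" is a hypothetical product ID; the callback shape
// shown follows the PBL 7-style listener and may differ in PBL 8.
val params = QueryProductDetailsParams.newBuilder()
    .setProductList(
        listOf(
            QueryProductDetailsParams.Product.newBuilder()
                .setProductId("diamond_sword")
                .setProductType(BillingClient.ProductType.INAPP)
                .build()
        )
    )
    .build()

billingClient.queryProductDetailsAsync(params) { billingResult, productDetails ->
    // Each ProductDetails now carries its purchase options and offers,
    // e.g. buy vs. rent, rather than one SKU per way of selling.
}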

Important considerations

  • With this change, we will offer a backwards compatible way to port your existing SKUs into this new model. The migration will happen differently depending on how you decide to interact with your catalogue the first time you change the metadata for one or more products.
  • New products created through the Play Console UI are normalized, and products created or managed with the existing inappproducts service won't support these new features. To access them, you'll need to convert existing products in the Play Developer Console UI. Once converted, a product can only be managed through the new Play Developer API or Play Developer Console. Products created through the new monetization.onetimeproducts service or through the Play Developer Console are already converted.
  • Buy purchase options marked as 'Backwards compatible' will be returned as response for calls through querySkuDetailsAsync API. At launch, all existing products have a backwards compatible purchase option.
  • At the time of this post, the pre-orders capability is available through the Early Access Program (EAP) only. If you are interested, please sign up.
  • One-time products will be reflected in the earnings reports at launch (Base plan ID and Offer ID columns will be populated for newly configured one-time products). To minimise the potential for breaking changes, we will be updating these column names in the earnings reports later this year.

We encourage you to explore the new Play Developer API and the updated Play Console interface to see how this enhanced flexibility can help you better manage your catalog and grow your business.

We're excited to see how you leverage these new tools to connect with your users in innovative ways.



Google Play logo

15 Jul 2025 4:00pm GMT

10 Jul 2025

feedAndroid Developers Blog

Transition to using 16 KB page sizes for Android apps and games using Android Studio

Posted by Mayank Jain - Product Manager and Jomo Fisher - Software Engineer

Get ready to upgrade your app's performance as Android embraces 16 KB memory page sizes

Android's transition to 16 KB Page size

Traditionally, Android has operated with a 4 KB memory page size. However, many ARM CPUs (the most common processors for Android phones) support the larger 16 KB page size, offering improved performance. With Android 15, the Android operating system is page-size-agnostic, allowing devices to run efficiently with either a 4 KB or 16 KB page size.

Starting November 1st, 2025, all new apps and app updates that use native C/C++ code targeting Android 15+ devices submitted to Google Play must support 16 KB page sizes. This is a crucial step towards ensuring your app delivers the best possible performance on the latest Android hardware. Apps without native C/C++ code or dependencies, i.e. those that just use the Kotlin and Java programming languages, are already compatible; but if you're using native code, now is the time to act.

This transition to larger 16 KB page sizes translates directly into a better user experience. Devices configured with 16 KB page size can see an overall performance boost of 5-10%. This means faster app launch times (up to 30% for some apps, 3.16% on average), improved battery usage (4.56% reduction in power draw), quicker camera starts (4.48-6.60% faster), and even speedier system boot-ups (around 0.8 seconds faster). While there is a marginal increase in memory use, a faster reclaim path is worth it.

The native code challenge - and how Android Studio equips you

If your app uses native C/C++ code from the Android NDK or relies on SDKs that do, you'll need to recompile and potentially adjust your code for 16 KB compatibility. The good news? Once your application is updated for the 16 KB page size, the same application binary can run seamlessly on both 4 KB and 16 KB devices.

This table describes who needs to transition and recompile their apps

A table describes who needs to transition or recompile their apps based on native codebase and device page size

We've created several Android Studio tools and guides that can help you prepare for migrating to using 16 KB page size.

Detect compatibility issues

APK Analyzer: Easily identify if your app contains native libraries by checking for .so files in the lib folder. The APK Analyzer can also visually indicate your app's 16 KB compatibility. You can then determine and update libraries as needed for 16 KB compliance.

Screenshot of the APK Analyzer in Android Studio


Alignment Checks: Android Studio also provides warnings if your prebuilt libraries or APKs are not 16 KB compliant. You should then use the APK Analyzer tool to review which libraries need to be updated or if any code changes are required. If you want to detect the 16 KB page size compatibility checks in your CI (continuous integration) pipeline, you can leverage scripts and command line tools.

Screenshot of Android 16 KB Alignment check in Android Studio


Lint in Android Studio now also highlights the native libraries which are not 16 KB aligned.

Screenshot of Lint performing a 16 KB alignment check in Android Studio


Build with 16 KB alignment

Tools Updates: Rebuild your native code with 16 KB alignment. Android Gradle Plugin (AGP) version 8.5.1 or higher automatically enables 16 KB alignment by default (during packaging) for uncompressed shared libraries. Similarly, Android NDK r28 and higher compile 16 KB-aligned by default. If you depend on other native SDKs, they also need to be 16 KB aligned; you might need to reach out to the SDK developer to request a 16 KB compliant SDK.

Fix code for page-size agnosticism

Eliminate Hardcoded Assumptions: Identify and remove any hardcoded dependencies on PAGE_SIZE or assumptions that the page size is 4 KB (e.g., 4096). Instead, use getpagesize() or sysconf(_SC_PAGESIZE) to query the actual page size at runtime.
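
The same runtime check is also available from Kotlin through android.system.Os, complementing the C-level guidance above; a minimal sketch:

import android.system.Os
import android.system.OsConstants
import android.util.Log

fun logPageSize() {
    // Query the actual page size at runtime instead of assuming 4096 bytes.
    val pageSize = Os.sysconf(OsConstants._SC_PAGESIZE)
    Log.d("PageSize", "Running with $pageSize-byte pages") // 4096 or 16384
}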

Test in a 16 KB environment

Android Emulator Support: Android Studio offers a 16 KB emulator target (for both arm64 and x86_64) directly in the Android Studio SDK Manager, allowing you to test your applications before uploading to Google Play.

Screenshot of the 16 KB emulator in Android Studio


On-Device Testing: For compatible devices like Pixel 8 and 8 Pro onwards (starting with Android 15 QPR1), a new developer option allows you to switch between 4 KB and 16 KB page sizes for real-device testing. You can verify the page size using adb shell getconf PAGE_SIZE.

Screenshot of the 16 KB emulator in Android Studio


Don't wait - prepare your apps today

Leverage Android Studio's powerful tools to detect issues, build compatible binaries, fix your code, and thoroughly test your app for the new 16 KB memory page sizes. By doing so, you'll ensure an improved end user experience and contribute to a more performant Android ecosystem.

As always, your feedback is important to us - check known issues, report bugs, suggest improvements, and be part of our vibrant community on LinkedIn, Medium, YouTube, or X.

10 Jul 2025 9:00pm GMT