21 Dec 2025
TalkAndroid
Boba Story Lid Recipes – 2025
Look no further for all the latest Boba Story Lid Recipes. They are all right here!
21 Dec 2025 3:16am GMT
Dice Dreams Free Rolls – Updated Daily
Get the latest Dice Dreams free rolls links, updated daily! Complete with a guide on how to redeem the links.
21 Dec 2025 3:15am GMT
20 Dec 2025
TalkAndroid
The Best OnePlus 15 Screen Protectors
Need screen protectors to match your OnePlus 15's case? Look no further than these products.
20 Dec 2025 5:46pm GMT
How AI and Ultra-Fast Connectivity Are Reshaping the Android Experience
Android is entering a faster, smarter phase where intelligence happens in the moment. On-device AI, powered by modern…
20 Dec 2025 5:44pm GMT
Why This 2025 Western Is Being Called the Year’s Absolute Must-See Series
The time has come: Netflix's 2025 titan of television isn't a glossy sci-fi or a twisty thriller-it's a…
20 Dec 2025 5:30pm GMT
He stunned in a Netflix smash—now discover him in this shocking 18+ western
From an unforgettable role in a Netflix smash hit to a gritty, no-holds-barred western, Nick Robinson is back…
20 Dec 2025 4:30pm GMT
Sci-fi fans shocked: this new series outranks all competitors in 80 countries
Think Netflix still wears the streaming crown without a challenge? Well, think again. There's a new sci-fi juggernaut…
20 Dec 2025 7:30am GMT
First images from Malcolm’s comeback: Fans stunned by these unexpected details
Can you hear that? It's the unmistakable rumble of nostalgia, as Malcolm and his brilliantly chaotic family gear…
20 Dec 2025 7:30am GMT
19 Dec 2025
Android Developers Blog
Media3 1.9.0 - What’s new
Posted by Kristina Simakova, Engineering Manager
Media3 1.9.0 - What's new?
- media3-inspector - Extract metadata and frames outside of playback
- media3-ui-compose-material3 - Build a basic Material3 Compose Media UI in just a few steps
- media3-cast - Automatically handle transitions between Cast and local playbacks
- media3-decoder-av1 - Consistent AV1 playback with the rewritten extension decoder based on the dav1d library
We also added caching and memory management improvements to PreloadManager, and provided several new ExoPlayer, Transformer and MediaSession simplifications.
This release also gives you the first experimental access to CompositionPlayer to preview media edits.
Read on to find out more, and as always please check out the full release notes for a comprehensive overview of changes in this release.
Extract metadata and frames outside of playback
There are many cases where you want to inspect media without starting playback. For example, you might want to detect which formats it contains or what its duration is, or to retrieve thumbnails. The new media3-inspector module combines all the utilities for inspecting media without playback in one place:
- MetadataRetriever to read duration, format and static metadata from a MediaItem.
- FrameExtractor to get frames or thumbnails from an item.
- MediaExtractorCompat as a direct replacement for the Android platform MediaExtractor class, to get detailed information about samples in the file.
suspend fun extractThumbnail(mediaItem: MediaItem) {
    FrameExtractor.Builder(context, mediaItem).build().use { frameExtractor ->
        val thumbnail = frameExtractor.getThumbnail().await()
    }
}
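The list above also mentions MetadataRetriever. As a rough sketch of how it could be used, assuming it follows the same Builder/use pattern as the FrameExtractor snippet above and treating the retrieveDurationUs method name as an assumption rather than confirmed API:
// Hedged sketch: the Builder/use pattern and retrieveDurationUs are assumptions -
// check the media3-inspector reference docs for the exact API.
suspend fun readDurationUs(mediaItem: MediaItem): Long =
    MetadataRetriever.Builder(context, mediaItem).build().use { retriever ->
        retriever.retrieveDurationUs().await()
    }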
Build a basic Material3 Compose Media UI in just a few steps
In previous releases we started providing connector code between Compose UI elements and your Player instance. With Media3 1.9.0, we added a new module, media3-ui-compose-material3, with fully-styled Material3 buttons and content elements. They allow you to build a media UI in just a few steps, while providing all the flexibility to customize the style. If you prefer to build your own UI style, you can use the building blocks that take care of all the update and connection logic, so you only need to concentrate on designing the UI element. Please check out our extended guide pages for the Compose UI modules. We are also still working on even more Compose components, like a prebuilt seek bar, a complete out-of-the-box replacement for PlayerView, as well as subtitle and ad integration.
@Composable
fun SimplePlayerUI(player: Player, modifier: Modifier = Modifier) {
    Column(modifier) {
        ContentFrame(player) // Video surface and shutter logic
        Row(Modifier.align(Alignment.CenterHorizontally)) {
            SeekBackButton(player) // Simple controls
            PlayPauseButton(player)
            SeekForwardButton(player)
        }
    }
}
Simple Compose player UI with out-of-the-box elements
Automatically handle transitions between Cast and local playbacks
When you set up your MediaSession, simply build a CastPlayer around your ExoPlayer and add a MediaRouteButton to your UI and you're done!
// MediaSession setup with CastPlayer
val exoPlayer = ExoPlayer.Builder(context).build()
val castPlayer = CastPlayer.Builder(context).setLocalPlayer(exoPlayer).build()
val session = MediaSession.Builder(context, castPlayer).build()

// MediaRouteButton in UI
@Composable
fun UIWithMediaRouteButton() {
    MediaRouteButton()
}
New CastPlayer integration in Media3 session demo app
Consistent AV1 playback with the rewritten extension based on dav1d
The 1.9.0 release contains a completely rewritten AV1 extension module based on the popular dav1d library. As with all extension decoder modules, please note that it requires building from source to bundle the relevant native code correctly. Bundling a decoder provides consistency and format support across all devices, but because it runs the decoding in your process, it's best suited for content you can trust.
Integrate caching and memory management into PreloadManager
- Caching support - When defining how far to preload, you can now choose PreloadStatus.specifiedRangeCached(0, 5000) as a target state for preloaded items. This will add the specified range to your cache on disk instead of loading the data to memory. With this, you can provide a much larger range of items for preloading, as the ones further away from the current item no longer need to occupy memory. Note that this requires setting a Cache in DefaultPreloadManager.Builder (see the sketch below).
- Automatic memory management - We also updated our LoadControl interface to better handle the preload case, so you are now able to set an explicit upper memory limit for all preloaded items in memory. It's 144 MB by default, and you can configure the limit in DefaultLoadControl.Builder. The DefaultPreloadManager will automatically stop preloading once the limit is reached, and automatically releases memory of lower-priority items if required.
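As a rough illustration of the caching option described above - the Builder shape, the setCache setter name, the downloadCache variable, and the TargetPreloadStatusControl lambda are assumptions, so treat this as a sketch rather than confirmed code and check the DefaultPreloadManager reference docs:
// Hedged sketch: setCache, downloadCache and the TargetPreloadStatusControl lambda
// are assumptions - verify against the DefaultPreloadManager API.
val targetPreloadStatusControl = TargetPreloadStatusControl<Int> { rankingData ->
    // Keep the first 5 seconds of each nearby item in the on-disk cache
    // instead of holding preloaded data in memory.
    PreloadStatus.specifiedRangeCached(0, 5000)
}

val preloadManager = DefaultPreloadManager.Builder(context, targetPreloadStatusControl)
    .setCache(downloadCache) // a Cache is required for cached preload ranges
    .build()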
Rely on new simplified default behaviors in ExoPlayer
As always, we added lots of incremental improvements to ExoPlayer as well. To name just a few:
- Mute and unmute - We already had a setVolume method, but have now added the convenience mute and unmute methods to easily restore the previous volume without keeping track of it yourself (see the sketch after this list).
- Stuck player detection - In some rare cases the player can get stuck in a buffering or playing state without making any progress, for example due to codec issues or misconfigurations. Your users will be annoyed, but you never see these issues in your analytics! To make this more obvious, the player now reports a StuckPlayerException when it detects a stuck state.
- Wakelock by default - Wake lock management was previously opt-in, resulting in hard-to-find edge cases where playback progress could be delayed a lot when running in the background. Now this feature is opt-out, so you don't have to worry about it and can also remove all manual wake lock handling around playback.
- Simplified setting for CC button logic - Changing TrackSelectionParameters to say "turn subtitles on/off" was surprisingly hard to get right, so we added a simple boolean selectTextByDefault option for this use case.
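As a quick illustration of the mute/unmute and subtitle options above - the mute() and unmute() calls follow the description in this post, while setSelectTextByDefault is an assumed builder name for the selectTextByDefault option, so verify it against the release notes:
// Hedged sketch: mute()/unmute() follow the post's description;
// setSelectTextByDefault is an assumed setter name for the new option.
fun toggleMute(player: Player, muted: Boolean) {
    if (muted) player.mute() else player.unmute() // previous volume is restored for you
}

fun enableSubtitlesByDefault(player: Player) {
    player.trackSelectionParameters = player.trackSelectionParameters
        .buildUpon()
        .setSelectTextByDefault(true) // assumed setter for selectTextByDefault
        .build()
}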
Simplify your media button preferences in MediaSession
Until now, defining your preferences for which buttons should show up in the media notification drawer on Android Auto or Wear OS required defining custom commands and buttons, even if you simply wanted to trigger a standard player method. Media3 1.9.0 has new functionality to make this a lot simpler - you can now define your media button preferences with a standard player command, requiring no custom command handling at all.
session.setMediaButtonPreferences(listOf(
CommandButton.Builder(CommandButton.ICON_FAST_FORWARD) // choose an icon
.setDisplayName(R.string.skip_forward)
.setPlayerCommand(Player.COMMAND_SEEK_FORWARD) // choose an action
.build()
))
Media button preferences with fast forward button
CompositionPlayer for real-time preview
The 1.9.0 release introduces CompositionPlayer under a new @ExperimentalApi annotation. The annotation indicates that it is available for experimentation, but is still under development. CompositionPlayer is a new component in the Media3 editing APIs designed for real-time preview of media edits. Built upon the familiar Media3 Player interface, CompositionPlayer allows users to see their changes in action before committing to the export process. It uses the same Composition object that you would pass to Transformer for exporting, streamlining the editing workflow by unifying the data model for preview and export.
We encourage you to start using CompositionPlayer and share your feedback, and keep an eye out for forthcoming posts and updates to the documentation for more details.
InAppMuxer as a default muxer in Transformer
New speed adjustment APIs
val speedProvider = object : SpeedProvider {
override fun getSpeed(presentationTimeUs: Long): Float {
return speed
}
override fun getNextSpeedChangeTimeUs(timeUs: Long): Long {
return C.TIME_UNSET
}
}
val speedEffectItem = EditedMediaItem.Builder(mediaItem)
    .setSpeed(speedProvider)
    .build()
This new approach replaces the previous method of using Effects#createExperimentalSpeedChangingEffects(), which we've deprecated and will remove in a future release.
Introducing track types for EditedMediaItemSequence
Specifying the track types of a sequence is done via a new EditedMediaItemSequence.Builder constructor that accepts a set of track types (e.g., C.TRACK_TYPE_AUDIO, C.TRACK_TYPE_VIDEO).
To simplify creation, we've added new static convenience methods:
- EditedMediaItemSequence.withAudioFrom(List<EditedMediaItem>)
- EditedMediaItemSequence.withVideoFrom(List<EditedMediaItem>)
- EditedMediaItemSequence.withAudioAndVideoFrom(List<EditedMediaItem>)
We encourage you to migrate to the new constructor or the convenience methods for clearer and more reliable sequence definitions.
Example of creating a video-only sequence:
val videoOnlySequence =
    EditedMediaItemSequence.Builder(setOf(C.TRACK_TYPE_VIDEO))
        .addItem(editedMediaItem)
        .build()
---
Please get in touch via the Media3 issue tracker if you run into any bugs, or if you have questions or feature requests. We look forward to hearing from you!
19 Dec 2025 10:00pm GMT
TalkAndroid
You won’t believe how quickly you can free up your photo storage today
Is your phone begging for mercy under the overwhelming weight of blurry gigabytes? Do you get that chilling…
19 Dec 2025 5:30pm GMT
Android Developers Blog
Goodbye Mobile Only, Hello Adaptive: Three essential updates from 2025 for building adaptive apps
Posted by Fahd Imtiaz - Product Manager, Android Developer
Goodbye Mobile Only, Hello Adaptive: Three essential updates from 2025 for building adaptive apps
In 2025 the Android ecosystem has grown far beyond the phone. Today, developers have the opportunity to reach over 500 million active devices, including foldables, tablets, XR, Chromebooks, and compatible cars.
These aren't just additional screens; they represent a higher-value audience. We've seen that users who own both a phone and a tablet spend 9x more on apps and in-app purchases than those with just a phone. For foldable users, that average spend jumps to roughly 14x more*.
This engagement signals a necessary shift in development: goodbye mobile apps, hello adaptive apps.
To help you build for that future, we spent this year releasing tools that make adaptive the default way to build. Here are three key updates from 2025 designed to help you build these experiences.
Standardizing adaptive behavior with Android 16
To support this shift, Android 16 introduced significant changes to how apps can restrict orientation and resizability. On displays of at least 600dp, manifest and runtime restrictions are ignored, meaning apps can no longer lock themselves to a specific orientation or size. Instead, they fill the entire display window, ensuring your UI scales seamlessly across portrait and landscape modes.
Because this means your app context will change more frequently, it's important to verify that you are preserving UI state during configuration changes. While Android 16 offers a temporary opt-out to help you manage this transition, Android 17 (SDK 37) will make this behavior mandatory. To ensure your app behaves as expected under these new conditions, use the resizable emulator in Android Studio to test your adaptive layouts today.
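One simple way to keep UI state intact across the more frequent configuration changes described above is to hold it in rememberSaveable (or in a ViewModel). A minimal Compose sketch, with the screen content left as a placeholder:
// Minimal sketch: the text field and scroll position survive rotation and resizing
// because they are stored with rememberSaveable rather than plain remember.
@Composable
fun SearchScreen() {
    var query by rememberSaveable { mutableStateOf("") }
    val listState = rememberLazyListState() // already backed by a Saver internally

    Column {
        TextField(value = query, onValueChange = { query = it })
        LazyColumn(state = listState) {
            // item content elided
        }
    }
}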
Supporting screens beyond the tablet with Jetpack WindowManager 1.5.0
As devices evolve, our existing definitions of "large" need to evolve with them. In October, we released Jetpack WindowManager 1.5.0 to better support the growing number of very large screens and desktop environments.
On these surfaces, the standard "Expanded" layout, which usually fits two panes comfortably, often isn't enough. On a 27-inch monitor, two panes can look stretched and sparse, leaving valuable screen real estate unused. To solve this, WindowManager 1.5.0 introduced two new width window size classes: Large (1200dp to 1600dp) and Extra-large (1600dp+).
These new breakpoints signal when to switch to high-density interfaces. Instead of stretching a typical list-detail view, you can take advantage of the width to show three or even four panes simultaneously. Imagine an email client that comfortably displays your folders, the inbox list, the open message, and a calendar sidebar, all in a single view. Support for these window size classes was added to Compose Material 3 adaptive in the 1.2 release.
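As a rough sketch of how these breakpoints can drive layout decisions, using plain dp thresholds (840dp is the standard Expanded lower bound; 1200dp and 1600dp are the new Large and Extra-large bounds from this post) rather than the library's WindowSizeClass constants, which you should look up in the WindowManager 1.5.0 and Compose Material 3 adaptive docs:
// Illustrative only: real code should use the library's window size classes
// instead of hard-coded dp thresholds.
fun paneCountFor(windowWidthDp: Int): Int = when {
    windowWidthDp >= 1600 -> 4 // Extra-large: folders, inbox, message, calendar sidebar
    windowWidthDp >= 1200 -> 3 // Large: add a third pane
    windowWidthDp >= 840 -> 2  // Expanded: classic list-detail
    else -> 1                  // Compact/Medium: single pane
}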
Rethinking user journeys with Jetpack Navigation 3
Building a UI that morphs from a single phone screen to a multi-pane tablet layout used to require complex state management. This often meant forcing a navigation graph designed for single destinations to handle simultaneous views. First announced at I/O 2025, Jetpack Navigation 3 is now stable, introducing a new approach to handling user journeys in adaptive apps.
Built for Compose, Nav3 moves away from the monolithic graph structure. Instead, it provides decoupled building blocks that give you full control over your back stack and state. This solves the single source of truth challenge common in split-pane layouts. Because Nav3 uses the Scenes API, you can display multiple panes simultaneously without managing conflicting back stacks, simplifying the transition between compact and expanded views.
A foundation for an adaptive future
This year delivered the tools you need, from optimizing for expansive layouts to the granular controls of WindowManager and Navigation 3. And, Android 16 began the shift toward truly flexible UI, with updates coming next year to deliver excellent adaptive experiences across all form factors. To learn more about adaptive development principles and get started, head over to d.android.com/adaptive-apps.
The tools are ready, and the users are waiting. We can't wait to see what you build!
*Source: internal Google data
19 Dec 2025 5:00pm GMT
TalkAndroid
This overlooked trick doubles any Android phone’s speed in just 30 seconds
If someone had told you that your sluggish Android could get an instant facelift without buying a new…
19 Dec 2025 4:30pm GMT
FiiO’s Snowsky Disc revives the CD era with a modern digital twist
Get your retro groove on...
19 Dec 2025 4:07pm GMT
Last-Minute Mobile Gaming Controller Deals (Still Time for Delivery)
Get them delivered in time for Christmas
19 Dec 2025 3:52pm GMT
Last-Minute Christmas Gaming Controllers to Gift (Still Time for Delivery)
Make a saving on these gaming controllers
19 Dec 2025 3:05pm GMT
Samsung’s £300 Galaxy Reward makes holiday upgrades far more tempting
Samsung is offering up to £300 back via Samsung Wallet
19 Dec 2025 12:40pm GMT
The Best OnePlus 15 Cases
Protect your OnePlus 15's magnificent back design with these premium cases.
19 Dec 2025 12:12pm GMT
Unlock This Hidden Feature: Use Street View and Maps Side by Side Now
Have you ever felt like you're missing out on a secret shortcut in Google Maps-some hidden gem that…
19 Dec 2025 7:30am GMT
18 Dec 2025
Android Developers Blog
Bringing Androidify to Wear OS with Watch Face Push

Posted by Garan Jenkin - Developer Relations Engineer
A few months ago we relaunched Androidify as an app for generating personalized Android bots. Androidify transforms your selfie photo into a playful Android bot using Gemini and Imagen.
However, given that Android spans multiple form factors, including our most recent addition, XR, we thought, how could we bring the fun of Androidify to Wear OS?
An Androidify watch face
As Androidify bots are highly-personalized, the natural place to showcase them is the watch face. Not only is it the most frequently visible surface but also the most personal surface, allowing you to represent who you are.

Personalized Androidify watch face, generated from selfie image
Androidify now has the ability to generate a watch face dynamically within the phone app and then send it to your watch, where it will automatically be set as your watch face. All of this happens within seconds!
High-level design
End-to-end flow for watch face creation and installation
In order to achieve the end-to-end experience, a number of technologies need to be combined together, as shown in this high-level design diagram.
First of all, the user's avatar is combined with a pre-existing Watch Face Format template, which is then packaged into an APK. This is validated - for reasons which will be explained! - and sent to the watch.
On being received by the watch, the new Watch Face Push API - part of Wear OS 6 - is used to install and activate the watch face.
Let's explore the details:
Creating the watch face templates
The watch face is created from a template, itself designed in Watch Face Designer. This is our new Figma plugin that allows you to create Watch Face Format watch faces directly within Figma.
An Androidify watch face template in Watch Face Designer
The plugin allows the watch face to be exported in a range of different ways, including as Watch Face Format (WFF) resources. These can then be easily incorporated as assets within the Androidify app, for dynamically building the finalized watch face.
Packaging and validation
Once the template and avatar have been combined, the Portable Asset Compiler Kit (Pack) is used to assemble an APK.
In Androidify, Pack is used as a native library on the phone. For more details on how Androidify interfaces with the Pack library, see the GitHub repository.
As a final step before transmission, the APK is checked by the Watch Face Push validator.
This validator checks that the APK is suitable for installation. This includes checking the contents of the APK to ensure it is a valid watch face, as well as some performance checks. If it is valid, then the validator produces a token.
This token is required by the watch for installation.
Sending the watch face
The Androidify app on Wear OS uses WearableListenerService to listen for events on the Wearable Data Layer.
The phone app transfers the watch face by using a combination of MessageClient to set up the process, then ChannelClient to stream the APK.
Installing the watch face on the watch
Once the watch face is received on the Wear OS device, the Androidify app uses the new Watch Face Push API to install the watch face:
val wfpManager =
    WatchFacePushManagerFactory.createWatchFacePushManager(context)
val response = wfpManager.listWatchFaces()
try {
    if (response.remainingSlotCount > 0) {
        wfpManager.addWatchFace(apkFd, token)
    } else {
        val slotId = response.installedWatchFaceDetails.first().slotId
        wfpManager.updateWatchFace(slotId, apkFd, token)
    }
} catch (a: WatchFacePushManager.AddWatchFaceException) {
    return WatchFaceInstallError.WATCH_FACE_INSTALL_ERROR
} catch (u: WatchFacePushManager.UpdateWatchFaceException) {
    return WatchFaceInstallError.WATCH_FACE_INSTALL_ERROR
}
Androidify uses either the addWatchFace or updateWatchFace method, depending on the scenario. Watch Face Push defines a concept of "slots": how many watch faces a given app can have installed at any time. For Wear OS 6, this value is 1.
Androidify's approach is to install the watch face if there is a free slot, and if not, any existing watch face is swapped out for the new one.
Setting the active watch face
Installing the watch face programmatically is a great step, but Androidify seeks to ensure the watch face is also the active watch face.
Watch Face Push introduces a new runtime permission which must be granted in order for apps to be able to achieve this:
com.google.wear.permission.SET_PUSHED_WATCH_FACE_AS_ACTIVE
Once this permission has been acquired, the wfpManager.setWatchFaceAsActive() method can be called to set an installed watch face as the active watch face.
However, there are a number of considerations that Androidify has to navigate:
- setWatchFaceAsActive can only be used once.
- SET_PUSHED_WATCH_FACE_AS_ACTIVE cannot be re-requested after being denied by the user.
- Androidify might already be in control of the active watch face.
For more details see how Androidify implements the set active logic.
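A minimal sketch of the happy path - checking the runtime permission and then activating the pushed watch face. The no-argument setWatchFaceAsActive() call simply mirrors how it is referenced above (check the reference docs for the exact signature), and the single-use and denial caveats from the list still apply:
// Minimal sketch: permission request flow and error handling omitted;
// the exact setWatchFaceAsActive() signature should be verified.
const val SET_ACTIVE_PERMISSION =
    "com.google.wear.permission.SET_PUSHED_WATCH_FACE_AS_ACTIVE"

suspend fun activatePushedWatchFace(context: Context) {
    val wfpManager = WatchFacePushManagerFactory.createWatchFacePushManager(context)
    val granted = ContextCompat.checkSelfPermission(context, SET_ACTIVE_PERMISSION) ==
        PackageManager.PERMISSION_GRANTED
    if (granted) {
        // Remember: this can only succeed once, and the permission cannot be
        // re-requested after the user denies it.
        wfpManager.setWatchFaceAsActive()
    }
}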
Get started with Watch Face Push for Wear OS
Watch Face Push is a versatile API, equally suited to enhancing Androidify as it is to building fully-featured watch face marketplaces.
Perhaps you have an existing phone app and are looking for opportunities to further engage and delight your users?
Or perhaps you're an existing watch face developer looking to create your own community and gallery through releasing a marketplace app?
Take a look at these resources:
And also check out the accompanying video for a greater-depth look at how we brought Androidify to Wear OS!
We're looking forward to what you'll create with Watch Face Push!
18 Dec 2025 5:00pm GMT
17 Dec 2025
Android Developers Blog
Brighten Your Real-Time Camera Feeds with Low Light Boost
Posted by Donovan McMurray, Developer Relations Engineer
Today, we're diving into Low Light Boost (LLB), a powerful feature designed to brighten real-time camera streams. Unlike Night Mode, which requires a hold-still capture duration, Low Light Boost works instantaneously on your live preview and video recordings. LLB automatically adjusts how much brightening is needed based on available light, so it's optimized for every environment.
With a recent update, LLB allows Instagram users to line up the perfect shot, and then their existing Night Mode implementation results in the same high quality low-light photos their users have been enjoying for over a year.
Why Real-time Brightness Matters
While Night Mode aims to improve final image quality, Low Light Boost is intended for usability and interactivity in dark environments. It's also worth noting that, even though they work together very well, you can use LLB and Night Mode independently; as some of the use cases below show, LLB has value on its own even when Night Mode photos aren't needed. Here is how LLB improves the user experience:
- Better Framing & Capture: In dimly lit scenes, a standard camera preview can be pitch black. LLB brightens the viewfinder, allowing users to actually see what they are framing before they hit the shutter button. For this experience, you can use Night Mode for the best quality low-light photo result, or you can let LLB give the user a "what you see is what you get" photo result.
- Reliable Scanning: QR codes are ubiquitous, but scanning them in a dark restaurant or parking garage is often frustrating. With a significantly brighter camera feed, scanning algorithms can reliably detect and decode QR codes even in very dim environments.
- Enhanced Interactions: For apps involving live video interactions (like AI assistants or video calls), LLB increases the amount of perceivable information, ensuring the computer vision models have enough data to work with.
The Difference in Instagram
It's easy to imagine the difference this makes in the user experience. If users aren't able to see what they're capturing, then there's a higher chance they'll abandon the capture.
Choosing Your Implementation
There are two ways to implement Low Light Boost to provide the best experience across the widest range of devices:
- Low Light Boost AE Mode: This is a hardware-layer auto-exposure mode. It offers the highest quality and performance because it fine-tunes the Image Signal Processor (ISP) pipeline directly. Always check for this first.
- Google Low Light Boost: If the device doesn't support the AE mode, you can fall back to this software-based solution provided by Google Play services. It applies post-processing to the camera stream to brighten it. As an all-software solution, it is available on more devices, so this implementation helps you reach more devices with LLB. (A sketch of this hardware-first, software-fallback check follows the list.)
Low Light Boost AE Mode (Hardware)
Mechanism:
This mode is supported on devices running Android 15 and newer and requires the OEM to have implemented the support in HAL (currently available on Pixel 10 devices). It integrates directly with the camera's Image Signal Processor (ISP). If you set CaptureRequest.CONTROL_AE_MODE to CameraMetadata.CONTROL_AE_MODE_ON_LOW_LIGHT_BOOST_BRIGHTNESS_PRIORITY, the camera system takes control.
Behavior:
The HAL/ISP analyzes the scene and adjusts sensor and processing parameters, often including increasing exposure time, to brighten the image. This can yield frames with a significantly improved signal-to-noise ratio (SNR) because the extended exposure time, rather than an increase in digital sensor gain (ISO), allows the sensor to capture more light information.
Advantage:
Potentially better image quality and power efficiency as it leverages dedicated hardware pathways.
Trade-off:
May result in a lower frame rate in very dark conditions as the sensor needs more time to capture light. The frame rate can drop to as low as 10 FPS in very low light conditions.
Google Low Light Boost (Software via Google Play Services)
Mechanism:
This solution, distributed as an optional module via Google Play services, applies post-processing to the camera stream. It uses a sophisticated realtime image enhancement technology called HDRNet.
Google HDRNet:
This deep learning model analyzes the image at a lower resolution to predict a compact set of parameters (a bilateral grid). This grid then guides the efficient, spatially-varying enhancement of the full-resolution image on the GPU. The model is trained to brighten and improve image quality in low-light conditions, with a focus on face visibility.
Process Orchestration:
The HDRNet model and its accompanying logic are orchestrated by the Low Light Boost processor. This includes:
- Scene Analysis: A custom calculator that estimates the true scene brightness using camera metadata (sensor sensitivity, exposure time, etc.) and image content. This analysis determines the boost level.
- HDRNet Processing: Applies the HDRNet model to brighten the frame. The model used is tuned for low light scenes and optimized for realtime performance.
- Blending: The original and HDRNet processed frames are blended. The amount of blending applied is dynamically controlled by the scene brightness calculator, ensuring a smooth transition between boosted and unboosted states.
Advantage:
Works on a broader range of devices (currently supports Samsung S22 Ultra, S23 Ultra, S24 Ultra, S25 Ultra, and Pixel 6 through Pixel 9) without requiring specific HAL support. Maintains the camera's frame rate as it's a post-processing effect.
Trade-off:
As a post-processing method, the quality is limited by the information present in the frames delivered by the sensor. It cannot recover details lost due to extreme darkness at the sensor level.
By offering both hardware and software pathways, Low Light Boost provides a scalable solution to enhance low-light camera performance across the Android ecosystem. Developers should prioritize the AE mode where available and use the Google Low Light Boost as a robust fallback.
Implementing Low Light Boost in Your App
Now let's look at how to implement both LLB offerings. You can implement the following whether you use CameraX or Camera2 in your app. For the best results, we recommend implementing both Step 1 and Step 2.
Step 1: Low Light Boost AE Mode
Available on select devices running Android 15 and higher, LLB AE Mode functions as a specific Auto-Exposure (AE) mode.
1. Check for Availability
First, check if the camera device supports LLB AE Mode.
val cameraInfo = cameraProvider.getCameraInfo(cameraSelector)
val isLlbSupported = cameraInfo.isLowLightBoostSupported
2. Enable the Mode
If supported, you can enable LLB AE Mode using CameraX's CameraControl object.
// After setting up your camera, use the CameraControl object to enable LLB AE Mode.
camera = cameraProvider.bindToLifecycle(...)
if (isLlbSupported) {
    try {
        // The .await() extension suspends the coroutine until the
        // ListenableFuture completes. If the operation fails, it throws
        // an exception which we catch below.
        camera?.cameraControl?.enableLowLightBoostAsync(true)?.await()
    } catch (e: IllegalStateException) {
        Log.e(TAG, "Failed to enable low light boost: not available on this device or with the current camera configuration", e)
    } catch (e: CameraControl.OperationCanceledException) {
        Log.e(TAG, "Failed to enable low light boost: camera is closed or value has changed", e)
    }
}
3. Monitor the State
Just because you requested the mode doesn't mean it's currently "boosting." The system only activates the boost when the scene is actually dark. You can set up an Observer to update your UI (like showing a moon icon) or convert to a Flow using the extension function asFlow().
if (isLlbSupported) {
    camera?.cameraInfo?.lowLightBoostState?.asFlow()?.collectLatest { state ->
        // Update UI accordingly
        updateMoonIcon(state == LowLightBoostState.ACTIVE)
    }
}
You can read the full guide on Low Light Boost AE Mode here.
Step 2: Google Low Light Boost
For devices that don't support the hardware AE mode, Google Low Light Boost acts as a powerful fallback. It uses a LowLightBoostSession to intercept and brighten the stream.
1. Add Dependencies
This feature is delivered via Google Play services.
implementation("com.google.android.gms:play-services-camera-low-light-boost:16.0.1-beta06") // Add coroutines-play-services to simplify Task APIs implementation("org.jetbrains.kotlinx:kotlinx-coroutines-play-services:1.10.2")
2. Initialize the Client
Before starting your camera, use the LowLightBoostClient to ensure the module is installed and the device is supported.
val llbClient = LowLightBoost.getClient(context)

// Check support and install if necessary
val isSupported = llbClient.isCameraSupported(cameraId).await()
val isInstalled = llbClient.isModuleInstalled().await()
if (isSupported && !isInstalled) {
    // Trigger installation
    llbClient.installModule(installCallback).await()
}
3. Create a LLB Session
Google LLB processes each frame, so you must give your display Surface to the LowLightBoostSession, and it gives you back a Surface that has the brightening applied. For Camera2 apps, you can add the resulting Surface with CaptureRequest.Builder.addTarget(). For CameraX, this processing pipeline aligns best with the CameraEffect class, where you can apply the effect with a SurfaceProcessor and provide it back to your Preview with a SurfaceProvider, as seen in this code.
// With a SurfaceOutput from SurfaceProcessor.onSurfaceOutput() and a
// SurfaceRequest from Preview.SurfaceProvider.onSurfaceRequested(),
// create a LLB Session.
suspend fun createLlbSession(surfaceRequest: SurfaceRequest, outputSurfaceForLlb: Surface) {
    // 1. Create the LLB Session configuration
    val options = LowLightBoostOptions(
        outputSurfaceForLlb,
        cameraId,
        surfaceRequest.resolution.width,
        surfaceRequest.resolution.height,
        true // Start enabled
    )

    // 2. Create the session.
    val llbSession = llbClient.createSession(options, callback).await()

    // 3. Get the surface to use.
    val llbInputSurface = llbSession.getCameraSurface()

    // 4. Provide the surface to the CameraX Preview UseCase.
    surfaceRequest.provideSurface(llbInputSurface, executor, resultListener)

    // 5. Set the scene detector callback to monitor how much boost is being applied.
    val onSceneBrightnessChanged = object : SceneDetectorCallback {
        override fun onSceneBrightnessChanged(
            session: LowLightBoostSession,
            boostStrength: Float
        ) {
            // Monitor the boostStrength from 0 (no boosting) to 1 (maximum boosting)
        }
    }
    llbSession.setSceneDetectorCallback(onSceneBrightnessChanged, null)
}
4. Pass in the Metadata
For the algorithm to work, it needs to analyze the camera's auto-exposure state. You must pass capture results back to the LLB session. In CameraX, this can be done by extending your Preview.Builder with Camera2Interop.Extender.setSessionCaptureCallback().
Camera2Interop.Extender(previewBuilder).setSessionCaptureCallback(
    object : CameraCaptureSession.CaptureCallback() {
        override fun onCaptureCompleted(
            session: CameraCaptureSession,
            request: CaptureRequest,
            result: TotalCaptureResult
        ) {
            super.onCaptureCompleted(session, request, result)
            llbSession?.processCaptureResult(result)
        }
    }
)
Detailed implementation steps for the client and session can be found in the Google Low Light Boost guide.
Next Steps
By implementing these two options, you ensure that your users can see clearly, scan reliably, and interact effectively, regardless of the lighting conditions.
To see these features in action within a complete, production-ready codebase, check out the Jetpack Camera App on GitHub. It implements both LLB AE Mode and Google LLB, giving you a reference for your own integration.
17 Dec 2025 5:00pm GMT
Build smarter apps with Gemini 3 Flash
Posted by Thomas Ezan, Senior Developer Relations Engineer
Gemini 3 optimized for low-latency
Gemini 3 is our most intelligent model family to date. With the launch of Gemini 3 Flash, we are making that intelligence more accessible for low-latency and cost-effective use cases. While Gemini 3 Pro is designed for complex reasoning, Gemini 3 Flash is engineered to be significantly faster and more cost-effective for your production apps.
Seamless integration with Firebase AI Logic
Just like the Pro model, Gemini 3 Flash is available in preview directly through the Firebase AI Logic SDK. This means you can integrate it into your Android app without needing to do any complex server-side setup. Here is how to add it to your Kotlin code:
val model = Firebase.ai(backend = GenerativeBackend.googleAI())
.generativeModel(
modelName = "gemini-3-flash-preview")
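Once the model handle exists, generating content is a single suspend call. A minimal sketch - the summarize function and prompt here are illustrative, while generateContent and response.text match the template snippet later in this post:
// Minimal sketch: call the model from a coroutine and read the text result.
suspend fun summarize(model: GenerativeModel, article: String): String? {
    val response = model.generateContent("Summarize in two sentences: $article")
    return response.text
}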
Scale with Confidence
In addition, Firebase enables you to keep your growth secure and manageable with:
AI Monitoring
The Firebase AI monitoring dashboard gives you visibility into latency, success rates, and costs, allowing you to slice data by model name to see exactly how the model performs.
Server Prompt Templates
You can use server prompt templates to store your prompt and schema securely on Firebase servers instead of hardcoding them in your app binary. This capability ensures your sensitive prompts remain secure, prevents unauthorized prompt extraction, and allows for faster iteration without requiring app updates.
---
model: 'gemini-3-flash-preview'
input:
  schema:
    topic:
      type: 'string'
      minLength: 2
      maxLength: 40
    length:
      type: 'number'
      minimum: 1
      maximum: 200
    language:
      type: 'string'
---
{{role "system"}}
You're a storyteller that tells nice and joyful stories with happy endings.
{{role "user"}}
Create a story about {{topic}} with the length of {{length}} words in the {{language}} language.
Prompt template defined on the Firebase Console
val generativeModel = Firebase.ai.templateGenerativeModel()
val response = generativeModel.generateContent(
    "storyteller-v10",
    mapOf(
        "topic" to topic,
        "length" to length,
        "language" to language
    )
)
_output.value = response.text
Code snippet to access the prompt template
Gemini 3 Flash for AI development assistance in Android Studio
Gemini 3 Flash is also available for AI assistance in Android Studio. While Gemini 3 Pro Preview is our best model for coding and agentic experiences, Gemini 3 Flash is engineered for speed, and is great for common development tasks and questions. The new model is rolling out to developers using Gemini in Android Studio at no cost (as the default model) starting today. For higher usage rate limits and longer sessions with Agent Mode, you can use an AI Studio API key to leverage the full capabilities of either Gemini 3 Flash or Gemini 3 Pro. We're also rolling out Gemini 3 model family access with higher usage rate limits to developers who have Gemini Code Assist Standard or Enterprise licenses. Your IT administrator will need to enable access to preview models through the Google Cloud console.
Get Started Today
You can start experimenting with Gemini 3 Flash via Firebase AI Logic today. Learn more about it in the Android and Firebase documentation. Try out any of the new Gemini 3 models in Android Studio for development assistance, and let us know what you think! As always, you can follow us across LinkedIn, Blog, YouTube, and X.
17 Dec 2025 4:13pm GMT
15 Dec 2025
Android Developers Blog
Notes from Google Play: A look back at the tools that powered your growth in 2025
Posted by Sam Bright - VP & GM, Google Play + Developer Ecosystem
Hi everyone,
Thank you for making 2025 another amazing year for Google Play.
Together, we've built Play into something much more than a store: it's a dynamic ecosystem powered by your creativity. This year, our focus was ensuring Play continues to be the best destination for people to discover incredible content and enjoy rewarding gaming experiences.
We're incredibly proud of the progress we've made alongside you, and we're excited to celebrate those who pushed the boundaries of what's possible, like the winners of our Best of 2025 awards. Watch our recap video to see how we've made Play even more rewarding for your business, or read on for a more in-depth look back on the year.
Evolving Play to be more than a store
This year, we focused on evolving Play into a true destination for discovery where billions of people around the world can find and enjoy experiences that make life more productive and delightful.
Making Play the best destination for your games business
Just a few months ago, we shared our vision for a more unified experience that brings more fun to gaming. Today, players often jump between different platforms to discover, play, and get rewarded. Our goal is to connect these journeys to create the best experience for players and, consequently, grow your business. Our first steps include these key updates:
- A new Gamer Profile that tracks cross-game stats, streaks, and achievements, customizable with a Gen AI Avatar.
- Integrated Rewards across mobile and PC that give players access to VIP experiences like our Four Days of Fantastic Rewards at San Diego Comic-Con, the Diamond District experience on Roblox, and Play's own treasure-hunt mini-game Diamond Valley, alongside new Play Games Leagues where players can compete in their favorite games, climb leaderboards, and win Play Points rewards.
- The new Play Games Sidekick, a helpful in-game overlay that curates and organizes relevant gaming info, and provides direct access to Gemini Live for real-time AI-powered guidance in the game. We recently rolled out the open beta to developers, and we encourage you to start testing the sidekick in your games and share your feedback.
- Integrated gameplay across devices is now fully realized as Google Play Games on PC has graduated from beta to general availability, solidifying our commitment to cross-platform play and making our catalog of over 200,000 titles available across mobile and PC.
Play Games Sidekick is a new in-game overlay that gives players instant access to their rewards, offers, and achievements, driving higher engagement for your game.
To help you get the most out of this unified gaming experience, we introduced the Google Play Games Level Up program, a new way to unlock greater success for your business. For titles that meet core user experience guidelines, you can unlock a powerful suite of benefits including the ability to:
- Re-engage players on the new You tab, a personalized destination on the Play Store designed to help you re-engage and retain players by showcasing content and rewards from recently played games in one dedicated space. You can utilize engagement tools in Play Console to feature your latest events, offers, and updates.
- Maximize your game's reach with prominent boosts across the store, including featuring opportunities, Play Points boosters and quests, and enhanced visibility on editorial surfaces like the Games Home and Play Points Home.
You tab is a personalized destination designed to help you re-engage and retain players by showcasing your latest events, offers, and updates.
Unlocking more discovery and engagement for your apps and their content
Last year, we shared our vision for a content-rich Google Play that has already delivered strong results. Year-over-year, Apps Home has seen an over-18% increase in average monthly visitors, with apps seeing 9% growth in acquisitions and double-digit growth* in app spend for those monetizing on Google Play. We introduced even more updates to elevate discovery and engagement on and off the store.
- Curated spaces, launched last year, have been a success, fostering routine engagement by delivering daily, locally relevant content (such as football highlights in Brazil, cricket in India, and comics in Japan) directly to the Apps Home. Building on this, we expanded to new categories and locations, including a new entertainment-focused space in Korea.
Curated spaces make it easier to find and engage with local interests.
- We significantly increased timely, relevant content on Google Play through Spotlight and new topic browse pages. Spotlight, located at the top of Apps Home, offers seasonal content feeds, like Taylor Swift's recent album launch or holiday movie guides, in a new, immersive way to connect users with current cultural moments. Concurrently, new topic browse pages were integrated across the store in the U.S., Japan, and South Korea, allowing content deep dives into over 100,000 shows and movies.
Spotlight offers an immersive experience connecting users with relevant apps during current cultural moments.
Last year, we introduced Engage SDK to help you deliver personalized content to users across surfaces and seamlessly guide them into the relevant in-app experiences. Integrating it unlocks surfaces like Collections, our immersive full-screen experience bringing content directly to the user's home screen. This year, we rolled out updates to expand your content's reach even further:
- Engage SDK content expanded to the Play Store this summer, enabling seamless re-engagement across Apps Home and the new You tab.
- Rolled out to more markets, including Brazil, Germany, India, Japan, and Korea.
Supporting you throughout your app lifecycle
In addition to evolving the store, we've continued to build on our powerful toolset to support you at every stage, from testing and release to growth and monetization.
Helping you deliver high-quality, trusted user experiences
We launched key updates in Android Studio and Play Console to help you build more stable and compliant apps.
- Policy insights in Android Studio help you catch potential violations early by showing in-context guidance, such as policy summaries and best practices, whenever code related to a Google Play policy is detected.
- You can now halt fully rolled-out releases to stop the distribution of problematic app versions through Play Console and the Publishing API.
- We also added new Android vitals performance metrics, including low memory kill metrics, which provide device-specific insights to resolve stability problems, and excessive partial wake lock metrics to help you address battery drain.
New Android vitals metrics help you resolve stability problems and address battery drain.
Boosting your productivity and workflow
We refined the Play Console experience to make managing your app and your marketing content more efficient.
- We put your most essential insights front and center with a redesigned app dashboard and overview pages.
- To repurpose creative content across Play Console more easily, we launched an asset library that lets you upload from Google Drive, organize with tags, and crop existing visuals.
- You can now automatically translate app strings with Gemini at no cost. This feature eliminates manual translation work for new releases, making updates seamless. You remain in full control with the ability to preview translations using a built-in emulator, and can easily edit or disable the service.
Translate app strings automatically with Gemini, while maintaining full control for previewing and editing.
Maximizing your revenue with secure, frictionless payments
We introduced new features focused on driving purchases and maximizing subscription revenue globally.
- We're improving purchase conversion globally with over 800 million users now ready to buy. We launched features that encourage users to set up payment methods early, provide AI-powered payment method recommendations, and expanded our payment library to support more local payment methods globally.
- To maximize recurring revenue from over 400 million paid subscriptions, we introduced multi-product checkout, allowing you to sell base subscriptions and add-ons in a single, simple transaction.
- To combat churn, we began showcasing subscription benefits in more places and provided you with more flexible options like extended grace periods and account holds for declined payments, which has proven effective in reducing involuntary churn by an average of 10%*.
To help reduce voluntary churn, we're showcasing your subscription benefits across Play.
Investing in our app and game community with developer programs
We're proud to invest in programs for app and game companies around the world to help you grow and succeed on Play.
- Google Play Apps Accelerator: We've opened submissions for our program that will help early-stage app companies scale their business. Selected companies from over 80 eligible countries will join a 12-week accelerator starting in March 2026, where they can learn more about creating high-quality apps, go-to-market strategies, user acquisition, and more.
Submissions are still open for our 12-week accelerator, which starts in March 2026.
Apply by January 7, 2026 for consideration.
- Indie Games Fund (Latin America): Now in its fourth year, this fund provides support to 10 promising game studios in Latin America with funding and hands-on support from Google Play. In October, we announced the 2025 recipients.
- ChangGoo Program (South Korea): Now in its seventh year, this program works with over 100 Korean mobile app and game startups to foster their growth and expansion in collaboration with the Ministry of SMEs and Startups and the Korean Institute of Startup and Entrepreneurship Development (KISED).
- Google Play x Unity Game Developer Training Program (Indonesia): The third edition launched in April, offering a 6-month online curriculum, meetups, and mentorship for Indonesian game developers in partnership with Indonesia's Ministry of Creative Economy and the Indonesian Gaming Association.
- Google Play x Unity Game Developer Training Program (India): The first India cohort kicked off in November with 500 aspiring and professional game developers. The 6-month journey provides an online curriculum and meetups in partnership with GDAI and the governments of Tamil Nadu and Maharashtra.
- Google for Startups Accelerator program (India): We provided Seed to Series-A AI-powered app startups in India with insights on the latest AI advancements, mentorship, and expert guidance.
Protecting your business and our ecosystem
At the heart of all the progress we've made this year is a foundation of trust and security. We're always innovating to make Play safer for everyone-so users can trust every app they download and so you can keep building a thriving business.
To offer stronger protection for your business and users, we continued to enhance the Play Integrity API and our anti-fraud systems. On average, apps using Play Integrity features see 80% lower unauthorized usage, and our efforts have safeguarded top apps using Play Billing from $2.9 billion in fraud and abuse in the last year.
- Automatically fix user issues: New Play in-app remediation prompts in the Play Integrity API automatically guide users to fix common problems like network issues, outdated Google Play services, or device integrity flags, reducing integration complexity and getting users back to a good state faster.
- Combat repeat bad actors: Device recall is a powerful new tool that lets you store and recall limited data associated with a device, even if the device is reset, helping protect your business model from repeat bad actors.
- Strengthen revenue protection: We've introduced stronger protections against abuse, including refining pricing arbitrage detection and enhancing protection against free trial and intro pricing abuse for subscriptions, helping your business models remain profitable.
In-app remediation prompts cover a wide range of issues to guide your users back to a good state.
For a full breakdown of new ways we're keeping the ecosystem safe, check out our deep-dive blog post here.
Thank you for your partnership
This is an incredible time for Google Play. We've made huge strides together, and your passion, creativity, and feedback throughout 2025 have made Play that much stronger. We're grateful to work alongside the best developer community in the world, and we look forward to unlocking even greater success together in the new year.
Happy holidays!
Sam Bright
VP & GM, Google Play + Developer Ecosystem
* Source: Internal Google data
15 Dec 2025 5:00pm GMT
18% Faster Compiles, 0% Compromises

Posted by Santiago Aboy Solanes - Software Engineer, Vladimír Marko - Software Engineer
The Android Runtime (ART) team has reduced compile time by 18% without compromising the quality of the compiled code or introducing any peak memory regressions. This improvement was part of our 2025 initiative to improve compile time without sacrificing memory usage or the quality of the compiled code.
Optimizing compile-time speed is crucial for ART. For example, when just-in-time (JIT) compiling, it directly impacts the efficiency of applications and overall device performance. Faster compilations reduce the time before the optimizations kick in, leading to a smoother and more responsive user experience. Furthermore, for both JIT and ahead-of-time (AOT), improvements in compile-time speed translate to reduced resource consumption during the compilation process, benefiting battery life and device thermals, especially on lower-end devices.
Some of these compile-time speed improvements launched in the June 2025 Android release, and the rest will be available in the end-of-year release of Android. Furthermore, all Android users on versions 12 and above are eligible to receive these improvements through mainline updates.
Optimizing the optimizing compiler
Optimizing a compiler is always a game of trade-offs. You can't just get speed for free; you have to give something up. We set a very clear and challenging goal for ourselves: make the compiler faster, but do it without introducing memory regressions and, crucially, without degrading the quality of the code it produces. If the compiler is faster but the apps run slower, we've failed.
The one resource we were willing to spend was our own development time to dig deep, investigate, and find clever solutions that met these strict criteria. Let's take a closer look at how we work to find areas to improve, as well as finding the right solutions to the various problems.
Finding worthwhile possible optimizations
Before you can begin to optimize a metric, you have to be able to measure it. Otherwise, you can't ever be sure if you improved it or not. Luckily for us, compile time speed is fairly consistent as long as you take some precautions like using the same device you use for measuring before and after a change, and making sure you don't thermal throttle your device. On top of that, we also have deterministic measurements like compiler statistics that help us understand what's going on under the hood.
Since the resource we were sacrificing for these improvements was our development time, we wanted to be able to iterate as fast as we could. This meant that we grabbed a handful of representative apps (a mix of first-party apps, third-party apps, and the Android operating system itself) to prototype solutions. Later, we verified that the final implementation was worth it with both manual and automated testing in a widespread manner.
With that set of hand-picked APKs, we would trigger a manual compile locally, get a profile of the compilation, and use pprof to visualize where we are spending our time.
Example of a profile's flame graph in pprof
The pprof tool is very powerful and allows us to slice, filter, and sort the data to see, for example, which compiler phases or methods are taking most of the time. We will not go into detail about pprof itself; just know that a bigger bar means that part took more of the compile time.
One of these views is the "bottom up" one, where you can see which methods are taking most of the time. In the image below we can see a method called Kill, accounting for over 1% of the compile time. Some of the other top methods will also be discussed later in the blog post.
Bottom up view of a profile
In our optimizing compiler, there's a phase called Global Value Numbering (GVN). You don't have to worry about what it does as a whole; the relevant part is that it has a method called `Kill` that deletes some nodes according to a filter. This is time-consuming, as it has to iterate through all the nodes and check them one by one. We noticed that there are some cases in which we know in advance that the check will be false, no matter which nodes are alive at that point. In these cases, we can skip iterating altogether, bringing the method from 1.023% of compile time down to ~0.3% and improving GVN's runtime by ~15%.
Implementing worthwhile optimizations
We covered how to measure and how to detect where the time is being spent, but this is only the beginning. The next step is how to optimize the time being spent compiling.
Usually, in a case like the `Kill` one above, we would take a look at how we iterate through the nodes and do it faster by, for example, doing things in parallel or improving the algorithm itself. In fact, that's what we tried at first, and only when we couldn't find anything to improve did we have a "Wait a minute…" moment and see that the solution was to (in some cases) not iterate at all! When doing these kinds of optimizations, it is easy to miss the forest for the trees.
In other cases, we used a handful of different techniques, including:
- using heuristics to decide whether an optimization will fail to produce worthwhile results and therefore can be skipped
- using extra data structures to cache computed data
- changing the current data structures to get a speed boost
- lazily computing results to avoid cycles in some cases
- using the right abstraction - unnecessary features can slow down the code
- avoiding chasing a frequently used pointer through many loads
How do we know if the optimizations are worth pursuing?
That's the neat part: you don't. After detecting that an area is consuming a lot of compile time and devoting development time to try to improve it, sometimes you just can't find a solution. Maybe there's nothing to be done, or a fix would take too long to implement, regress another metric significantly, increase codebase complexity, and so on. For every successful optimization you can see in this blog post, know that there are countless others that just didn't come to fruition.
If you are in a similar situation, try to estimate how much you are going to improve the metric by doing as little work as you can. This means, in order:
- Estimating with metrics you have already collected, or just a gut feeling
- Estimating with a quick and dirty prototype
- Implementing a solution
Don't forget to consider estimating the drawbacks of your solution. For example, if you are going to rely on extra data structures, how much memory are you willing to use?
Diving deeper
Without further ado, let's look at some of the changes we implemented.
We implemented a change to optimize a method called FindReferenceInfoOf. This method was doing a linear search of a vector to find an entry. We updated that data structure to be indexed by the instruction's id so that FindReferenceInfoOf would be O(1) instead of O(n). We also pre-allocated the vector to avoid resizing. We slightly increased memory as we had to add an extra field that counted how many entries we had inserted in the vector, but it was a small sacrifice to make as peak memory didn't increase. This sped up our LoadStoreAnalysis phase by 34-66%, which in turn gives ~0.5-1.8% compile time improvement.
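As a rough sketch of that data-structure change (Kotlin here for readability; the real code is C++ inside LoadStoreAnalysis and the names are illustrative):
class ReferenceInfo  // placeholder for the per-instruction analysis data

// Before: a list searched linearly for the entry with a matching instruction id.
// After: an array pre-allocated to the number of instructions and indexed by id,
// so lookups are O(1) and the storage never needs to be resized.
class ReferenceInfoTable(instructionCount: Int) {
    private val infos = arrayOfNulls<ReferenceInfo>(instructionCount)
    private var entryCount = 0  // the small extra field mentioned above

    fun insert(instructionId: Int, info: ReferenceInfo) {
        if (infos[instructionId] == null) entryCount++
        infos[instructionId] = info
    }

    fun findReferenceInfoOf(instructionId: Int): ReferenceInfo? = infos[instructionId]
}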
We have a custom implementation of HashSet that we use in several places. Creating this data structure was taking a considerable amount of time, and we found out why: many years ago it was used in only a few places that needed very big HashSets, and it was tuned for that. Nowadays the usage pattern is the opposite: mostly small, short-lived sets with only a few entries. We were wasting cycles creating a huge HashSet only to use it for a few entries and then discard it. With this change, we improved compile time by ~1.3-2%. As an added bonus, memory usage decreased by ~0.5-1% since we no longer allocate such large data structures.
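In plain Kotlin/JVM terms (ART's HashSet is a custom C++ class, so this is only an analogy), the fix boils down to sizing the set for how it is actually used:
// Before: every caller paid for a table tuned for huge sets,
// even when it only held a handful of short-lived entries.
val before = HashSet<Int>(4096)

// After: start small; the rare large user can still grow the set on demand.
val after = HashSet<Int>(8)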
We improved compile time by ~0.5-1% by passing data structures to a lambda by reference to avoid copying them around. This was missed in the original review and sat in our codebase for years. Looking at the pprof profiles showed that these methods were creating and destroying a lot of data structures, which led us to investigate and optimize them.
We sped up the phase that writes the compiled output by caching computed values, which translated to ~1.3-2.8% of total compile time improvement. Sadly, the extra bookkeeping was too much, and our automated testing alerted us to the memory regression. Later, we took a second look at the same code and implemented a new version that not only fixed the memory regression but also improved compile time by a further ~0.5-1.8%! In this second change, we had to refactor and reimagine how this phase should work in order to get rid of one of the two data structures.
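In spirit, the first version of that change was straightforward memoization; a hypothetical Kotlin sketch (the real phase, its keys, and the later refactoring are ART internals not shown here):
// Cache values that are expensive to compute and requested repeatedly while
// writing the compiled output. The cache itself is the "extra bookkeeping"
// that initially regressed memory.
class OutputValueCache(private val compute: (Int) -> Long) {
    private val cache = HashMap<Int, Long>()

    fun valueFor(key: Int): Long = cache.getOrPut(key) { compute(key) }
}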
We have a phase in our optimizing compiler which inlines function calls in order to get better performance. To choose which methods to inline we use both heuristics before we do any computation, and final checks after doing work but right before we finalize the inlining. If any of those detect that the inlining is not worth it (for example, too many new instructions would be added), then we don't inline the method call.
We moved two checks from the "final checks" category to the "heuristic" category to estimate whether an inlining will succeed before we do any time-expensive computation. Since this is an estimate it is not perfect, but we verified that our new heuristics cover 99.9% of what was inlined before without affecting performance. One of these new heuristics was about the needed DEX registers (~0.2-1.3% improvement), and the other about the number of instructions (~2% improvement).
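Conceptually, the change moves cheap estimates ahead of the expensive work; here is a hedged Kotlin sketch with invented names and limits (the real inliner is C++ and its thresholds differ):
// Illustrative only: the point is that the register and instruction estimates
// are checked before building the callee graph, not after.
data class CalleeEstimate(val dexRegisters: Int, val instructions: Int)

fun worthTryingToInline(
    estimate: CalleeEstimate,
    maxRegisters: Int,
    maxInstructions: Int,
): Boolean =
    estimate.dexRegisters <= maxRegisters && estimate.instructions <= maxInstructions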
We have a custom implementation of a BitVector that we use in several places. We replaced the resizable BitVector class with a simpler BitVectorView for certain fixed-size bit vectors. This eliminates some indirections and run-time range checks and speeds up the construction of the bit vector objects.
Furthermore, the BitVectorView class was templatized on the underlying storage type (instead of always using uint32_t as the old BitVector). This allows some operations, for example Union(), to process twice as many bits together on 64-bit platforms. The samples of the affected functions were reduced by more than 1% in total when compiling the Android OS. This was done across several changes [1, 2, 3, 4, 5, 6]
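A rough Kotlin analogy of why a fixed size and a wider storage word help (the real BitVector and BitVectorView are C++ templates inside ART):
// Fixed-size bit vector backed by 64-bit words: no resizing logic, no run-time
// range checks for growth, and the union touches half as many words as a
// 32-bit-word implementation for the same number of bits.
class FixedBitVector(bitCount: Int) {
    private val words = LongArray((bitCount + 63) / 64)

    fun set(bit: Int) {
        words[bit ushr 6] = words[bit ushr 6] or (1L shl (bit and 63))
    }

    fun isSet(bit: Int): Boolean = (words[bit ushr 6] shr (bit and 63)) and 1L != 0L

    fun unionWith(other: FixedBitVector) {
        for (i in words.indices) words[i] = words[i] or other.words[i]
    }
}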
If we talked in detail about all the optimizations we would be here all day! If you are interested in some more optimizations, take a look at some other changes we implemented:
-
Add bookkeeping to improve compilation times by ~0.6-1.6%.
-
Lazily compute data to avoid cycles, if possible.
-
Refactor our code to skip precomputing work when it will not be used.
-
Avoid some dependent load chains when the allocator can be readily obtained from other places.
-
Another case of adding a check to avoid unnecessary work.
-
Avoid frequent branching on register type (core/FP) in register allocator.
-
Make sure some arrays are initialized at compile time. Don't rely on clang to do it.
-
Clean up some loops. Use range loops that clang can optimize better because it does not need to reload the internal pointers of the container due to loop side effects. Avoid calling the virtual function `HInstruction::GetInputRecords()` in the loop via the inlined `InputAt(.)` for each input.
-
Avoid Accept() functions for the visitor pattern by exploiting a compiler optimization.
Conclusion
Our dedication to improving ART's compile-time speed has yielded significant improvements, making Android more fluid and efficient while also contributing to better battery life and device thermals. By diligently identifying and implementing optimizations, we've demonstrated that substantial compile-time gains are possible without compromising memory usage or code quality.
Our journey involved profiling with tools like pprof, a willingness to iterate, and sometimes even abandoning less fruitful avenues. The collective efforts of the ART team have not only reduced compile time by a noteworthy percentage, but have also laid the groundwork for future advancements.
All of these improvements are available in the 2025 end-of-year Android update, and for Android 12 and above through mainline updates. We hope this deep dive into our optimization process provides valuable insights into the complexities and rewards of compiler engineering!
15 Dec 2025 5:00pm GMT
11 Dec 2025
Android Developers Blog
Building a safer Android and Google Play, together
Posted by Matthew Forsythe, Director, Product Management, App & Ecosystem Trust, and Ron Aquino, Sr. Director, Trust and Safety, Chrome, Android and Play
Earlier this year, we reiterated our commitment to keeping Android and Google Play safe for everyone and maintaining a thriving environment where users can trust the apps they download and your business can flourish. We've heard your feedback clearly, from excited conversations at Play events around the world to the honest concerns on social media. You want simpler ways to make sure your apps are compliant and pass review, and need strong protections for your business so you can focus on growth and innovation. We are proud of the steps we've taken together this year, but know this is ongoing work in a complex, ever-changing market.
Here are key actions we've taken this year to simplify your development journey and strengthen protection.
Simpler ways to build safer apps from the start
This year, we focused on making improvements to the app publishing experience by reducing friction points, from the moment you write code to submitting your app for review.
-
Policy guidance right where you code: We rolled out Play Policy Insights to all developers using Android Studio. This feature provides real-time, in-context guidance and policy warnings as you code, helping you proactively identify and resolve potential issues before you even submit your app for review.
-
Pre-review checks to help prevent app review surprises: Last year, we launched pre-review checks in Play Console so you can identify issues early, like incomplete policy declarations or crashes, and avoid rejections. This year, we expanded these checks for privacy policy links, login credential requirements, data deletion request links, inaccuracies in your Data safety form, and more.
Stronger protection for your business and users
We are committed to providing you with powerful ways to protect your apps and users from abuse. Beyond existing tools, programs, and the performance and security enhancements that come with every Android release, we've also launched:
-
Advanced abuse and fraud protection: We made the Play Integrity API faster and more resilient, and introduced new features like Play remediation prompts and device recall in beta. Device recall is a powerful new tool that lets you store and recall limited data associated with a device, even if the device is reset, helping protect your business model from repeat bad actors.
-
Tools to keep kids safe:
-
We continued to invest in protecting children across Google products, including Google Play. New Play policy helps keep our youngest users safe globally by requiring apps with dating and gambling features to use Play Console tools to prevent minors from accessing them. Our enhanced Restrict Minor Access feature now blocks users who we determine to be minors from searching for, downloading, or making purchases in apps that they shouldn't have access to.
-
We've also been providing tools to developers to help meet significant new age verification regulatory requirements in applicable US states.
-
More ways to stop malware from snooping on your app: Android 16 provides a new, powerful defense in a single line of code: accessibilityDataSensitive. This flag lets you explicitly mark views in your app as containing sensitive data and block malicious apps from seeing or performing interactions on them. If you already use setFilterTouchesWhenObscured(true) to protect your app from tapjacking, your views are automatically treated as sensitive data for accessibility, giving you an instant additional layer of defense with no extra work.
Smoother policy compliance experience
We're listening to your concerns and proactively working to make the experience of Play policy compliance and Android security requirements more transparent, predictable, and accessible for all developers. You asked for clarity, fairness, and speed, and here is what we launched:
-
More support when you need it: Beyond the webinars and resources that we share, you told us you needed more direct policy help to understand requirements and get answers. Next week, we'll add a direct way for you to reach our team about policy questions: you'll find this new, integrated support experience directly within Play Console via the "Help" section. We also expanded the Google Play Developer Help Community to more languages, like Indonesian, Japanese, Korean, and Portuguese.
-
Clearer documentation: You asked for policy that's easier to understand. To help you quickly grasp essential requirements, we've introduced a new Key Considerations section across several policies (like Permissions and Target API Level) and included concise "Do's & Don'ts" and easier-to-read summaries.
-
More transparent appeals process: We introduced a 180-day appeal window for account terminations. This allows us to prioritize and make decisions faster for developers who file appeals.
-
Android developer verification design changes: To support a diverse range of users and developers, we're taking action on your feedback.
-
First, we're creating a dedicated free account type to support students and hobbyists who want to build apps just for a small group, like family and friends. This means that you can share your creations to a limited number of devices without needing to go through the full developer verification process.
-
We're also building a flow for experienced users to be able to install unverified apps. This is being carefully designed to balance providing choice with prioritizing security, including clear warnings so users fully understand the risks before choosing to bypass standard safety checks.
The improvements we made this year are only the beginning. Your feedback helps drive our roadmap, and it will continue to inform future refinements to our policies, tools, and experiences, helping ensure Android and Google Play remain the safest and most trusted place for you to innovate and grow your business.
Thank you for being our partner in building the future of Android.
11 Dec 2025 10:26pm GMT
Enhancing Android security: Stop malware from snooping on your app data
Posted by Bennet Manuel, Product Management, Android App Safety and Rob Clifford, Developer Relations
Security is foundational to Android. We partner with you to keep the platform safe and protect user data by offering powerful security tools and features, like Credential Manager and FLAG_SECURE. Every Android release brings performance and security enhancements, and with Android 16, you can take simple, significant steps to strengthen your app's defenses. Check out our video or continue reading to learn more about our enhanced protections for accessibility APIs.
Protect your app from snooping with a single line of code
The accessibilityDataSensitive flag allows you to explicitly mark a view or composable as containing sensitive data. When you set this flag to true on a view, you are essentially blocking potentially malicious apps from accessing your sensitive view data or performing interactions on it. Here is how it works: any app requesting accessibility permission that hasn't explicitly declared itself as a legitimate accessibility tool (isAccessibilityTool=true) is denied access to that view.
This simple but effective change helps to prevent malware from stealing information and performing unauthorized actions, all without impacting users' experience of legitimate accessibility tools. Note: If an app is not an accessibility tool but requests accessibility permissions and sets isAccessibilityTool=true, Play will reject it and Google Play Protect will block it on user devices.
Automatic, enhanced security for setFilterTouchesWhenObscured protection
If you already use setFilterTouchesWhenObscured(true) to protect your app from tapjacking, your views are automatically treated as sensitive data for accessibility. By enhancing the setFilterTouchesWhenObscured method with accessibilityDataSensitive protections, we're instantly giving everyone an additional layer of defense with no extra work.
Getting started
We recommend that you use setFilterTouchesWhenObscured, or alternatively the accessibilityDataSensitive flag, on any screen that contains sensitive information, including login pages, payment flows, and any view displaying personal or financial data.
For Jetpack Compose
setFilterTouchesWhenObscured:
val composeView = LocalView.current
DisposableEffect(Unit) {
    composeView.filterTouchesWhenObscured = true
    onDispose { composeView.filterTouchesWhenObscured = false }
}
accessibilityDataSensitive:
Use the semantics modifier to apply the sensitiveData property to a composable.
BasicText(
    text = "Your password",
    modifier = Modifier.semantics { sensitiveData = true }
)
For View-based apps
In your XML layout, add the relevant attribute to the sensitive view.
setFilterTouchesWhenObscured:
<TextView android:filterTouchesWhenObscured="true" />
accessibilityDataSensitive:
<TextView android:accessibilityDataSensitive="true" />
Alternatively, you can set the property programmatically in Java or Kotlin:
setFilterTouchesWhenObscured:
// Kotlin
myView.filterTouchesWhenObscured = true
// Java
myView.setFilterTouchesWhenObscured(true);
accessibilityDataSensitive:
// Kotlin
myView.isAccessibilityDataSensitive = true
// Java
myView.setAccessibilityDataSensitive(true);
Read more about the accessibilityDataSensitive and setFilterTouchesWhenObscured flags in the Tapjacking guide.
Partnering with developers to keep users safe
We worked with developers early to ensure this feature meets real-world needs and integrates smoothly into your workflow.
"We've always prioritized protecting our customers' sensitive financial data, which required us to build our own protection layer against accessibility-based malware. Revolut strongly supports the introduction of this new, official Android API, as it allows us to gradually move away from our custom code in favor of a robust, single-line platform defense."
- Vladimir Kozhevnikov, Android Engineer at Revolut
Together, we can build a more secure and trustworthy experience for everyone.
11 Dec 2025 5:00pm GMT
#WeArePlay: How Matraquinha helps non-verbal kids communicate
Posted by Robbie McLachlan, Developer Marketing
In our latest #WeArePlay film, we meet Adriano, Wagner and Grazyelle. The trio are behind Matraquinha, an app helping thousands of non-verbal children in more than 80 countries communicate. Discover more about their inspiring story and the impact on their own son, Gabriel.
Wagner, you developed Matraquinha for a deeply personal reason: your son, Gabriel. Can you tell us what inspired you to create this app for him?
My wife and I adopted our son at 10 months. We later found out he couldn't speak and received a diagnosis of Autism, so we started researching ways to communicate with him and vice versa. The idea started with drawings of objects and phrases on cards for him to point to things he wanted. We wanted to make this more digital and so, with my brother Adriano's help, we developed the Matraquinha app.
How does the app work?
Wagner: The app has almost 250 drawings, like digital flashcards. The child points to a card and the app announces the name of the object, place or feeling. Parents then more clearly understand what their child needs.
Grazyelle: As a mom, after Gabriel started using the app, he was able to communicate and that reduced his feeling of crisis a lot. Before, he would be frustrated. Now with the app, my son can tell me what he needs.
Matraquinha started as a personal app for your family, but is now helping users in over 77 countries. How did you achieve this scale?
Adriano: When my brother came to me with the idea, we thought it would be for our family and had no idea it would turn into a global resource for more families. In the first week, we had 1 download. By the next year, we had 100,000 downloads, all organic with no ads. It showed us how important the app was to help families communicate with their non-verbal children.
Adriano: It's truly incredible for us to be on Google Play because, even without being senior engineers, this tool gave us an opportunity-an entry point-to bring communication to other families. We use other tools like Firebase Analytics which lets us see which cards and categories people are using the most, this helps us when developing new versions.
What is next for Matraquinha, and what features are you most excited about bringing to the community?
We are adding an extra 500 real images to the app, because kids are growing and no longer want drawings as they become teenagers. We're also creating a board that has pronouns, nouns, and verbs. So, say a child wants to let their parents know they like to eat hamburgers; they can tap on the different words and create a sentence. This gives them even more independence. We are also exploring ways to use AI to make the app even more personal while pursuing the same goal: ensuring every child can be heard.
Discover other inspiring app and game founders featured in #WeArePlay.
11 Dec 2025 5:00pm GMT
08 Dec 2025
Android Developers Blog
#WeArePlay: How Miksapix Interactive is bringing ancient Sámi Mythology to gamers worldwide
Posted by Robbie McLachlan, Developer Marketing
In our latest #WeArePlay film, which celebrates the people behind apps and games on Google Play, we meet Mikkel - the founder and CEO of Miksapix Interactive. Mikkel is on a mission to share the rich stories and culture of the Sámi people through gaming. Discover how he is building a powerful platform for cultural preservation using a superheroine.
You went from a career in broadcasting to becoming a founder in the games industry. What inspired that leap?
I've had an interest in games for a long time and always found the medium interesting. While I was working for a broadcast corporation in Karasjok, I was thinking, "Why aren't there any Sámi games or games with Sámi content?". Sámi culture is quite rich in lore and mythology. I wanted to bring that to a global stage. That's how Miksapix Interactive was born.
Your game, Raanaa - The Shaman Girl, is deeply rooted in Sámi culture. What is the significance of telling these specific stories?
Because these are our stories to tell! Our mission is to tell them to a global audience to create awareness about Sámi identity and culture. Most people in the world don't know about the Sámi and the Sámi cultures and lore. With our languages at risk, I hope to use storytelling as a way to inspire Sámi children to value their language, celebrate their identity, and take pride in their cultural heritage. Sámi mythology is rich with powerful matriarchs and goddesses, which inspired us to create a superheroine. Through her journey of self-discovery and empowerment, Raanaa finds her true strength - a story we hope will inspire hope and resilience in pre-teens and teens around the world. Through games like Raanaa - The Shaman Girl, we get to convey our stories in new formats.
How did growing up with rich storytelling affect your games?
I was raised in a reindeer herding family, which means we spent a lot of time in nature and in the fields with the reindeer. Storytelling was a big part of the family. We would eat supper in the Lavvu tent, sitting around a bonfire with relatives and parents telling stories. With Miksapix Interactive, I am taking my love for storytelling and bringing it to the world, using my first-hand experience of the Sámi culture.
How has Google Play helped you achieve global reach from your base in the Arctic?
For us, Google Play was a no-brainer. It was the easiest option to just release it on Google Play, no hassle. We have more downloads from Google Play than anywhere else, and it has definitely helped us reach markets abroad like Brazil, India, the US, and beyond. The positive Play Store reviews motivated and inspired us during the development of Raanaa. We use Google products like Google Sheets for collaboration when we do a localization or translation.
What is next for Miksapix Interactive?
Now, our sights are set on growth. We are very focused on the Raanaa IP. For the mobile game, we are looking into localizing it to different Sámi languages. In Norway, we have six Sámi languages, so we are now going to translate it to Lule Sámi and Southern Sámi. We're planning to have these new Sámi languages available this year.
Discover other inspiring app and game founders featured in #WeArePlay.
08 Dec 2025 10:00pm GMT
Start building for glasses, new devices for Android XR and more in The Android Show | XR Edition


Posted by Matthew McCullough - VP of Product Management, Android Developer
Today, during The Android Show | XR Edition, we shared a look at the expanding Android XR platform, which is fundamentally evolving to bring a unified developer experience to the entire XR ecosystem. The latest announcements, from Developer Preview 3 to exciting new form factors, are designed to give you the tools and platform you need to create the next generation of XR experiences. Let's dive into the details!
A spectrum of new devices ready for your apps
The Android XR platform is quickly expanding, providing more users and more opportunities for your apps. This growth is anchored by several new form factors that expand the possibilities for XR experiences.
A major focus is on lightweight, all-day wearables. At I/O, we announced we are working with Samsung and our partners Gentle Monster and Warby Parker to design stylish, lightweight AI glasses and Display AI glasses that you can wear comfortably all day. The integration of Gemini on glasses is set to unlock helpful, intelligent experiences like live translation and searching what you see.

And partners like Uber are already exploring how AI Glasses can streamline the rider experience by providing simple, contextual directions and trip status right in the user's view.

The ecosystem is simultaneously broadening its scope to include wired XR glasses, exemplified by Project Aura from XREAL. This device blends the immersive experiences typically found in headsets with portability and real-world presence. Project Aura is scheduled for launch next year.
New tools unlock development for all form factors
If you are developing for Android, you are already developing for Android XR. The release of Android XR SDK Developer Preview 3 brings increased stability for headset APIs and, most significantly, opens up development for AI Glasses.

You can now build augmented experiences for AI glasses using new libraries like Jetpack Compose Glimmer, a UI toolkit for transparent displays, and Jetpack Projected, which lets you extend your Android mobile app directly to glasses. Furthermore, the SDK now includes powerful ARCore for Jetpack XR updates, such as Geospatial capabilities for wayfinding.

For immersive experiences on headsets and wired XR glasses like Project Aura from XREAL, this release also provides new APIs for detecting a device's field-of-view, helping your adaptive apps adjust their UI.
Check out our post on the Android XR Developer Preview 3 to learn more about all the latest updates.
Expanding your reach with new engine ecosystems
The Android XR platform is built on the OpenXR standard, enabling integration with the tools you already use so you can build with your preferred engine.
Developers can use Unreal Engine's native Android and OpenXR capabilities today to build for Android XR, leveraging the existing VR Template for immersive experiences. To provide additional, optimized extensions for the Android XR platform, a Google vendor plugin, including support for hand tracking, hand mesh, and more, will be released early next year.
Godot now includes Android XR support, leveraging its focus on OpenXR to enable development for devices like Samsung Galaxy XR. The new Godot OpenXR vendor plugin v4.2.2 stable allows developers to port their existing projects to the platform.
Watch The Android Show | XR Edition
Thank you for tuning into The Android Show | XR Edition. Start building differentiated experiences today using the Developer Preview 3 SDK and test your apps with the XR Emulator in Android Studio. Your feedback is crucial as we continue to build this platform together. Head over to developer.android.com/xr to learn more and share your feedback.
08 Dec 2025 6:00pm GMT
Build for AI Glasses with the Android XR SDK Developer Preview 3 and unlock new features for immersive experiences


Posted by Matthew McCullough - VP of Product Management, Android Developer
In October, Samsung launched Galaxy XR - the first device powered by Android XR. And it's been amazing seeing what some of you have been building! Here's what some of our developers have been saying about their journey into Android XR.
Android XR gave us a whole new world to build our app within. Teams should ask themselves: What is the biggest, boldest version of your experience that you could possibly build? This is your opportunity to finally put into action what you've always wanted to do, because now, you have the platform that can make it real.
You've also seen us share a first look at other upcoming devices that work with Android XR like Project Aura from XREAL and stylish glasses from Gentle Monster and Warby Parker.
To support the expanding selection of XR devices, we are announcing Android XR SDK Developer Preview 3!
With Android XR SDK Developer Preview 3, on top of building immersive experiences for devices such as Galaxy XR, you can also now build augmented experiences for upcoming AI Glasses with Android XR.
New tools and libraries for augmented experiences
With Developer Preview 3, we are unlocking the tools and libraries you need to build intelligent and hands-free augmented experiences for AI Glasses. AI Glasses are lightweight and portable for all-day wear. You can extend your existing mobile app to take advantage of the built-in speakers, camera, and microphone to provide new, thoughtful, and helpful user interactions. With the addition of a small display on display AI Glasses, you can privately present information to users. AI Glasses are perfect for experiences that help enhance a user's focus and presence in the real world.
To power augmented experiences on AI Glasses, we are introducing two new, purpose-built libraries to the Jetpack XR SDK:
-
Jetpack Projected - built to bridge mobile devices and AI Glasses with features that allow you to access sensors, speakers, and displays on glasses
-
Jetpack Compose Glimmer - new design language and UI components for crafting and styling your augmented experiences on display AI Glasses
Jetpack Compose Glimmer is a demonstration of design best practices for beautiful, optical see-through augmented experiences. With UI components optimized for the input modality and styling requirements of display AI Glasses, Jetpack Compose Glimmer is designed for clarity, legibility, and minimal distraction.
To help visualize and test your Jetpack Compose Glimmer UI we are introducing the AI Glasses emulator in Android Studio. The new AI Glasses emulator can simulate glasses-specific interactions such as touchpad and voice input.

Beyond the new Jetpack Projected and Jetpack Compose Glimmer libraries, we are also expanding ARCore for Jetpack XR to support AI Glasses. We are starting off with motion tracking and geospatial capabilities for augmented experiences - the exact features that enable you to create helpful navigation experiences perfect for all-day-wear devices like AI Glasses.

Expanding support for immersive experiences
We continue to invest in the libraries and tooling that power immersive experiences for XR Headsets like Samsung Galaxy XR and wired XR Glasses like the upcoming Project Aura from XREAL. We've been listening to your feedback and have added several highly-requested features to the Jetpack XR SDK since developer preview 2.
Jetpack SceneCore now features dynamic glTF model loading via URIs and improved materials support for creating new PBR materials at runtime. Additionally, the SurfaceEntity component has been enhanced with full Widevine Digital Rights Management (DRM) support and new shapes, allowing it to render 360-degree and 180-degree videos in spheres and hemispheres.
In Jetpack Compose for XR, you'll find new features like the UserSubspace component for follow behavior, ensuring content remains in the user's view regardless of where they look. Additionally, you can now use spatial animations for smooth transitions like sliding or fading. And to support an expanding ecosystem of immersive devices with diverse display capabilities, you can now specify layout sizes as fractions of the user's comfortable field of view.
In Material Design for XR, new components automatically adapt spatially via overrides. These include dialogs that elevate spatially, and navigation bars, which pop out into an Orbiter. Additionally, there is a new SpaceToggleButton component for easily transitioning to and from full space.
And in ARCore for Jetpack XR, new perception capabilities have been added, including face tracking with 68 blendshape values unlocking a world of facial gestures. You can also use eye tracking to power virtual avatars, and depth maps to enable more-realistic interactions with a user's environment.
For devices like Project Aura from XREAL, we are introducing the XR Glasses emulator in Android Studio. This essential tool is designed to give you accurate content visualization, while matching real device specifications for Field of View (FoV), Resolution, and DPI to accelerate your development.
If you build immersive experiences with Unity, we're also expanding your perception capabilities in the Android XR SDK for Unity. In addition to lots of bug fixes and other improvements, we are expanding tracking capabilities to include: QR and ArUco codes, planar images, and body tracking (experimental). We are also introducing a much-requested feature: scene meshing. It enables you to have much deeper interactions with your user's environment - your digital content can now bounce off of walls and climb up couches!
And that's just the tip of the iceberg! Be sure to check out our immersive experiences page for more information.
Get Started Today!
The Android XR SDK Developer Preview 3 is available today! Download the latest Android Studio Canary (Otter 3, Canary 4 or later) and upgrade to the latest emulator version (36.4.3 Canary or later) and then visit developer.android.com/xr to get started with the latest libraries and samples you need to build for the growing selection of Android XR devices. We're building Android XR together with you! Don't forget to share your feedback, suggestions, and ideas with our team as you progress on your journey in Android XR.
08 Dec 2025 6:00pm GMT
04 Dec 2025
Android Developers Blog
Android Studio Otter 2 Feature Drop is stable!
Posted by Sandhya Mohan - Product Manager, Trevor Johns - Developer Relations Engineer

The Android Studio Otter 2 Feature Drop is here to supercharge your productivity.
This final stable release for '25 powers up Agent Mode, equipping it with the new Android Knowledge Base for improved accuracy, and giving you the option to try out the new Gemini 3 model. You'll also be able to take advantage of new settings such as the ability to keep your personalized IDE environment consistent across all of your machines. We've also incorporated all of the latest stability and performance improvements from the IntelliJ IDEA 2025.2 platform, including Kotlin compiler and terminal improvements, making this a significant enhancement for your development workflow.
Updates to Agent Mode
We recently introduced the ability to use our latest model, Gemini 3 Pro Preview, within Android Studio. This is our best model for coding and agentic capabilities. It'll give you superior performance in Agent Mode and advanced problem-solving capabilities so you can focus on what you do best: creating high quality apps for your users.
We are beginning to roll out limited Gemini 3 access (with a 1 million token context window) to developers who are using the no-cost default model. For higher usage rate limits and longer sessions with Agent Mode, you can add a paid Gemini API Key or use a Gemini Code Assist Enterprise plan. Learn more about how to get started with Gemini 3.
While the training of large language models provides deep knowledge that is excellent for common tasks-like creating Compose UIs-training concludes on a fixed date, resulting in gaps for new libraries and updated best practices. They are also less effective with niche APIs because the necessary training data is scarce. To fix this, Android Studio's Agent Mode is now equipped with the Android Knowledge Base, a new feature designed to significantly improve accuracy and reduce hallucinations by grounding responses with authoritative documentation. This means that instead of just relying on its training data, the agent can actively consult fresh documentation from official sources like the Android developer docs, Firebase, Google Developers, and Kotlin docs before it answers you.
The information in the Android Knowledge Base is stored in Android Studio and its content is automatically updated in the background on a periodic basis, so this feature is available regardless of which LLM you're using for AI assistance.
Gemini searching documentation before it answers you
This feature will be invoked automatically when Agent Mode detects a need for additional context, and you'll see additional explanatory text. However, if you'd like Agent Mode to reference documentation more frequently, you can include a line such as "Refer to Android documentation for guidance" in your Rules configuration.
Requested settings updates
Backup and Sync
Backup and Sync is a new way to keep your personalized Android Studio environment consistent across all your installations. You can now back up your settings, including your preferred keymaps, Code Editor settings, system settings, and more, to cloud storage using your Google Account, giving you a seamless experience wherever you code. We also support Backup and Sync using JetBrains accounts for developers using both IntelliJ and Android Studio installs simultaneously.
Backup and Sync
Getting started is simple. Just sign into your Google Account by clicking the avatar in the top-right corner of the IDE, or navigate to Settings > Backup and Sync. Once you authorize Android Studio to access your account's storage, you have full control over which categories of app data you want to sync. If you're syncing for the first time on a new machine, Android Studio will give you the option to either download your existing remote settings or upload your current local settings to the cloud. Of course, if you change your mind, you can easily disable Backup and Sync at any time from the settings menu. This feature has been available since the first Android Studio Otter release.
You can now opt in to receive communications directly from the Android Studio team. This enables you to get emails and notifications about important product updates, new features, and new libraries as soon as they're available.
You'll see this option when you sign in, and you can change your preference at any time by going to Settings > Tools > Google Accounts > Communications.
Your option to receive emails and notifications
-
Kotlin K2 Mode: Following its rapid adoption after being enabled by default, the K2 Kotlin mode is now more stable and performant. This version improves Kotlin code analysis stability, adds new inspections, and enhances the reliability of Kotlin script execution.
-
Terminal Performance: The integrated terminal is significantly faster, with major improvements in rendering. For Bash and Zsh, this update also introduces minor visual refinements without compromising or altering core shell behavior.
04 Dec 2025 6:37pm GMT
03 Dec 2025
Android Developers Blog
What's new in the Jetpack Compose December '25 release
Posted by Nick Butcher, Jetpack Compose Product Manager
Today, the Jetpack Compose December '25 release is stable. This contains version 1.10 of the core Compose modules and version 1.4 of Material 3 (see the full BOM mapping), adding new features and major performance improvements.
To use today's release, upgrade your Compose BOM version to 2025.12.00:
implementation(platform("androidx.compose:compose-bom:2025.12.00"))
Performance improvements
We know that the runtime performance of your app is hugely important to you and your users, so performance has been a major priority for the Compose team. This release brings a number of improvements-and you get them all by just upgrading to the latest version. Our internal scroll benchmarks show that Compose now matches the performance you would see if using Views:
Scroll performance benchmark comparing Views and Jetpack Compose across different versions of Compose
Pausable composition in lazy prefetch
Pausable composition in lazy prefetch is now enabled by default. This is a fundamental change to how the Compose runtime schedules work, designed to significantly reduce jank during heavy UI workloads.
Previously, once a composition started, it had to run to completion. If a composition was complex, this could block the main thread for longer than a single frame, causing the UI to freeze. With pausable composition, the runtime can now "pause" its work if it's running out of time and resume the work in the next frame. This is particularly effective when used with lazy layout prefetch to prepare frames ahead of time. The Lazy layout CacheWindow APIs introduced in Compose 1.9 are a great way to prefetch more content and benefit from pausable composition to produce much smoother UI performance.
Pausable composition combined with Lazy prefetch help reduce jank
We've also optimized performance elsewhere, with improvements to Modifier.onPlaced, Modifier.onVisibilityChanged, and other modifier implementations. We'll continue to invest in improving the performance of Compose.
New features
Retain
Compose offers a number of APIs to hold and manage state across different lifecycles; for example, remember persists state across recompositions, and rememberSaveable/rememberSerializable persist state across activity or process recreation. retain is a new API that sits between these, enabling you to persist values across configuration changes without serialization, but not across process death. As retain does not serialize your state, you can persist objects like lambda expressions, flows, and large objects like bitmaps, which cannot easily be serialized. For example, you might use retain to manage a media player (such as ExoPlayer) to ensure that media playback doesn't get interrupted by a configuration change.
@Composable
fun MediaPlayer() {
val applicationContext = LocalContext.current.applicationContext
val exoPlayer = retain { ExoPlayer.Builder(applicationContext).apply { ... }.build() }
...
}
We want to extend our thanks to the AndroidDev community (especially the Circuit team), who have influenced and contributed to the design of this feature.
Material 1.4
Version 1.4.0 of the material3 library adds a number of new components and enhancements:
-
TextField now offers an experimental TextFieldState-based version, which provides a more robust method for managing text state. In addition, new SecureTextField and OutlinedSecureTextField variants are now offered (see the sketch below). The material Text composable now supports autoSize behaviour.
-
The carousel component now offers a new HorizontalCenteredHeroCarousel variant.
-
TimePicker now supports switching between the picker and input modes.
-
A vertical drag handle helps users to change an adaptive pane's size and/or position.
Horizontal centered hero carousel
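As a quick illustration of the state-based text field APIs mentioned in the list above, here is a minimal sketch; it assumes the material3 1.4 SecureTextField overload that takes a TextFieldState, so double-check the exact signatures against the release notes:
@Composable
fun PasswordField() {
    // TextFieldState holds the text, selection, and undo history for you,
    // and survives recomposition without extra plumbing.
    val passwordState = rememberTextFieldState()

    SecureTextField(
        state = passwordState,
        label = { Text("Password") },
    )
}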
Note that Material 3 Expressive APIs continue to be developed in the alpha releases of the material3 library. To learn more, see this recent talk:
New animation features
We continue to expand on our animation APIs, including updates for customizing shared element animations.
Dynamic shared elements
By default, sharedElement() and sharedBounds() animations attempt to animate layout changes whenever a matching key is found in the target state. However, you may want to disable this animation dynamically based on certain conditions, such as the direction of navigation or the current UI state.
To control whether the shared element transition occurs, you can now customize the SharedContentConfig passed to rememberSharedContentState(). The isEnabled property determines if the shared element is active.
SharedTransitionLayout {
    val transition = updateTransition(currentState)
    transition.AnimatedContent { targetState ->
        // Create the configuration that depends on state changing.
        fun animationConfig(): SharedTransitionScope.SharedContentConfig {
            return object : SharedTransitionScope.SharedContentConfig {
                override val SharedTransitionScope.SharedContentState.isEnabled: Boolean
                    get() =
                        // Determine whether to perform a shared element transition,
                        // e.g. based on navigation direction or the current UI state;
                        // 'true' keeps it always enabled (illustrative placeholder).
                        true
            }
        }
        ...
    }
}
See the documentation for more.
Modifier.skipToLookaheadPosition()
A new modifier, Modifier.skipToLookaheadPosition(), has been added in this release, which keeps the final position of a composable when performing shared element animations. This allows for performing transitions like "reveal" type animation, as can be seen in the Androidify sample with the progressive reveal of the camera. See the video tip here for more information:
Initial velocity in shared element transitions
This release adds a new shared element transition API, prepareTransitionWithInitialVelocity, which lets you pass an initial velocity (e.g. from a gesture) to a shared element transition:
Modifier.fillMaxSize()
.draggable2D(
rememberDraggable2DState { offset += it },
onDragStopped = { velocity ->
// Set up the initial velocity for the upcoming shared element
// transition.
sharedContentStateForDraggableCat
?.prepareTransitionWithInitialVelocity(velocity)
showDetails = false
},
)
A shared element transition that starts with an initial velocity from a gesture
Veiled transitions
EnterTransition and ExitTransition define how an AnimatedVisibility/AnimatedContent composable appears or disappears. A new experimental veil option allows you to specify a color to veil or scrim content; e.g., fading in/out a semi-opaque black layer over content:
Veiled animated content - note the semi-opaque veil (or scrim) over the grid content during the animation
AnimatedContent(
targetState = page,
modifier = Modifier.fillMaxSize().weight(1f),
transitionSpec = {
if (targetState > initialState) {
(slideInHorizontally { it } togetherWith
slideOutHorizontally { -it / 2 } + veilOut(targetColor = veilColor))
} else {
slideInHorizontally { -it / 2 } +
unveilIn(initialColor = veilColor) togetherWith slideOutHorizontally { it }
}
},
) { targetPage ->
...
}
Upcoming changes
Deprecation of Modifier.onFirstVisible
Compose 1.9 introduced Modifier.onVisibilityChanged and Modifier.onFirstVisible. After reviewing your feedback, it became apparent that the contract of Modifier.onFirstVisible was not possible to honor deterministically; specifically, when an item first becomes visible. For example, a Lazy layout may dispose of items that scroll out of the viewport, and then compose them again if they scroll back into view. In this circumstance, the onFirstVisible callback would fire again, as it is a newly composed item. Similar behavior would also occur when navigating back to a previously visited screen containing onFirstVisible. As such, we have decided to deprecate this modifier in the next Compose release (1.11) and recommend migrating to onVisibilityChanged. See the documentation for more information.
Coroutine dispatch in tests
We plan to change coroutine dispatch in tests to reduce test flakiness and catch more issues. Currently, tests use the UnconfinedTestDispatcher, which differs from production behavior; e.g., effects may run immediately rather than being enqueued. In a future release, we plan to introduce a new API that uses StandardTestDispatcher by default to match production behavior. You can try the new behavior now in 1.10:
@get:Rule // also createAndroidComposeRule, createEmptyComposeRule
val rule = createComposeRule(effectContext = StandardTestDispatcher())
Using the StandardTestDispatcher will queue tasks, so you must use synchronization mechanisms like composeTestRule.waitForIdle() or composeTestRule.runOnIdle(). If your test uses runTest, you must ensure that runTest and your Compose rule share the same StandardTestDispatcher instance for synchronization.
// 1. Create a SINGLE dispatcher instance
val testDispatcher = StandardTestDispatcher()
// 2. Pass it to your Compose rule
@get:Rule
val composeRule = createComposeRule(effectContext = testDispatcher)
@Test
// 3. Pass the *SAME INSTANCE* to runTest
fun myTest() = runTest(testDispatcher) {
composeRule.setContent { /* ... */ }
}
Tools
Great APIs deserve great tools, and Android Studio has a number of recent additions for Compose developers:
-
Transform UI: Iterate on your designs by right clicking on the @Preview, selecting Transform UI, and then describing the change in natural language.
-
Generate @Preview: Right-click on a composable and select Gemini > Generate [Composable name] Preview.
-
Customize Material Symbols with new support for icon variations in the Vector Asset wizard.
-
Generate code from a screenshot or ask Gemini to match your existing UI to a target image. This can be combined with remote MCP support e.g. to connect to a Figma file and generate Compose UI from designs.
-
Fix UI quality issues audits your UI for common problems, such as accessibility issues, and then proposes fixes.
To see these tools in action, watch this recent demonstration:
Happy Composing
We continue to invest in Jetpack Compose to provide you with the APIs and tools you need to create beautiful, rich UIs. We value your input, so please share your feedback on these changes or what you'd like to see next in our issue tracker.
03 Dec 2025 8:34pm GMT
02 Dec 2025
Android Developers Blog
Android 16 QPR2 is Released
Posted by Matthew McCullough, VP of Product Management, Android Developer
Faster Innovation with Android's first Minor SDK Release
Today we're releasing Android 16 QPR2, bringing a host of enhancements to user experience, developer productivity, and media capabilities. It marks a significant milestone in the evolution of the Android platform as the first release to utilize a minor SDK version.
A Milestone for Platform Evolution: The Minor SDK Release
To support this, we have introduced new fields to the Build class as of Android 16, allowing your app to check for these new APIs using SDK_INT_FULL and VERSION_CODES_FULL.
if ((Build.VERSION.SDK_INT >= Build.VERSION_CODES.BAKLAVA) &&
    (Build.VERSION.SDK_INT_FULL >= Build.VERSION_CODES_FULL.BAKLAVA_1)) {
    // Call new APIs from the Android 16 QPR2 release
}
Enhanced User Experience and Customization
QPR2 improves Android's personalization and accessibility, giving users more control over how their devices look and feel.
Expanded Dark Theme
When the expanded dark theme setting is enabled by a user, the system uses your app's isLightTheme theme attribute to determine whether to apply inversion. If your app inherits from one of the standard DayNight themes, this is done automatically for you. If it does not, make sure to declare isLightTheme="false" in your dark theme to ensure your app is not inadvertently inverted. Standard Android Views, Composables, and WebViews will be inverted, while custom rendering engines like Flutter will not.
This is largely intended as an accessibility feature. We strongly recommend implementing a native dark theme, which gives you full control over your app's appearance; you can protect your brand's identity, ensure text is readable, and prevent visual glitches from happening when your UI is automatically inverted, guaranteeing a polished, reliable experience for your users.
Custom Icon Shapes & Auto-Theming
In QPR2, users can select specific shapes for their app icons, which apply to all icons and folder previews. Additionally, if your app does not provide a dedicated themed icon, the system can now automatically generate one by applying a color filtering algorithm to your existing launcher icon.
Custom Icon Shapes
Test Icon Shape & Color in Android Studio
Automatic system icon color filtering
Interactive Chooser Sessions
The sharing experience is now more dynamic. Apps can keep the UI interactive even when the system sharesheet is open, allowing for real-time content updates within the Chooser.
Boosting Your Productivity and App Performance
We are introducing tools and updates designed to streamline your workflow and improve app performance.
Linux Development Environment with GUI Applications
The Linux development environment feature has been expanded to support running Linux GUI applications directly within the terminal environment.
Generational Garbage Collection
The Android Runtime (ART) now includes a Generational Concurrent Mark-Compact (CMC) Garbage Collector. This focuses collection on newly allocated objects, resulting in reduced CPU usage and improved battery efficiency.
Widget Engagement Metrics
You can now query user interaction events, such as clicks, scrolls, and impressions, to better understand how users engage with your widgets.
16KB Page Size Readiness
To help prepare for future architecture requirements, we have added early warning dialogs for debuggable apps that are not 16KB page-aligned.
Media, Connectivity, and Health
QPR2 brings robust updates to media standards and device connectivity.
IAMF and Audio Sharing
We have added software decoding support for Immersive Audio Model and Formats (IAMF), an open-source spatial audio format. Additionally, Personal Audio Sharing for Bluetooth LE Audio is now integrated directly into the system Output Switcher.
Health Connect Updates
Health Connect now automatically tracks steps using the device's sensors. If your app has the READ_STEPS permission, this data will be available from the "android" package. Not only does this simplify the code needed to do step tracking, it's also more power efficient. Health Connect can also now track weight, set index, and Rate of Perceived Exertion (RPE) in exercise segments.
Smoother Migrations
A new 3rd-party Data Transfer API enables more reliable data migration between Android and iOS devices.
Strengthening Privacy and Security
Security remains a top priority with new features designed to protect user data and device integrity.
Developer Verification
We introduced APIs to support developer verification during app installation, along with new ADB commands to simulate verification outcomes. As a developer, you are free to install apps without verification by using ADB, so you can continue to test apps that are not intended or not yet ready to distribute to the wider consumer population.
SMS OTP Protection
The delivery of messages containing an SMS retriever hash will be delayed for most apps for three hours to help prevent OTP hijacking. The RECEIVE_SMS broadcast will be withheld and SMS provider database queries will be filtered. The SMS will be available to these apps after the three-hour delay.
Secure Lock Device
A new system-level security state, Secure Lock Device, is being introduced. When enabled (e.g., remotely via "Find My Device"), the device locks immediately and requires the primary PIN, pattern, or password to unlock, heightening security. When active, notifications and quick affordances on the lock screen will be hidden, and biometric unlock may be temporarily disabled.
Get Started
If you're not in the Beta or Canary programs, your Pixel device should get the Android 16 QPR2 release shortly. If you don't have a Pixel device, you can use the 64-bit system images with the Android Emulator in Android Studio. If you are currently on the Android 16 QPR2 Beta and have not yet installed the Android 16 QPR3 beta, you can opt out of the program and you will then be offered the release version of Android 16 QPR2 over the air.
For the best development experience with Android 16 QPR2, we recommend that you use the latest Canary build of Android Studio Otter.
Thank you again to everyone who participated in our Android beta program. We're looking forward to seeing how your apps take advantage of the updates in Android 16 QPR2.
02 Dec 2025 7:00pm GMT






























