19 Dec 2025
Android Developers Blog
Media3 1.9.0 - What’s new
Posted by Kristina Simakova, Engineering Manager
- media3-inspector - Extract metadata and frames outside of playback
- media3-ui-compose-material3 - Build a basic Material3 Compose Media UI in just a few steps
- media3-cast - Automatically handle transitions between Cast and local playback
- media3-decoder-av1 - Consistent AV1 playback with the rewritten extension decoder based on the dav1d library
We also added caching and memory management improvements to PreloadManager, and made several simplifications across ExoPlayer, Transformer and MediaSession.
This release also gives you the first experimental access to CompositionPlayer to preview media edits.
Read on to find out more, and as always please check out the full release notes for a comprehensive overview of changes in this release.
Extract metadata and frames outside of playback
There are many cases where you want to inspect media without starting playback. For example, you might want to detect which formats it contains, determine its duration, or retrieve thumbnails. The new media3-inspector module combines all the utilities for inspecting media without playback in one place:
- MetadataRetriever to read duration, format and static metadata from a MediaItem.
- FrameExtractor to get frames or thumbnails from an item.
- MediaExtractorCompat as a direct replacement for the Android platform MediaExtractor class, to get detailed information about samples in the file.
suspend fun extractThumbnail(mediaItem: MediaItem) {
  FrameExtractor.Builder(context, mediaItem).build().use { frameExtractor ->
    val thumbnail = frameExtractor.getThumbnail().await()
  }
}
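You can read duration and format information with a similar pattern. Here's a minimal sketch, assuming MetadataRetriever in the inspector module follows the same builder-and-futures style as FrameExtractor above (the retrieve method names here are assumptions, so check the media3-inspector reference docs):

suspend fun inspect(mediaItem: MediaItem) {
  // Assumed builder pattern and future-returning methods, mirroring
  // the FrameExtractor snippet above.
  MetadataRetriever.Builder(context, mediaItem).build().use { retriever ->
    val durationUs = retriever.retrieveDurationUs().await()
    val trackGroups = retriever.retrieveTrackGroups().await()
  }
}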
Build a basic Material3 Compose Media UI in just a few steps
In previous releases we started providing connector code between Compose UI elements and your Player instance. With Media3 1.9.0, we added a new module, media3-ui-compose-material3, with fully-styled Material3 buttons and content elements. They allow you to build a media UI in just a few steps, while keeping the flexibility to customize the styling. If you prefer to build your own UI style, you can use the building blocks that take care of all the update and connection logic, so you only need to concentrate on designing the UI element. Please check out our extended guide pages for the Compose UI modules. We are also still working on even more Compose components, like a prebuilt seek bar, a complete out-of-the-box replacement for PlayerView, as well as subtitle and ad integration.
@Composable
fun SimplePlayerUI(player: Player, modifier: Modifier = Modifier) {
  Column(modifier) {
    ContentFrame(player) // Video surface and shutter logic
    Row(Modifier.align(Alignment.CenterHorizontally)) {
      SeekBackButton(player) // Simple controls
      PlayPauseButton(player)
      SeekForwardButton(player)
    }
  }
}
Simple Compose player UI with out-of-the-box elements
Automatically handle transitions between Cast and local playback
When you set up your MediaSession, simply build a CastPlayer around your ExoPlayer and add a MediaRouteButton to your UI and you're done!
// MediaSession setup with CastPlayer
val exoPlayer = ExoPlayer.Builder(context).build()
val castPlayer = CastPlayer.Builder(context).setLocalPlayer(exoPlayer).build()
val session = MediaSession.Builder(context, castPlayer).build()

// MediaRouteButton in UI
@Composable
fun UIWithMediaRouteButton() {
  MediaRouteButton()
}
New CastPlayer integration in Media3 session demo app
Consistent AV1 playback with the rewritten extension based on dav1d
The 1.9.0 release contains a completely rewritten AV1 extension module based on the popular dav1d library. As with all extension decoder modules, please note that it requires building from source to bundle the relevant native code correctly. Bundling a decoder provides consistency and format support across all devices, but because it runs the decoding in your process, it's best suited for content you can trust.
Integrate caching and memory management into PreloadManager
- Caching support - When defining how far to preload, you can now choose PreloadStatus.specifiedRangeCached(0, 5000) as a target state for preloaded items. This adds the specified range to your cache on disk instead of loading the data into memory. With this, you can provide a much larger range of items for preloading, as the ones further away from the current item no longer need to occupy memory. Note that this requires setting a Cache in DefaultPreloadManager.Builder (see the sketch after this list).
- Automatic memory management - We also updated our LoadControl interface to better handle the preload case, so you are now able to set an explicit upper memory limit for all preloaded items in memory. It's 144 MB by default, and you can configure the limit in DefaultLoadControl.Builder. The DefaultPreloadManager will automatically stop preloading once the limit is reached, and automatically release memory of lower-priority items if required.
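Putting the caching pieces together, a minimal sketch might look like this - setCache is an assumed builder method name, and the trailing lambda follows the existing TargetPreloadStatusControl shape of the DefaultPreloadManager API:

// Preload manager that caches the first 5 seconds of each item on disk.
// setCache is an assumption; PreloadStatus.specifiedRangeCached is the
// value named in this post.
val preloadManager =
  DefaultPreloadManager.Builder(context) { rankingData ->
    DefaultPreloadManager.PreloadStatus.specifiedRangeCached(0, 5_000)
  }
    .setCache(downloadCache) // a Cache is required for cached preloading
    .build()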
Rely on new simplified default behaviors in ExoPlayer
As always, we added lots of incremental improvements to ExoPlayer as well. To name just a few:
- Mute and unmute - We already had a setVolume method, but have now added the convenience mute and unmute methods to easily restore the previous volume without keeping track of it yourself (see the sketch after this list).
- Stuck player detection - In some rare cases the player can get stuck in a buffering or playing state without making any progress, for example due to codec issues or misconfigurations. Your users will be annoyed, but you never see these issues in your analytics! To make them more obvious, the player now reports a StuckPlayerException when it detects a stuck state.
- Wake lock by default - Wake lock management was previously opt-in, resulting in hard-to-find edge cases where playback progress could be delayed considerably when running in the background. Now this feature is opt-out, so you don't have to worry about it and can also remove any manual wake lock handling around playback.
- Simplified setting for CC button logic - Changing TrackSelectionParameters to say "turn subtitles on/off" was surprisingly hard to get right, so we added a simple boolean selectTextByDefault option for this use case.
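As a quick sketch of the first and last items - mute and unmute are the new Player methods named above, while setSelectTextByDefault is an assumed builder setter derived from the selectTextByDefault option name:

// mute() remembers the current volume...
player.mute()
// ...so a later unmute() restores it without any bookkeeping.
player.unmute()

// Turn subtitles on by default without hand-editing track overrides.
player.trackSelectionParameters =
  player.trackSelectionParameters
    .buildUpon()
    .setSelectTextByDefault(true) // assumed setter name
    .build()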
Simplify your media button preferences in MediaSession
Until now, defining your preferences for which buttons should show up in the media notification drawer, on Android Auto, or on Wear OS required defining custom commands and buttons, even if you simply wanted to trigger a standard player method. Media3 1.9.0 has new functionality to make this a lot simpler - you can now define your media button preferences with a standard player command, requiring no custom command handling at all.
session.setMediaButtonPreferences(
  listOf(
    CommandButton.Builder(CommandButton.ICON_FAST_FORWARD) // choose an icon
      .setDisplayName(R.string.skip_forward)
      .setPlayerCommand(Player.COMMAND_SEEK_FORWARD) // choose an action
      .build()
  )
)
Media button preferences with fast forward button
CompositionPlayer for real-time preview
The 1.9.0 release introduces CompositionPlayer under a new @ExperimentalApi annotation. The annotation indicates that it is available for experimentation, but is still under development. CompositionPlayer is a new component in the Media3 editing APIs designed for real-time preview of media edits. Built upon the familiar Media3 Player interface, CompositionPlayer allows users to see their changes in action before committing to the export process. It uses the same Composition object that you would pass to Transformer for exporting, streamlining the editing workflow by unifying the data model for preview and export.
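A minimal sketch of the preview flow, assuming the usual Media3 builder pattern and a setComposition entry point (method names may shift while the API is experimental):

// Preview the same Composition you would pass to Transformer.
val compositionPlayer = CompositionPlayer.Builder(context).build()
compositionPlayer.setComposition(composition) // assumed setter name
compositionPlayer.prepare()
compositionPlayer.play()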
We encourage you to start using CompositionPlayer and share your feedback, and keep an eye out for forthcoming posts and updates to the documentation for more details.
InAppMuxer as a default muxer in Transformer
New speed adjustment APIs
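Transformer now lets you vary playback speed per item: supply a SpeedProvider describing the speed at each timestamp, and attach it with EditedMediaItem.Builder.setSpeed, as shown below.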
val speedProvider = object : SpeedProvider {
  override fun getSpeed(presentationTimeUs: Long): Float {
    return speed // e.g. a constant speed captured from the enclosing scope
  }

  override fun getNextSpeedChangeTimeUs(timeUs: Long): Long {
    return C.TIME_UNSET // the speed never changes after this point
  }
}

val speedEffectItem = EditedMediaItem.Builder(mediaItem)
  .setSpeed(speedProvider)
  .build()
This new approach replaces the previous method of using Effects#createExperimentalSpeedChangingEffects(), which we've deprecated and will remove in a future release.
Introducing track types for EditedMediaItemSequence
You can now declare which track types a sequence contains. This is done via a new EditedMediaItemSequence.Builder constructor that accepts a set of track types (e.g., C.TRACK_TYPE_AUDIO, C.TRACK_TYPE_VIDEO).
To simplify creation, we've added new static convenience methods:
- EditedMediaItemSequence.withAudioFrom(List<EditedMediaItem>)
- EditedMediaItemSequence.withVideoFrom(List<EditedMediaItem>)
- EditedMediaItemSequence.withAudioAndVideoFrom(List<EditedMediaItem>)
We encourage you to migrate to the new constructor or the convenience methods for clearer and more reliable sequence definitions.
Example of creating a video-only sequence:
val videoOnlySequence =
  EditedMediaItemSequence.Builder(setOf(C.TRACK_TYPE_VIDEO))
    .addItem(editedMediaItem)
    .build()
---
Please get in touch via the Media3 issue tracker if you run into any bugs, or if you have questions or feature requests. We look forward to hearing from you!
19 Dec 2025 10:00pm GMT
Goodbye Mobile Only, Hello Adaptive: Three essential updates from 2025 for building adaptive apps
Posted by Fahd Imtiaz - Product Manager, Android Developer
Goodbye Mobile Only, Hello Adaptive: Three essential updates from 2025 for building adaptive apps
In 2025 the Android ecosystem has grown far beyond the phone. Today, developers have the opportunity to reach over 500 million active devices, including foldables, tablets, XR, Chromebooks, and compatible cars.
These aren't just additional screens; they represent a higher-value audience. We've seen that users who own both a phone and a tablet spend 9x more on apps and in-app purchases than those with just a phone. For foldable users, that average spend jumps to roughly 14x more*.
This engagement signals a necessary shift in development: goodbye mobile apps, hello adaptive apps.
To help you build for that future, we spent this year releasing tools that make adaptive the default way to build. Here are three key updates from 2025 designed to help you build these experiences.
Standardizing adaptive behavior with Android 16
To support this shift, Android 16 introduced significant changes to how apps can restrict orientation and resizability. On displays of at least 600dp, manifest and runtime restrictions are ignored, meaning apps can no longer lock themselves to a specific orientation or size. Instead, they fill the entire display window, ensuring your UI scales seamlessly across portrait and landscape modes.
Because this means your app context will change more frequently, it's important to verify that you are preserving UI state during configuration changes. While Android 16 offers a temporary opt-out to help you manage this transition, Android 17 (SDK 37) will make this behavior mandatory. To ensure your app behaves as expected under these new conditions, use the resizable emulator in Android Studio to test your adaptive layouts today.
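For example, here is a minimal Compose sketch of UI state that survives those configuration changes (the screen and state names are illustrative):

// rememberSaveable persists the value across configuration changes,
// so a rotation or resize doesn't reset the selected tab.
@Composable
fun LibraryScreen() {
  var selectedTab by rememberSaveable { mutableIntStateOf(0) }
  TabRow(selectedTabIndex = selectedTab) {
    // ...tabs that set selectedTab when clicked
  }
}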
Supporting screens beyond the tablet with Jetpack WindowManager 1.5.0
As devices evolve, our existing definitions of "large" need to evolve with them. In October, we released Jetpack WindowManager 1.5.0 to better support the growing number of very large screens and desktop environments.
On these surfaces, the standard "Expanded" layout, which usually fits two panes comfortably, often isn't enough. On a 27-inch monitor, two panes can look stretched and sparse, leaving valuable screen real estate unused. To solve this, WindowManager 1.5.0 introduced two new width window size classes: Large (1200dp to 1600dp) and Extra-large (1600dp+).
These new breakpoints signal when to switch to high-density interfaces. Instead of stretching a typical list-detail view, you can take advantage of the width to show three or even four panes simultaneously. Imagine an email client that comfortably displays your folders, the inbox list, the open message, and a calendar sidebar, all in a single view. Support for these window size classes was added to Compose Material 3 adaptive in the 1.2 release.
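In Compose, branching on those breakpoints might look like the following sketch - the lower-bound constants follow the WindowManager 1.5.0 naming, and the pane composables are hypothetical placeholders:

// Choose the pane count from the current window width.
@Composable
fun AdaptivePanes(sizeClass: WindowSizeClass) {
  when {
    sizeClass.isWidthAtLeastBreakpoint(WindowSizeClass.WIDTH_DP_EXTRA_LARGE_LOWER_BOUND) ->
      QuadPaneLayout() // 1600dp+: e.g. folders, inbox, message and calendar
    sizeClass.isWidthAtLeastBreakpoint(WindowSizeClass.WIDTH_DP_LARGE_LOWER_BOUND) ->
      TriplePaneLayout() // 1200dp to 1600dp
    else ->
      ListDetailLayout() // compact through expanded widths
  }
}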
Rethinking user journeys with Jetpack Navigation 3
Building a UI that morphs from a single phone screen to a multi-pane tablet layout used to require complex state management. This often meant forcing a navigation graph designed for single destinations to handle simultaneous views. First announced at I/O 2025, Jetpack Navigation 3 is now stable, introducing a new approach to handling user journeys in adaptive apps.
Built for Compose, Nav3 moves away from the monolithic graph structure. Instead, it provides decoupled building blocks that give you full control over your back stack and state. This solves the single source of truth challenge common in split-pane layouts. Because Nav3 uses the Scenes API, you can display multiple panes simultaneously without managing conflicting back stacks, simplifying the transition between compact and expanded views.
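At its core, Nav3 lets you own the back stack as ordinary Compose state while NavDisplay renders it. A minimal sketch, with hypothetical route keys and screens, and exact signatures treated as assumptions:

// The back stack is plain snapshot state that you own and mutate;
// Inbox and Message are hypothetical route keys.
@Composable
fun AppNavigation() {
  val backStack = remember { mutableStateListOf<Any>(Inbox) }
  NavDisplay(
    backStack = backStack,
    onBack = { backStack.removeLastOrNull() },
    entryProvider = entryProvider {
      entry<Inbox> { InboxScreen(onOpen = { id -> backStack.add(Message(id)) }) }
      entry<Message> { route -> MessageScreen(route.id) }
    },
  )
}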
A foundation for an adaptive future
This year delivered the tools you need, from optimizing for expansive layouts to the granular controls of WindowManager and Navigation 3. And, Android 16 began the shift toward truly flexible UI, with updates coming next year to deliver excellent adaptive experiences across all form factors. To learn more about adaptive development principles and get started, head over to d.android.com/adaptive-apps.
The tools are ready, and the users are waiting. We can't wait to see what you build!
*Source: internal Google data
19 Dec 2025 5:00pm GMT
18 Dec 2025
Android Developers Blog
Bringing Androidify to Wear OS with Watch Face Push

Posted by Garan Jenkin - Developer Relations Engineer
A few months ago we relaunched Androidify as an app for generating personalized Android bots. Androidify transforms your selfie photo into a playful Android bot using Gemini and Imagen.
However, given that Android spans multiple form factors, including our most recent addition, XR, we thought: how could we bring the fun of Androidify to Wear OS?
An Androidify watch face
As Androidify bots are highly personalized, the natural place to showcase them is the watch face. Not only is it the most frequently visible surface, it is also the most personal one, allowing you to represent who you are.

Personalized Androidify watch face, generated from selfie image
Androidify now has the ability to generate a watch face dynamically within the phone app and then send it to your watch, where it will automatically be set as your watch face. All of this happens within seconds!
High-level design
End-to-end flow for watch face creation and installation
In order to achieve the end-to-end experience, a number of technologies need to be combined, as shown in this high-level design diagram.
First of all, the user's avatar is combined with a pre-existing Watch Face Format template, which is then packaged into an APK. This is validated - for reasons which will be explained! - and sent to the watch.
On being received by the watch, the new Watch Face Push API - part of Wear OS 6 - is used to install and activate the watch face.
Let's explore the details:
Creating the watch face templates
The watch face is created from a template, itself designed in Watch Face Designer. This is our new Figma plugin that allows you to create Watch Face Format watch faces directly within Figma.
An Androidify watch face template in Watch Face Designer
The plugin allows the watch face to be exported in a range of different ways, including as Watch Face Format (WFF) resources. These can then be easily incorporated as assets within the Androidify app, for dynamically building the finalized watch face.
Packaging and validation
Once the template and avatar have been combined, the Portable Asset Compiler Kit (Pack) is used to assemble an APK.
In Androidify, Pack is used as a native library on the phone. For more details on how Androidify interfaces with the Pack library, see the GitHub repository.
As a final step before transmission, the APK is checked by the Watch Face Push validator.
This validator checks that the APK is suitable for installation. This includes checking the contents of the APK to ensure it is a valid watch face, as well as some performance checks. If it is valid, then the validator produces a token.
This token is required by the watch for installation.
Sending the watch face
The Androidify app on Wear OS uses WearableListenerService to listen for events on the Wearable Data Layer.
The phone app transfers the watch face by using a combination of MessageClient to set up the process, then ChannelClient to stream the APK.
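A minimal sketch of that phone-side transfer, using the kotlinx-coroutines-play-services await() extension; the channel path is a hypothetical constant, and nodeId identifies the paired watch (typically discovered via NodeClient):

// Stream the watch face APK to the watch over a Data Layer channel.
suspend fun sendWatchFace(context: Context, nodeId: String, apkFile: File) {
  val channelClient = Wearable.getChannelClient(context)
  val channel =
    channelClient.openChannel(nodeId, "/androidify/watchface").await()
  channelClient.getOutputStream(channel).await().use { output ->
    apkFile.inputStream().use { input -> input.copyTo(output) }
  }
}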
Installing the watch face on the watch
Once the watch face is received on the Wear OS device, the Androidify app uses the new Watch Face Push API to install the watch face:
val wfpManager =
  WatchFacePushManagerFactory.createWatchFacePushManager(context)
val response = wfpManager.listWatchFaces()
try {
  if (response.remainingSlotCount > 0) {
    wfpManager.addWatchFace(apkFd, token)
  } else {
    val slotId = response.installedWatchFaceDetails.first().slotId
    wfpManager.updateWatchFace(slotId, apkFd, token)
  }
} catch (a: WatchFacePushManager.AddWatchFaceException) {
  return WatchFaceInstallError.WATCH_FACE_INSTALL_ERROR
} catch (u: WatchFacePushManager.UpdateWatchFaceException) {
  return WatchFaceInstallError.WATCH_FACE_INSTALL_ERROR
}
Androidify uses either the addWatchFace or updateWatchFace method, depending on the scenario. Watch Face Push defines a concept of "slots": the number of watch faces a given app can have installed at any time. For Wear OS 6, this value is 1.
Androidify's approach is to install the watch face if there is a free slot, and if not, any existing watch face is swapped out for the new one.
Setting the active watch face
Installing the watch face programmatically is a great step, but Androidify seeks to ensure the watch face is also the active watch face.
Watch Face Push introduces a new runtime permission which must be granted in order for apps to be able to achieve this:
com.google.wear.permission.SET_PUSHED_WATCH_FACE_AS_ACTIVE
Once this permission has been granted, the wfpManager.setWatchFaceAsActive() method can be called to set an installed watch face as the active watch face.
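A minimal sketch of guarding that one-shot activation behind the permission - requestPermissionLauncher is a hypothetical ActivityResultLauncher registered elsewhere, and passing the slot ID from the install step is an assumption:

// Activate the pushed watch face only if the runtime permission is held;
// otherwise ask for it.
val permission = "com.google.wear.permission.SET_PUSHED_WATCH_FACE_AS_ACTIVE"
if (ContextCompat.checkSelfPermission(context, permission) ==
    PackageManager.PERMISSION_GRANTED
) {
  wfpManager.setWatchFaceAsActive(slotId) // slot ID from the install step (assumed)
} else {
  requestPermissionLauncher.launch(permission)
}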
However, there are a number of considerations that Androidify has to navigate:
- setWatchFaceAsActive can only be used once.
- SET_PUSHED_WATCH_FACE_AS_ACTIVE cannot be re-requested after being denied by the user.
- Androidify might already be in control of the active watch face.
For more details see how Androidify implements the set active logic.
Get started with Watch Face Push for Wear OS
Watch Face Push is a versatile API, equally suited to enhancing Androidify as it is to building fully-featured watch face marketplaces.
Perhaps you have an existing phone app and are looking for opportunities to further engage and delight your users?
Or perhaps you're an existing watch face developer looking to create your own community and gallery through releasing a marketplace app?
Take a look at the Watch Face Push resources on the Android Developers site, and also check out the accompanying video for a greater-depth look at how we brought Androidify to Wear OS!
We're looking forward to what you'll create with Watch Face Push!
18 Dec 2025 5:00pm GMT