15 Jan 2026
Android Developers Blog
LLM flexibility, Agent Mode improvements, and new agentic experiences in Android Studio Otter 3 Feature Drop
Posted by Sandhya Mohan, Senior Product Manager and Trevor Johns, Developer Relations Engineer
- Bring Your Own Model: You can now use any LLM to power the AI functionality in Android Studio.
- Agent Mode Enhancements: You can now more easily have Agent Mode interact with your app on devices, review and accept suggested changes, and manage multiple conversation threads.
- Run user journey tests using natural language: with Journeys in Android Studio.
- Enable Agent Mode to connect to more tools: including the ability to connect to remote servers via MCP.
- Build, iterate and test your UI: with UI agentic experiences in Android Studio.
- Build deep links using natural language: with the new app links assistant.
- Debug R8 optimized code: with Automatic Logcat retracing.
- Simplify Android library modules: with the Fused library plugin.
Here's a deep dive into what's new:
Bring Your Own Model (BYOM)
Every developer has a unique workflow when using AI, and different companies have different policies on AI model usage. With this release, Android Studio now brings you more flexibility by allowing you to choose the LLM that powers the AI functionality in Android Studio, giving you more control over performance, privacy, and cost.
Use a remote model
You can now integrate remote models, such as OpenAI's GPT or Anthropic's Claude, directly into Android Studio. This allows you to leverage your preferred model provider without changing your IDE. To get started, configure a remote model provider in Settings by adding your API endpoint and key. Once configured, you can select your custom model directly from the picker in the AI chat window.
Use a local model
Use your Gemini API key
While Android Studio includes access to a default Gemini model with generous quotas at no cost, some developers need more. By adding your Gemini API key, Android Studio can directly access all the latest Gemini models available from Google.
For example, this allows you to use the most recent Gemini 3 Pro and Gemini 3 Flash models (among others) with expanded context windows and quota. This is especially useful for developers who are using Agent Mode for extended coding sessions, where this additional processing power can provide higher fidelity responses.
Agent Mode enhancements
Run your app and interact with it on devices
Agent Mode can now deploy an application to the connected device, inspect what is currently shown on the screen, take screenshots, check Logcat for errors, and interact with the running application. This lets the agent help you with changes or fixes that involve re-running the application, checking for errors, and verifying that a particular update was made successfully (for example, by taking and reviewing screenshots).
Find and review changes using the changes drawer
Manage multiple conversation threads
Journeys for Android Studio
Support for remote MCP servers
Supercharge your UI development with Agent Mode
Create new UI from a design mock
Match your UI with a target image
Iterate on your UI with natural language
Find and fix UI quality issues
Beyond iterating on your UI, Gemini also helps streamline your development environment.
To accelerate your setup, you can:
- Generate Compose Previews: This feature is now enhanced by Agent Mode to provide more accurate results. When working in a file that has Composable functions but no @Preview annotations, you can right-click on the Composable and select Gemini > Generate [Composable name] Preview. The agent now analyzes your Composable more thoroughly to generate the necessary boilerplate with the correct parameters and to verify that a successfully rendered preview is added.
- Fix Preview rendering errors: When a Compose Preview fails to render, Gemini can now analyze the error message and your code to find the root cause and apply a fix.
App Links Assistant
The App Links Assistant now integrates with Agent Mode to automate the creation of deep link logic, simplifying one of the most time-consuming steps of implementation. Instead of manually writing code to parse incoming intents and navigate users to the correct screen, you can now let Gemini generate the necessary code and tests. Gemini presents a diff view of the suggested code changes for your review and approval, streamlining the process of handling deep links and ensuring users are seamlessly directed to the right content in your app.
To get started, open the App Links Assistant through the Tools menu, then choose Create Applink. In the second step, Add logic to handle the intent, select Generate code with AI assistance. If a sample URL is available, enter it, and then click Insert Code.
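For context, the intent-parsing code this flow produces typically looks something like the sketch below. The URL pattern and the navigation helper here are hypothetical placeholders, not necessarily what the assistant will generate for your app:

```kotlin
import android.content.Intent
import android.net.Uri
import android.os.Bundle
import androidx.appcompat.app.AppCompatActivity

class MainActivity : AppCompatActivity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        handleDeepLink(intent)
    }

    private fun handleDeepLink(intent: Intent) {
        // Deep link intents carry the matched URL in intent.data.
        val uri: Uri = intent.data ?: return
        // e.g. https://example.com/product/42 -> show the product screen
        if (uri.pathSegments.firstOrNull() == "product") {
            val productId = uri.lastPathSegment
            // navigateToProduct(productId)  // hypothetical navigation helper
        }
    }
}
```

Agent Mode generates both this handling logic and tests for it, and shows the result as a diff for review.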
Automatic Logcat Retracing
Debugging R8-optimized code just became seamless. Previously, when R8 was enabled (minifyEnabled = true in your build.gradle.kts file), it would obfuscate stack traces, changing class names, methods, and line numbers. To find the source of a crash, developers had to manually use the R8 retrace command line tool.
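For reference, R8 minification is enabled per build type in the module's Gradle configuration. A minimal sketch of a typical release setup (note that the Kotlin DSL spells the flag isMinifyEnabled; the ProGuard file names below are the common defaults):

```kotlin
// build.gradle.kts (module level) - sketch of a typical release configuration
android {
    buildTypes {
        release {
            isMinifyEnabled = true  // enables R8 shrinking and obfuscation
            proguardFiles(
                getDefaultProguardFile("proguard-android-optimize.txt"),
                "proguard-rules.pro"
            )
        }
    }
}
```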
Starting with Android Studio Otter 3 Feature Drop and AGP versions 8.12 and above, this extra step is no longer necessary. Logcat now automatically detects and retraces R8-processed stack traces, so you can see the original, human-readable stack trace directly in the IDE. This provides a much-improved debugging experience with no extra work required.
Fused Library Plugin: Publish multiple Android libraries as one
Get started
Ready to dive in and accelerate your development? Download Android Studio Otter 3 Feature Drop and start exploring these powerful new features today!
As always, your feedback is crucial to us. Check known issues, report bugs, and be part of our vibrant community on LinkedIn, Medium, YouTube, or X. Let's build the future of Android apps together!
15 Jan 2026 5:18pm GMT
08 Jan 2026
Android Developers Blog
Ultrahuman launches features 15% faster with Gemini in Android Studio
Posted by Amrit Sanjeev, Developer Relations Engineer and Trevor Johns, Developer Relations Engineer
Ultrahuman is a consumer health-tech startup that provides daily well-being insights to users based on biometric data from the company's wearables, like the RING Air and the M1 Live Continuous Glucose Monitor (CGM). The Ultrahuman team leaned on Gemini in Android Studio's contextually aware tools to streamline and accelerate their development process.
Ultrahuman's app is maintained by a lean team of just eight developers. They prioritize building features that their users love, but also carry a backlog of bugs and performance improvements that take significant time to work through. The team needed to scale up their feature output while still addressing that performance work, all without increasing headcount. One of their biggest opportunities was reducing the time and effort spent on the backlog: every hour saved on maintenance could be reinvested into building features for their users.
Solving technical hurdles and boosting performance with Gemini
The team integrated Gemini in Android Studio to see if the AI-enhanced tools could improve their workflow by handling many Android tasks. First, the team turned to the Gemini chat inside Android Studio. The goal was to prototype a GATT server implementation for their application's Bluetooth Low Energy (BLE) connectivity.
As Ultrahuman's Android Development Lead, Arka, noted, "Gemini helped us reach a working prototype in under an hour, something that would otherwise have taken us several hours." The BLE implementation provided by Gemini worked perfectly for syncing large amounts of health sensor data while the app ran in the background, improving the data syncing process and saving battery life on both the user's Android phone and Ultrahuman's paired wearable device.
Beyond this core challenge, Gemini also proved invaluable for finding algorithmic optimizations in a custom open-source library, pointing to helpful documentation, assisting with code commenting, and analyzing crash logs. The Ultrahuman team also used code completion to help them breeze through writing otherwise repetitive code, Jetpack Compose Preview Generation to enable rapid iteration during UI design, and Agent Mode for managing complex, project-wide changes, such as rendering a new stacked bar graph that mapped to backend data models and UI models.
Transforming productivity and accelerating feature delivery
These improvements have saved the team dozens of hours each week. This reclaimed time is being used to deliver new features to Ultrahuman's beta users 10-15% faster. For example, the team built a new in-app AI assistant for users, powered by Gemini 2.5 Flash. The UI design, architecture, and parts of the user experience for this new feature were initially suggested by Gemini in Android Studio, showcasing a full-circle AI-assisted development process.
Accelerate your Android development with Gemini
Gemini's expert Android advice, closely integrated throughout Android Studio, helps Android developers spend less time digging through documentation and writing boilerplate code, freeing up more time to innovate.
Learn how Gemini in Android Studio can help your team resolve complex issues, streamline workflows, and ship new features faster.
08 Jan 2026 10:00pm GMT
19 Dec 2025
Android Developers Blog
Media3 1.9.0 - What’s new
Posted by Kristina Simakova, Engineering Manager
- media3-inspector - Extract metadata and frames outside of playback
- media3-ui-compose-material3 - Build a basic Material3 Compose Media UI in just a few steps
- media3-cast - Automatically handle transitions between Cast and local playbacks
- media3-decoder-av1 - Consistent AV1 playback with the rewritten extension decoder based on the dav1d library
We also added caching and memory management improvements to PreloadManager, and provided several new ExoPlayer, Transformer and MediaSession simplifications.
This release also gives you the first experimental access to CompositionPlayer to preview media edits.
Read on to find out more, and as always, please check out the full release notes for a comprehensive overview of the changes in this release.
Extract metadata and frames outside of playback
There are many cases where you want to inspect media without starting playback. For example, you might want to detect which formats a file contains or what its duration is, or to retrieve thumbnails. The new media3-inspector module combines all utilities to inspect media without playback in one place:
- MetadataRetriever to read duration, format and static metadata from a MediaItem.
- FrameExtractor to get frames or thumbnails from an item.
- MediaExtractorCompat as a direct replacement for the Android platform MediaExtractor class, to get detailed information about samples in the file.
suspend fun extractThumbnail(mediaItem: MediaItem) {
    FrameExtractor.Builder(context, mediaItem).build().use { frameExtractor ->
        val thumbnail = frameExtractor.getThumbnail().await()
    }
}
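Alongside FrameExtractor, the instance-based MetadataRetriever from the same module can be used in a similar pattern. A sketch, assuming the builder and Futures-based retrieve methods described in the Media3 docs (treat exact names as illustrative):

```kotlin
// Sketch: reading duration and track groups without starting playback.
suspend fun inspectMedia(context: Context, mediaItem: MediaItem) {
    MetadataRetriever.Builder(context, mediaItem).build().use { retriever ->
        val durationUs = retriever.retrieveDurationUs().await()
        val trackGroups = retriever.retrieveTrackGroups().await()
        Log.d("Inspector", "durationUs=$durationUs tracks=${trackGroups.length}")
    }
}
```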
Build a basic Material3 Compose Media UI in just a few steps
In previous releases we started providing connector code between Compose UI elements and your Player instance. With Media3 1.9.0, we added a new module media3-ui-compose-material3 with fully-styled Material3 buttons and content elements. They allow you to build a media UI in just a few steps, while providing all the flexibility to customize style. If you prefer to build your own UI style, you can use the building blocks that take care of all the update and connection logic, so you only need to concentrate on designing the UI element. Please check out our extended guide pages for the Compose UI modules. We are also still working on even more Compose components, like a prebuilt seek bar, a complete out-of-the-box replacement for PlayerView, as well as subtitle and ad integration.
@Composable
fun SimplePlayerUI(player: Player, modifier: Modifier = Modifier) {
    Column(modifier) {
        ContentFrame(player) // Video surface and shutter logic
        Row(Modifier.align(Alignment.CenterHorizontally)) {
            SeekBackButton(player) // Simple controls
            PlayPauseButton(player)
            SeekForwardButton(player)
        }
    }
}
Simple Compose player UI with out-of-the-box elements
Automatically handle transitions between Cast and local playbacks
When you set up your MediaSession, simply build a CastPlayer around your ExoPlayer and add a MediaRouteButton to your UI and you're done!
// MediaSession setup with CastPlayer
val exoPlayer = ExoPlayer.Builder(context).build()
val castPlayer = CastPlayer.Builder(context).setLocalPlayer(exoPlayer).build()
val session = MediaSession.Builder(context, castPlayer).build()

// MediaRouteButton in UI
@Composable
fun UIWithMediaRouteButton() {
    MediaRouteButton()
}
New CastPlayer integration in Media3 session demo app
Consistent AV1 playback with the rewritten extension based on dav1d
The 1.9.0 release contains a completely rewritten AV1 extension module based on the popular dav1d library. As with all extension decoder modules, please note that it requires building from source to bundle the relevant native code correctly. Bundling a decoder provides consistency and format support across all devices, but because it runs the decoding in your process, it's best suited for content you can trust.
Integrate caching and memory management into PreloadManager
- Caching support - When defining how far to preload, you can now choose PreloadStatus.specifiedRangeCached(0, 5000) as a target state for preloaded items. This will add the specified range to your cache on disk instead of loading the data into memory. With this, you can provide a much larger range of items for preloading, as the ones further away from the current item no longer need to occupy memory. Note that this requires setting a Cache in DefaultPreloadManager.Builder.
- Automatic memory management - We also updated our LoadControl interface to better handle the preload case, so you are now able to set an explicit upper memory limit for all preloaded items in memory. It's 144 MB by default, and you can configure the limit in DefaultLoadControl.Builder. The DefaultPreloadManager will automatically stop preloading once the limit is reached, and automatically releases memory of lower-priority items if required.
Rely on new simplified default behaviors in ExoPlayer
As always, we added lots of incremental improvements to ExoPlayer as well. To name just a few:
- Mute and unmute - We already had a setVolume method, but have now added the convenience mute and unmute methods to easily restore the previous volume without keeping track of it yourself.
- Stuck player detection - In some rare cases the player can get stuck in a buffering or playing state without making any progress, for example, due to codec issues or misconfigurations. Your users will be annoyed, but you never see these issues in your analytics! To make this more obvious, the player now reports a StuckPlayerException when it detects a stuck state.
- Wakelock by default - Wake lock management was previously opt-in, resulting in hard-to-find edge cases where playback progress could be significantly delayed when running in the background. Now this feature is opt-out, so you don't have to worry about it and can also remove all manual wake lock handling around playback.
- Simplified setting for CC button logic - Changing TrackSelectionParameters to say "turn subtitles on/off" was surprisingly hard to get right, so we added a simple boolean selectTextByDefault option for this use case.
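A quick sketch of the conveniences above. The mute and unmute calls follow the release notes; the selectTextByDefault setter name is an assumption based on the option described, so check the API reference for the exact spelling:

```kotlin
fun applyNewConveniences(player: Player) {
    // Mute, then restore the previous volume without tracking it yourself.
    player.mute()
    player.unmute()

    // Turn text (subtitle) track selection on by default.
    // Setter name assumed from the selectTextByDefault option above.
    player.trackSelectionParameters = player.trackSelectionParameters
        .buildUpon()
        .setSelectTextByDefault(true)
        .build()
}
```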
Simplify your media button preferences in MediaSession
Until now, defining your preferences for which buttons should show up in the media notification drawer on Android Auto or Wear OS required defining custom commands and buttons, even if you simply wanted to trigger a standard player method. Media3 1.9.0 has new functionality to make this a lot simpler: you can now define your media button preferences with a standard player command, requiring no custom command handling at all.
session.setMediaButtonPreferences(listOf(
CommandButton.Builder(CommandButton.ICON_FAST_FORWARD) // choose an icon
.setDisplayName(R.string.skip_forward)
.setPlayerCommand(Player.COMMAND_SEEK_FORWARD) // choose an action
.build()
))
Media button preferences with fast forward button
CompositionPlayer for real-time preview
The 1.9.0 release introduces CompositionPlayer under a new @ExperimentalApi annotation. The annotation indicates that it is available for experimentation, but is still under development. CompositionPlayer is a new component in the Media3 editing APIs designed for real-time preview of media edits. Built upon the familiar Media3 Player interface, CompositionPlayer allows users to see their changes in action before committing to the export process. It uses the same Composition object that you would pass to Transformer for exporting, streamlining the editing workflow by unifying the data model for preview and export.
We encourage you to start using CompositionPlayer and share your feedback, and keep an eye out for forthcoming posts and updates to the documentation for more details.
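As a rough sketch, previewing a Composition with CompositionPlayer could look like the following. The API is experimental, so the builder and setter names below may change; treat them as illustrative:

```kotlin
fun previewComposition(context: Context, composition: Composition, view: PlayerView) {
    // Build the experimental preview player and attach it to an existing view.
    val player = CompositionPlayer.Builder(context).build()
    view.player = player

    // Use the same Composition you would pass to Transformer for export.
    player.setComposition(composition)
    player.prepare()
    player.play()
}
```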
InAppMuxer as a default muxer in Transformer
New speed adjustment APIs
val speedProvider = object : SpeedProvider {
    override fun getSpeed(presentationTimeUs: Long): Float {
        return speed // the speed to apply from this timestamp onward
    }

    override fun getNextSpeedChangeTimeUs(timeUs: Long): Long {
        return C.TIME_UNSET // no further speed changes
    }
}

val speedEffectItem = EditedMediaItem.Builder(mediaItem)
    .setSpeed(speedProvider)
    .build()
This new approach replaces the previous method of using Effects#createExperimentalSpeedChangingEffects(), which we've deprecated and will remove in a future release.
Introducing track types for EditedMediaItemSequence
This is done via a new EditedMediaItemSequence.Builder constructor that accepts a set of track types (e.g., C.TRACK_TYPE_AUDIO, C.TRACK_TYPE_VIDEO).
To simplify creation, we've added new static convenience methods:
- EditedMediaItemSequence.withAudioFrom(List<EditedMediaItem>)
- EditedMediaItemSequence.withVideoFrom(List<EditedMediaItem>)
- EditedMediaItemSequence.withAudioAndVideoFrom(List<EditedMediaItem>)
We encourage you to migrate to the new constructor or the convenience methods for clearer and more reliable sequence definitions.
Example of creating a video-only sequence:
val videoOnlySequence =
    EditedMediaItemSequence.Builder(setOf(C.TRACK_TYPE_VIDEO))
        .addItem(editedMediaItem)
        .build()
---
Please get in touch via the Media3 issue tracker if you run into any bugs, or if you have questions or feature requests. We look forward to hearing from you!
19 Dec 2025 10:00pm GMT