15 Jan 2026

Android Developers Blog

LLM flexibility, Agent Mode improvements, and new agentic experiences in Android Studio Otter 3 Feature Drop

Posted by Sandhya Mohan, Senior Product Manager and Trevor Johns, Developer Relations Engineer


We are excited to announce that Android Studio Otter 3 Feature Drop is now stable! This feature-packed release brings a huge update to your agentic workflows in Android Studio, and offers you more flexibility and control for how you use AI to help you build Android apps.
  • Bring Your Own Model: You can now use any LLM to power the AI functionality in Android Studio.
  • Agent Mode Enhancements: You can now more easily have Agent Mode interact with your app on devices, review and accept suggested changes, and manage multiple conversation threads.
  • Run user journey tests using natural language: with Journeys in Android Studio.
  • Enable Agent Mode to connect to more tools: including the ability to connect to remote servers via MCP.
  • Build, iterate and test your UI: with UI agentic experiences in Android Studio.
  • Build deep links using natural language: with the new app links assistant.
  • Debug R8 optimized code: with Automatic Logcat retracing.
  • Simplify Android library modules: with the Fused library plugin.


Here's a deep dive into what's new:

Bring Your Own Model (BYOM)

Every developer has a unique workflow when using AI, and different companies have different policies on AI model usage. With this release, Android Studio now brings you more flexibility by allowing you to choose the LLM that powers the AI functionality in Android Studio, giving you more control over performance, privacy, and cost.

Use a remote model

You can now integrate remote models, such as OpenAI's GPT, Anthropic's Claude, or a similar model, directly into Android Studio. This allows you to leverage your preferred model provider without changing your IDE. To get started, configure a remote model provider in Settings by adding your API endpoint and key. Once configured, you can select your custom model directly from the picker in the AI chat window.

Enter the remote model provider information.

Use a local model

If you have limited internet connectivity, strict data privacy requirements, or a desire to experiment with open-source research, Android Studio now supports local models via providers like LM Studio or Ollama. While Gemini in Android Studio remains the default recommendation (tuned specifically for Android development with full context awareness), if you have a specific model preference, Android Studio supports it.
Model picker in Android Studio.

A local model offers an alternative to the LLM support built into Android Studio, and typically requires significant local system RAM and hard drive space to run well. However, Gemini in Android Studio provides the best Android development experience because Gemini is tuned for Android and supports all features of Android Studio. With Gemini, you can choose from a variety of models for your Android development tasks, including the no-cost default model or models accessed with a paid Gemini API key.

Use your Gemini API key


While Android Studio includes access to a default Gemini model with generous quotas at no cost, some developers need more. By adding your Gemini API key, Android Studio can directly access all the latest Gemini models available from Google.


For example, this allows you to use the most recent Gemini 3 Pro and Gemini 3 Flash models (among others) with expanded context windows and quota. This is especially useful for developers who are using Agent Mode for extended coding sessions, where this additional processing power can provide higher fidelity responses.


You can also read more about how we're rolling out Gemini 3 to all Android Studio users, including Gemini Code Assist subscribers and developers accessing the default Gemini in Android Studio model at no cost.

Agent Mode enhancements

Agent Mode is the semi-autonomous AI assistant in Android Studio that helps with your software development, and it's used by many developers, including the Ultrahuman team. Get more out of Agent Mode with these new updates.

Run your app and interact with it on devices

Agent Mode can now deploy an application to the connected device, inspect what is currently shown on the screen, take screenshots, check Logcat for errors, and interact with the running application. This lets the agent help you with changes or fixes that involve re-running the application, checking for errors, and verifying that a particular update was made successfully (for example, by taking and reviewing screenshots).


Agent mode uses device actions to deploy and verify changes.

Find and review changes using the changes drawer

You can now see and manage all changes made by the AI agent using the changes drawer. When the agent makes changes to your codebase, you can see the files that were edited in Files to review. From there, you can keep or revert the changes individually or all together. Click an individual file in the drawer to see the code diff in the editor and make refinements if needed. With the changes drawer, you can keep track of edits made by the agent during your chat and revisit specific changes without scrolling back through your conversation history.


See all the files that the agent has proposed edits to in the changes drawer.
Note: If the Don't ask to edit files setting is disabled in Agent Options, Agent Mode will request permission for every individual change. Each change must be accepted before it appears in the changes drawer. To allow multiple file edits to appear in the drawer simultaneously, enable the Don't ask to edit files option.


Accept a change to add it to the changes drawer.

Manage multiple conversation threads


You can now organize your conversations with Gemini in Android Studio into multiple threads. This lets you create a new chat or agent thread when you need to start with a clean slate, and you can go back to older conversations in the history tab. Using separate threads for each distinct task can improve response quality by limiting the scope of the AI's context to only the topic at hand.



To start a new thread, click New Conversation (the plus icon). To see your conversation history, click Recent Chats (the word bubble icon).


See prior conversations in the "Recent Chats" tab.



Your conversation history is saved to your account, so if you have to sign out or switch accounts you can resume right where you left off when you come back.

Journeys for Android Studio

Running end-to-end UI tests can improve confidence that you're shipping a high-quality app to production, but those tests can be difficult to write and maintain, brittle, and limited in what they can cover. Journeys for Android Studio leverages the reasoning and vision capabilities of Gemini to let you write and maintain end-to-end UI tests using natural language instructions, and it's now available in the latest stable release of Android Studio when you enable it from Studio Labs in your Android Studio Settings.

Journeys for Android Studio.

These natural language instructions are converted into interactions that Gemini performs directly on your app. This not only makes your tests easier to write and understand, but also enables you to define complex assertions that Gemini evaluates based on what it "sees" on the device screen. Because Gemini reasons about how to achieve your goals, these tests are more resilient to subtle changes in your app's layout, significantly reducing flaky tests when running against different app versions or device configurations.

Journeys for Android Studio.

You can write and run journeys directly from Android Studio against any local or remote device. The IDE provides a new editor experience for crafting your test steps in an XML file, using either a code view or a dedicated design view. When you run a journey, Android Studio provides rich, detailed results that help you follow Gemini's execution. The test panel breaks down the entire journey into its discrete steps, showing you screenshots for each action, what action was taken, and Gemini's reasoning for why it took that action, making debugging and validation clearer than ever. And because journeys are run as Gradle tasks, you can run them from the command line after you authenticate with a Google Cloud Project.

Support for remote MCP servers

Android Studio now lets you connect directly to remote Model Context Protocol (MCP) servers such as Figma, Notion, Canva, Linear, and more. This significantly reduces your context switching since it enables the AI agent in Android Studio to leverage external tools, helping you stay in your flow. For example, you can connect to Figma's remote MCP server to access files and provide this information to Agent Mode, generating more accurate code from your designs. To learn more about how to add an MCP server, see Add an MCP server.


Connect to the Figma remote MCP server in Android Studio Settings.

Quickly add a screen to your app using the Figma remote MCP server.

Supercharge your UI development with Agent Mode

Gemini in Android Studio is now integrated into the UI development workflow directly from within the Compose Preview panel, helping you go from design to a high-quality implementation faster. These new agentic capabilities are designed to assist you at every stage of development, from initial code generation to iteration, refinement, and debugging, with entry points in the context of your work.

Create new UI from a design mock


Accelerate your initial UI implementation by generating Compose code directly from a design mock. Simply click Generate Code From Screenshot in an empty Preview panel, and Gemini will use the image to generate a starting implementation, saving you from writing boilerplate from scratch.

Generate code from a screenshot in an empty Preview panel.

Example turning design into Compose code.


Match your UI with a target image


Once you have an initial implementation, you can iteratively refine it to be pixel-perfect. Right-click your Compose Preview and select AI Actions > Match UI to Target Image. Upload a reference design, and the agent will suggest code changes to make your UI match the design as closely as possible.

Example of using "Match UI to Target Image"

Iterate on your UI with natural language

For more specific or creative changes, right-click your preview and select AI Actions > Change UI. This capability now leverages Agent Mode to validate the results, making it more powerful and accurate. You can use natural language prompts like "change the button color to blue" or "add padding around this text," and Gemini will apply the code modifications instantly.

Example of using "Change UI"

Find and fix UI quality issues

Verifying that your UI is high-quality and accessible is a critical final step. The AI Actions > Fix all UI check issues tool audits your UI for common problems, such as accessibility issues. The agent then proposes and applies fixes to resolve the detected issues.

Entry point to trigger "Fix all UI check issues"

You can also find the same functionality by using the Fix with AI button in Compose UI check mode:

"Fix with AI" in UI Check mode


The features mentioned above are also accessible via the toolbar icon in the Preview panel:

Second entry point to UI development AI features

Beyond iterating on your UI, Gemini also helps streamline your development environment.

To accelerate your setup, you can:
  • Generate Compose Previews: This feature is now enhanced by Agent Mode to provide more accurate results. When working in a file that has Composable functions but no @Preview annotations, you can right-click the Composable and select Gemini > Generate [Composable name] Preview. The agent now analyzes your Composable more thoroughly to generate the necessary boilerplate with correct parameters, helping ensure that a successfully rendered preview is added; the sketch below shows the kind of boilerplate this produces.

Entry point to generate Compose Preview
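
For reference, the boilerplate the agent generates is typically a small annotated function along these lines (a hand-written illustration rather than actual agent output; MyScreen and its parameters are hypothetical):

@Preview(showBackground = true)
@Composable
fun MyScreenPreview() {
    // Sample arguments are filled in so the preview renders successfully.
    MyScreen(title = "Hello, preview!", onButtonClick = {})
}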


  • Fix Preview rendering errors: When a Compose Preview fails to render, Gemini can now analyze the error message and your code to find the root cause and apply a fix.

Using "Fix with AI" on Preview render error

App Links Assistant

The App Links Assistant now integrates with Agent Mode to automate the creation of deep link logic, simplifying one of the most time-consuming steps of implementation. Instead of manually writing code to parse incoming intents and navigate users to the correct screen, you can now let Gemini generate the necessary code and tests. Gemini presents a diff view of the suggested code changes for your review and approval, streamlining the process of handling deep links and ensuring users are seamlessly directed to the right content in your app.


To get started, open the App Links Assistant from the Tools menu, then choose Create Applink. In the second step, Add logic to handle the intent, select Generate code with AI assistance. If a sample URL is available, enter it, and then click Insert Code.


App Links Assistant

Automatic Logcat Retracing

Debugging R8-optimized code just became seamless. Previously, when R8 was enabled (isMinifyEnabled = true in your build.gradle.kts file), it would obfuscate stack traces, changing class names, methods, and line numbers. To find the source of a crash, developers had to manually use the R8 retrace command line tool.
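
For reference, a minimal release build type with R8 enabled looks like this in the Kotlin DSL (a generic example, not tied to any particular project):

android {
    buildTypes {
        release {
            // Enables R8 code shrinking, obfuscation, and optimization.
            isMinifyEnabled = true
            proguardFiles(
                getDefaultProguardFile("proguard-android-optimize.txt"),
                "proguard-rules.pro"
            )
        }
    }
}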

Starting with Android Studio Otter 3 Feature Drop with AGP versions 8.12 and above, this extra step is no longer necessary. Logcat now automatically detects and retraces R8-processed stack traces, so you can see the original, human-readable stack trace directly in the IDE. This provides a much-improved debugging experience with no extra work required.

Logcat now automatically detects and retraces R8-processed stack traces

Fused Library Plugin: Publish multiple Android libraries as one


The new Fused Library plugin bundled with Android Gradle Plugin 9.0 allows you to package multiple Android library modules into a single, publishable Android Library (AAR). This was one of the most requested features for the Android Gradle Plugin, and we are making it available for you today. The plugin enables you to modularize your code and resources internally while simplifying integration for your users by exposing only a single dependency. In addition to streamlining project setup and version management, distributing a fused library can help reduce library size through improved code shrinking and offer better control over your internal implementation details. To learn more about the Fused Library plugin, see Publish multiple Android libraries as one with Fused Library.
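
As a rough sketch, a fused library module's build file pulls in the libraries to merge via its dependencies block, along these lines (the plugin id, version, DSL block, and module names below are assumptions based on the Fused Library documentation; check the guide linked above for the exact syntax):

// build.gradle.kts of the fused library module (illustrative names)
plugins {
    // Plugin id and version are assumptions; verify against the official guide.
    id("com.android.fusedlibrary") version "9.0.0"
}

androidFusedLibrary {
    namespace = "com.example.fusedlibrary"
    minSdk = 21
}

dependencies {
    // Library modules merged into the single published AAR.
    include(project(":feature-core"))
    include(project(":feature-ui"))
}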



Get started

Ready to dive in and accelerate your development? Download Android Studio Otter 3 Feature Drop and start exploring these powerful new features today!


As always, your feedback is crucial to us. Check known issues, report bugs, and be part of our vibrant community on LinkedIn, Medium, YouTube, or X. Let's build the future of Android apps together!




08 Jan 2026

Android Developers Blog

Ultrahuman launches features 15% faster with Gemini in Android Studio

Posted by Amrit Sanjeev, Developer Relations Engineer and Trevor Johns, Developer Relations Engineer





Ultrahuman is a consumer health-tech startup that provides daily well-being insights to users based on biometric data from the company's wearables, like the RING Air and the M1 Live Continuous Glucose Monitor (CGM). The Ultrahuman team leaned on Gemini in Android Studio's contextually aware tools to streamline and accelerate their development process.


Ultrahuman's app is maintained by a lean team of just eight developers. They prioritize building features that their users love, but they also have a backlog of bugs and performance improvements that takes a lot of time to work through. The team needed to scale up their output of feature improvements and handle that performance work without increasing headcount. One of their biggest opportunities was reducing the time and effort spent on the backlog: every hour saved on maintenance could be reinvested into building features for their users.





Solving technical hurdles and boosting performance with Gemini


The team integrated Gemini in Android Studio to see if its AI-enhanced tools could improve their workflow by taking on routine Android development tasks. First, the team turned to the Gemini chat inside Android Studio. The goal was to prototype a GATT Server implementation for their application's Bluetooth Low Energy (BLE) connectivity.





As Ultrahuman's Android Development Lead, Arka, noted, "Gemini helped us reach a working prototype in under an hour, something that would have otherwise taken us several hours." The BLE implementation provided by Gemini worked perfectly for syncing large amounts of health sensor data while the app ran in the background, improving the data syncing process and saving battery life on both the user's Android phone and Ultrahuman's paired wearable device.


Beyond this core challenge, Gemini also proved invaluable for finding algorithmic optimizations in a custom open-source library, pointing to helpful documentation, assisting with code commenting, and analyzing crash logs. The Ultrahuman team also used code completion to help them breeze through writing otherwise repetitive code, Jetpack Compose Preview Generation to enable rapid iteration during UI design, and Agent Mode for managing complex, project-wide changes, such as rendering a new stacked bar graph that mapped to backend data models and UI models.





Transforming productivity and accelerating feature delivery


These improvements have saved the team dozens of hours each week. This reclaimed time is being used to deliver new features to Ultrahuman's beta users 10-15% faster. For example, the team built a new in-app AI assistant for users, powered by Gemini 2.5 Flash. The UI design, architecture, and parts of the user experience for this new feature were initially suggested by Gemini in Android Studio, showcasing a full-circle AI-assisted development process.


Accelerate your Android development with Gemini


Gemini's expert Android advice, closely integrated throughout Android Studio, helps Android developers spend less time digging through documentation and writing boilerplate code, freeing up more time to innovate.


Learn how Gemini in Android Studio can help your team resolve complex issues, streamline workflows, and ship new features faster.


19 Dec 2025

Android Developers Blog

Media3 1.9.0 - What’s new

Posted by Kristina Simakova, Engineering Manager





Media3 1.9.0 is out! Besides the usual bug fixes and performance improvements, the latest release also contains four new or largely rewritten modules:
  • media3-inspector - Extract metadata and frames outside of playback

  • media3-ui-compose-material3 - Build a basic Material3 Compose Media UI in just a few steps

  • media3-cast - Automatically handle transitions between Cast and local playbacks

  • media3-decoder-av1 - Consistent AV1 playback with the rewritten extension decoder based on the dav1d library


We also added caching and memory management improvements to PreloadManager, and provided several new ExoPlayer, Transformer and MediaSession simplifications.


This release also gives you the first experimental access to CompositionPlayer to preview media edits.


Read on to find out more, and as always please check out the full release notes for a comprehensive overview of changes in this release.

Extract metadata and frames outside of playback

There are many cases where you want to inspect media without starting playback. For example, you might want to detect which formats a file contains or what its duration is, or retrieve thumbnails.

The new media3-inspector module combines all utilities to inspect media without playback in one place:

  • MetadataRetriever to read duration, format and static metadata from a MediaItem.

  • FrameExtractor to get frames or thumbnails from an item.

  • MediaExtractorCompat as a direct replacement for the Android platform MediaExtractor class, to get detailed information about samples in the file.


MetadataRetriever and FrameExtractor follow a simple AutoCloseable pattern. Have a look at our new guide pages for more details.

suspend fun extractThumbnail(mediaItem: MediaItem) {
  // FrameExtractor is AutoCloseable, so use {} releases it when done.
  FrameExtractor.Builder(context, mediaItem).build().use { frameExtractor ->
    val thumbnail = frameExtractor.getThumbnail().await()
  }
}
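
MetadataRetriever can be used the same way. Here's a minimal sketch, assuming the builder-based API and kotlinx-coroutines-guava's await() as in the example above (retrieveDurationUs is our reading of the new API surface; verify against the guide pages):

suspend fun logDuration(mediaItem: MediaItem) {
  MetadataRetriever.Builder(context, mediaItem).build().use { retriever ->
    // The retriever returns ListenableFutures; await() suspends until the result is ready.
    val durationUs = retriever.retrieveDurationUs().await()
    Log.d("Inspector", "Duration: ${durationUs / 1_000_000f} s")
  }
}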

Build a basic Material3 Compose Media UI in just a few steps

In previous releases we started providing connector code between Compose UI elements and your Player instance. With Media3 1.9.0, we added a new module media3-ui-compose-material3 with fully-styled Material3 buttons and content elements. They allow you to build a media UI in just a few steps, while providing all the flexibility to customize style. If you prefer to build your own UI style, you can use the building blocks that take care of all the update and connection logic, so you only need to concentrate on designing the UI element. Please check out our extended guide pages for the Compose UI modules.

We are also still working on even more Compose components, like a prebuilt seek bar and a complete out-of-the-box replacement for PlayerView, as well as subtitle and ad integration.


@Composable
fun SimplePlayerUI(player: Player, modifier: Modifier = Modifier) {
  Column(modifier) {
    ContentFrame(player)  // Video surface and shutter logic
    Row(Modifier.align(Alignment.CenterHorizontally)) {
      SeekBackButton(player)   // Simple controls
      PlayPauseButton(player)
      SeekForwardButton(player)
    }
  }
}

Simple Compose player UI with out-of-the-box elements

Automatically handle transitions between Cast and local playbacks

The CastPlayer in the media3-cast module has been rewritten to automatically handle transitions between local playback (for example with ExoPlayer) and remote Cast playback.

When you set up your MediaSession, simply build a CastPlayer around your ExoPlayer and add a MediaRouteButton to your UI and you're done!


// MediaSession setup with CastPlayer 
val exoPlayer = ExoPlayer.Builder(context).build()
val castPlayer = CastPlayer.Builder(context).setLocalPlayer(exoPlayer).build()
val session = MediaSession.Builder(context, castPlayer).build()
// MediaRouteButton in UI 
@Composable fun UIWithMediaRouteButton() {
  MediaRouteButton()
}


New CastPlayer integration in Media3 session demo app


Consistent AV1 playback with the rewritten extension based on dav1d

The 1.9.0 release contains a completely rewritten AV1 extension module based on the popular dav1d library.

As with all extension decoder modules, please note that it requires building from source to bundle the relevant native code correctly. Bundling a decoder provides consistency and format support across all devices, but because it runs the decoding in your process, it's best suited for content you can trust.

Integrate caching and memory management into PreloadManager

We made our PreloadManager even better as well. It already enabled you to preload media into memory outside of playback and then seamlessly hand it over to a player when needed. Although preloading was already quite performant, you had to be careful not to exceed memory limits by accidentally preloading too much. So with Media3 1.9.0, we added two features that make this a lot easier and more stable:


  1. Caching support - When defining how far to preload, you can now choose PreloadStatus.specifiedRangeCached(0, 5000) as a target state for preloaded items. This adds the specified range to your cache on disk instead of loading the data into memory. With this, you can provide a much larger range of items for preloading, as the ones further away from the current item no longer need to occupy memory. Note that this requires setting a Cache in DefaultPreloadManager.Builder (see the sketch after this list).

  2. Automatic memory management - We also updated our LoadControl interface to better handle the preload case, so you can now set an explicit upper memory limit for all preloaded items in memory. It's 144 MB by default, and you can configure the limit in DefaultLoadControl.Builder. The DefaultPreloadManager will automatically stop preloading once the limit is reached, and automatically release memory of lower-priority items if required.
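
Here's a minimal wiring sketch, assuming targetPreloadStatusControl is your existing TargetPreloadStatusControl that returns PreloadStatus.specifiedRangeCached(0, 5000) for items near the current position, and that the Builder exposes a cache setter as described above (the setter name is an assumption; see the PreloadManager guide for the exact API):

val preloadManager =
    DefaultPreloadManager.Builder(context, targetPreloadStatusControl)
        // Required for specifiedRangeCached: a disk cache for the manager to write into.
        .setCache(cache) // Setter name is an assumption.
        .build()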

Rely on new simplified default behaviors in ExoPlayer

As always, we added lots of incremental improvements to ExoPlayer as well. To name just a few:
  • Mute and unmute - We already had a setVolume method, but have now added the convenience mute and unmute methods to easily restore the previous volume without keeping track of it yourself (see the sketch after this list).

  • Stuck player detection - In some rare cases the player can get stuck in a buffering or playing state without making any progress, for example, due to codec issues or misconfigurations. Your users will be annoyed, but you never see these issues in your analytics! To make this more obvious, the player now reports a StuckPlayerException when it detects a stuck state.

  • Wakelock by default - Wake lock management was previously opt-in, resulting in hard-to-find edge cases where playback progress could be delayed significantly when running in the background. Now this feature is opt-out, so you don't have to worry about it and can also remove all manual wake lock handling around playback.

  • Simplified setting for CC button logic - Changing TrackSelectionParameters to say "turn subtitles on/off" was surprisingly hard to get right, so we added a simple boolean selectTextByDefault option for this use case.
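
A quick sketch of the new mute convenience (method names as described in this release; the player itself remembers the pre-mute volume):

// Temporarily silence playback, then bring the volume back.
player.mute()   // Effectively sets the volume to 0, remembering the previous value.
player.unmute() // Restores the volume from before mute() was called.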

Simplify your media button preferences in MediaSession

Until now, defining your preferences for which buttons should show up in the media notification drawer on Android Auto or Wear OS required defining custom commands and buttons, even if you simply wanted to trigger a standard player method.

Media3 1.9.0 has new functionality to make this a lot simpler: you can now define your media button preferences with a standard player command, requiring no custom command handling at all.


session.setMediaButtonPreferences(listOf(
    CommandButton.Builder(CommandButton.ICON_FAST_FORWARD) // choose an icon
      .setDisplayName(R.string.skip_forward)
      .setPlayerCommand(Player.COMMAND_SEEK_FORWARD) // choose an action 
      .build()
))

Media button preferences with fast forward button

CompositionPlayer for real-time preview

The 1.9.0 release introduces CompositionPlayer under a new @ExperimentalApi annotation. The annotation indicates that it is available for experimentation, but is still under development.

CompositionPlayer is a new component in the Media3 editing APIs designed for real-time preview of media edits. Built upon the familiar Media3 Player interface, CompositionPlayer allows users to see their changes in action before committing to the export process. It uses the same Composition object that you would pass to Transformer for exporting, streamlining the editing workflow by unifying the data model for preview and export.

We encourage you to start using CompositionPlayer and share your feedback, and keep an eye out for forthcoming posts and updates to the documentation for more details.

InAppMp4Muxer as the default muxer in Transformer

Transformer now uses InAppMp4Muxer as the default muxer for writing media container files. Internally, InAppMp4Muxer depends on the Media3 Muxer module, providing consistent behaviour across all API versions.
Note that while Transformer no longer uses the Android platform's MediaMuxer by default, you can still provide FrameworkMuxer.Factory via setMuxerFactory if your use case requires it.
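
For example, opting back into the platform muxer looks roughly like this (a minimal sketch, assuming a no-argument FrameworkMuxer.Factory constructor):

val transformer = Transformer.Builder(context)
    // Override the default InAppMp4Muxer with the platform-backed MediaMuxer.
    .setMuxerFactory(FrameworkMuxer.Factory())
    .build()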

New speed adjustment APIs

The 1.9.0 release simplifies the speed adjustment APIs for media editing. We've introduced new methods directly on EditedMediaItem.Builder to control speed, making the API more intuitive. You can now change the speed of a clip by calling setSpeed(SpeedProvider) on the EditedMediaItem.Builder:

val speedProvider = object : SpeedProvider {
    override fun getSpeed(presentationTimeUs: Long): Float {
        return speed // The desired playback speed, defined elsewhere.
    }

    override fun getNextSpeedChangeTimeUs(timeUs: Long): Long {
        return C.TIME_UNSET // The speed never changes during the clip.
    }
}

val speedEffectItem = EditedMediaItem.Builder(mediaItem)
    .setSpeed(speedProvider)
    .build()


This new approach replaces the previous method of using Effects#createExperimentalSpeedChangingEffects(), which we've deprecated and will remove in a future release.

Introducing track types for EditedMediaItemSequence

In the 1.9.0 release, EditedMediaItemSequence requires specifying desired output track types during sequence creation. This change ensures track handling is more explicit and robust across the entire Composition.

This is done via a new EditedMediaItemSequence.Builder constructor that accepts a set of track types (e.g., C.TRACK_TYPE_AUDIO, C.TRACK_TYPE_VIDEO).

To simplify creation, we've added new static convenience methods:

  • EditedMediaItemSequence.withAudioFrom(List<EditedMediaItem>)

  • EditedMediaItemSequence.withVideoFrom(List<EditedMediaItem>)

  • EditedMediaItemSequence.withAudioAndVideoFrom(List<EditedMediaItem>)

We encourage you to migrate to the new constructor or the convenience methods for clearer and more reliable sequence definitions.

Example of creating a video-only sequence:

val videoOnlySequence =
    EditedMediaItemSequence.Builder(setOf(C.TRACK_TYPE_VIDEO))
        .addItem(editedMediaItem)
        .build()


---


Please get in touch via the Media3 issue tracker if you run into any bugs, or if you have questions or feature requests. We look forward to hearing from you!
