23 Apr 2026

feedTalkAndroid

Dark Matter Season 2 finally has a release date—here’s why fans are both excited and worried

The wait is finally over for science fiction fans: Dark Matter Season 2 has an official release date…

23 Apr 2026 3:30pm GMT

Govee’s first solar lights bring 281 trillion colours outdoors

Govee's solar lights feature smarter controls, up to 13 hours of runtime.

23 Apr 2026 3:05pm GMT

Google Maps unveils revolutionary 3D navigation—drivers to experience routes like never before

Big news for drivers: Google Maps has changed the way you see the road, literally. In a blog post…

23 Apr 2026 3:00pm GMT

Why Is “The Cleaning Lady” Suddenly the Most Addictive Thriller on Netflix?

Psychological thrillers have never been more popular, and "The Cleaning Lady" is the latest obsession captivating Netflix viewers across…

23 Apr 2026 6:30am GMT

Steven Spielberg calls Dune trilogy his all-time favorite sci-fi films—here’s why he’s obsessed

Let's face it: science fiction blockbusters wouldn't be what they are today without Steven Spielberg. Yet Spielberg hasn't…

23 Apr 2026 6:00am GMT

Boba Story Lid Recipes – 2026

Look no further for all the latest Boba Story Lid Recipes. They are all right here!

23 Apr 2026 2:51am GMT

Dice Dreams Free Rolls – Updated Daily

Get the latest Dice Dreams free rolls links, updated daily! Complete with a guide on how to redeem the links.

23 Apr 2026 2:51am GMT

22 Apr 2026

feedAndroid Developers Blog

What's new in the Jetpack Compose April '26 release

Posted by Meghan Mehta, Android Developer Relations Engineer



Today, the Jetpack Compose April '26 release is stable. This release contains version 1.11 of core Compose modules (see the full BOM mapping), shared element debug tools, trackpad events, and more. We also have a few experimental APIs that we'd love you to try out and give us feedback on.

To use today's release, upgrade your Compose BOM version to:

implementation(platform("androidx.compose:compose-bom:2026.04.01"))

Changes in Compose 1.11.0

Coroutine execution in tests

We're introducing a major update to how Compose handles test timing. Following the opt-in period announced in Compose 1.10, the v2 testing APIs are now the default, and the v1 APIs have been deprecated. The key change is a shift in the default test dispatcher. While the v1 APIs relied on UnconfinedTestDispatcher, which executed coroutines immediately, the v2 APIs use the StandardTestDispatcher. This means that when a coroutine is launched in your tests, it is now queued and does not execute until the virtual clock is advanced.

This better mimics production conditions, effectively flushing out race conditions and making your test suite significantly more robust and less flaky.

To ensure your tests align with standard coroutine behavior and to avoid future compatibility issues, we strongly recommend migrating your test suite. Check out our comprehensive migration guide for API mappings and common fixes.
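
As a sketch of the new default behavior, suppose CounterScreen is a hypothetical composable that launches a coroutine which delays one second before incrementing its counter. Under the v2 StandardTestDispatcher defaults, that coroutine stays queued until the test's virtual clock advances:

```kotlin
@get:Rule
val composeTestRule = createComposeRule()

@Test
fun countUpdatesOnlyAfterClockAdvances() {
    composeTestRule.setContent { CounterScreen() }

    // With StandardTestDispatcher, the launched coroutine is queued, not run eagerly.
    composeTestRule.onNodeWithText("Count: 0").assertExists()

    // Advance the virtual clock past the delay so the queued coroutine executes.
    composeTestRule.mainClock.advanceTimeBy(1_000)
    composeTestRule.onNodeWithText("Count: 1").assertExists()
}
```

Treat this as an illustration of the queued-until-advanced model rather than a drop-in test; how the virtual clock is wired to your test dispatcher depends on your test setup, so consult the migration guide for the exact APIs.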

Shared element improvements and animation tooling


We've also added some handy visual debugging tools for shared elements and Modifier.animatedBounds. You can now see exactly what's happening under the hood, such as target bounds, animation trajectories, and how many matches are found, making it much easier to spot why a transition might not be behaving as expected. To use the new tooling, simply surround your SharedTransitionLayout with the LookaheadAnimationVisualDebugging composable.

LookaheadAnimationVisualDebugging(
    overlayColor = Color(0x4AE91E63),
    isEnabled = true,
    multipleMatchesColor = Color.Green,
    isShowKeylabelEnabled = false,
    unmatchedElementColor = Color.Red,
) {
    SharedTransitionLayout {
        CompositionLocalProvider(
            LocalSharedTransitionScope provides this,
        ) {
            // your content
        }
    }
}

Trackpad events

We've revamped Compose support for trackpads, including built-in laptop trackpads, attachable trackpads for tablets, and external or virtual trackpads. Basic trackpad events will now generally be treated as PointerType.Mouse events, aligning mouse and trackpad behavior to better match user expectations. Previously, these events were interpreted as simulated touchscreen fingers of PointerType.Touch, which led to confusing user experiences: for example, clicking and dragging with a trackpad would scroll instead of selecting. With the pointer type changed in the latest release of Compose, clicking and dragging with a trackpad no longer scrolls.

We also added support for more complicated trackpad gestures as recognized by the platform since API 34, including two finger swipes and pinches. These gestures are automatically recognized by components like Modifier.scrollable and Modifier.transformable to have better behavior with trackpads.

These changes improve behavior for trackpads across built-in components, with redundant touch slop removed, a more intuitive drag-and-drop starting gesture, double-click and triple-click selection in text fields, and desktop-styled context menus in text fields.

To test trackpad behavior, new testing APIs built around performTrackpadInput let you validate how your app behaves when used with a trackpad. If you have custom gesture detectors, validate their behavior across input types, including touchscreens, mice, trackpads, and styluses, and ensure support for mouse scroll wheels and trackpad gestures.
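
For illustration, a hedged sketch of such a test: performTrackpadInput comes from the release notes above, but the gesture helper inside it and the composable under test are assumptions, not real APIs.

```kotlin
@Test
fun twoFingerSwipeScrollsList() = runComposeUiTest {
    // ScrollableNewsList is a hypothetical composable built on Modifier.scrollable.
    setContent { ScrollableNewsList() }

    onNodeWithTag("news_list").performTrackpadInput {
        // Hypothetical gesture helper: a two-finger swipe upward to scroll down.
        twoFingerSwipe(start = center, end = center - Offset(0f, 400f))
    }

    onNodeWithText("Story 5").assertIsDisplayed()
}
```
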


Composition host defaults (Compose runtime)

We introduced HostDefaultProvider, LocalHostDefaultProvider, HostDefaultKey, and ViewTreeHostDefaultKey to supply host-level services directly through compose-runtime. This removes the need for libraries to depend on compose-ui for lookups, better supporting Kotlin Multiplatform. To link these values to the composition tree, library authors can use compositionLocalWithHostDefaultOf to create a CompositionLocal that resolves defaults from the host.
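
As a rough sketch of how a library author might use these, with every signature inferred from the API names above and HapticsService/NoOpHaptics purely hypothetical:

```kotlin
// Hypothetical key identifying a host-supplied service.
val HapticsServiceKey = HostDefaultKey<HapticsService>()

// A CompositionLocal whose default resolves through the host's
// HostDefaultProvider, so this library depends only on compose-runtime.
val LocalHaptics = compositionLocalWithHostDefaultOf(HapticsServiceKey) {
    NoOpHaptics // fallback when the host supplies no default
}
```
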

Preview wrappers

Custom previews are a new Android Studio feature that lets you define exactly how the contents of a Compose preview are displayed.

By implementing the PreviewWrapper interface and applying the new @PreviewWrapperProvider annotation, you can easily inject custom logic, such as applying a specific Theme. The annotation can be applied to a function annotated with @Composable and @Preview or @MultiPreview, offering a generic, easy-to-use solution that works across preview features and significantly reduces repetitive code.


class ThemeWrapper : PreviewWrapper {
    @Composable
    override fun Wrap(content: @Composable () -> Unit) {
        JetsnackTheme {
            content()
        }
    }
}

@PreviewWrapperProvider(ThemeWrapper::class)
@Preview
@Composable
private fun ButtonPreview() {
    // JetsnackTheme in effect
    Button(onClick = {}) {
        Text(text = "Demo")
    }
}

Deprecations and removals

Upcoming APIs

In the upcoming Compose 1.12.0 release, compileSdk will be upgraded to 37 alongside AGP 9, and all apps and libraries that depend on Compose will inherit this requirement. We recommend keeping up to date with the latest released versions, as Compose aims to promptly adopt new compileSdks to provide access to the latest Android features. Be sure to check the documentation here for more information on which version of AGP is supported for different API levels.
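
In practice that means your module's Gradle configuration will need to move along with it; a minimal sketch, with the version numbers taken from the paragraph above:

```kotlin
// build.gradle.kts
android {
    compileSdk = 37 // required once you pick up Compose 1.12.0 with AGP 9
}
```
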

In Compose 1.11.0, the following APIs are introduced as @Experimental, and we look forward to hearing your feedback as you explore them in your apps. Note that @Experimental APIs are provided for early evaluation and feedback and may undergo significant changes or removal in future releases.

Styles (Experimental)

We are introducing a new experimental foundation API for styling. The Style API is a new paradigm for customizing visual elements of components, which has traditionally been performed with modifiers. It is designed to unlock deeper, easier customization by exposing a standard set of styleable properties with simple state-based styling and animated transitions. With this new API, we're already seeing promising performance benefits. We plan to adopt Styles in Material components once the Style API stabilizes.

A basic example of overriding a pressed state style background:

@Composable
fun LoginButton(modifier: Modifier = Modifier) {
    Button(
        onClick = {
            // Login logic
        },
        modifier = modifier,
        style = {
            background(
                Brush.linearGradient(
                    listOf(lightPurple, lightBlue)
                )
            )
            width(75.dp)
            height(50.dp)
            textAlign(TextAlign.Center)
            externalPadding(16.dp)

            pressed {
                background(
                    Brush.linearGradient(
                        listOf(Color.Magenta, Color.Red)
                    )
                )
            }
        }
    ){
        Text(
            text = "Login",
        )
    }
}




Check out the documentation and file any bugs here.

MediaQuery (Experimental)

The new mediaQuery API provides a declarative and performant way to adapt your UI to its environment. It abstracts complex information retrieval into simple conditions within a UiMediaScope, ensuring recomposition only happens when needed.

With support for a wide range of environmental signals, from device capabilities like keyboard types and pointer precision to contextual states like window size and posture, you can build deeply responsive experiences. Performance is baked in, with derivedMediaQuery handling high-frequency updates, while the ability to override scopes makes testing and previews seamless across hardware configurations.

Previously, getting access to certain device properties, such as whether a device was in tabletop mode, required a lot of boilerplate:

@Composable
fun isTabletopPosture(
    context: Context = LocalContext.current
): Boolean {
    val windowLayoutInfo by
        WindowInfoTracker
            .getOrCreate(context)
            .windowLayoutInfo(context)
            .collectAsStateWithLifecycle(initialValue = null)

    // The state is null until the first WindowLayoutInfo arrives.
    return windowLayoutInfo?.displayFeatures.orEmpty().any { displayFeature ->
        displayFeature is FoldingFeature &&
            displayFeature.state == FoldingFeature.State.HALF_OPENED &&
            displayFeature.orientation == FoldingFeature.Orientation.HORIZONTAL
    }
}

@Composable
fun VideoPlayer() {
    if (isTabletopPosture()) {
        TabletopLayout()
    } else {
        FlatLayout()
    }
}

Now, you can use the mediaQuery syntax to query device properties, such as whether a device is in tabletop mode:

@OptIn(ExperimentalMediaQueryApi::class)
@Composable
fun VideoPlayer() {
    if (mediaQuery { windowPosture == UiMediaScope.Posture.Tabletop }) {
        TabletopLayout()
    } else {
        FlatLayout()
    }
}
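
The derivedMediaQuery entry point mentioned above is not shown in these snippets; a hedged sketch of how it might look for a high-frequency signal, where the exact scope properties and the 600.dp threshold are assumptions:

```kotlin
@OptIn(ExperimentalMediaQueryApi::class)
@Composable
fun AdaptiveToolbar() {
    // derivedMediaQuery batches a rapidly changing signal (window width) into a
    // stable boolean, so recomposition happens only when the condition flips.
    val isCompact = derivedMediaQuery { windowSize.width < 600.dp }
    if (isCompact) CompactToolbar() else ExpandedToolbar()
}
```
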

Check out the documentation and file any bugs here.

Grid (Experimental)

Grid is a powerful new API for building complex, two-dimensional layouts in Jetpack Compose. While Row and Column are great for linear designs, Grid gives you the structural control needed for screen-level architecture and intricate components without the overhead of a scrollable list.

Grid allows you to define your layout using tracks, gaps, and cells, offering familiar sizing options like Dp, percentages, intrinsic content sizes, and flexible "Fr" units.


@OptIn(ExperimentalGridApi::class)
@Composable
fun GridExample() {
    Grid(
        config = {
            repeat(4) { column(0.25f) }
            repeat(2) { row(0.5f) }
            gap(16.dp)
        }
    ) {
        Card1(modifier = Modifier.gridItem(rowSpan = 2))
        Card2(modifier = Modifier.gridItem(columnSpan = 3))
        Card3(modifier = Modifier.gridItem(columnSpan = 2))
        Card4()
    }
}


You can place items automatically or explicitly span them across multiple rows and columns for precision. Best of all, it's highly adaptive: you can dynamically reconfigure your grid tracks and spans to respond to device states like tabletop mode or orientation changes, ensuring your UI looks great across form factors.


Check out the documentation and file any bugs here.

FlexBox (Experimental)

FlexBox is a layout container designed for high-performance, adaptive UIs. It manages item sizing and space distribution based on available container dimensions. It handles complex tasks like wrapping (wrap) and multi-axis alignment of items (justifyContent, alignItems, alignContent), and it allows items to grow (grow) or shrink (shrink) to fill the container.

@OptIn(ExperimentalFlexBoxApi::class)
@Composable
fun FlexBoxWrapping() {
    FlexBox(
        config = {
            wrap(FlexWrap.Wrap)
            gap(8.dp)
        }
    ) {
        RedRoundedBox()
        BlueRoundedBox()
        GreenRoundedBox(modifier = Modifier.width(350.dp).flex { grow(1.0f) })
        OrangeRoundedBox(modifier = Modifier.width(200.dp).flex { grow(0.7f) })
        PinkRoundedBox(modifier = Modifier.width(200.dp).flex { grow(0.3f) })
    }
}





Check out the documentation and file any bugs here.

New SlotTable implementation (Experimental)

We've introduced a new implementation of the SlotTable, disabled by default in this release. The SlotTable is the internal data structure the Compose runtime uses to track the state of your composition hierarchy, invalidations and recompositions, remembered values, and all other composition metadata at runtime. The new implementation is designed to improve performance, primarily around random edits.

To try the new SlotTable, enable ComposeRuntimeFlags.isLinkBufferComposerEnabled.
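
The flag name comes from this release; where to flip it (for example, at process start, before any composition exists) is an assumption:

```kotlin
class MyApplication : Application() {
    override fun onCreate() {
        super.onCreate()
        // Opt in to the experimental SlotTable implementation before any
        // composition is created.
        ComposeRuntimeFlags.isLinkBufferComposerEnabled = true
    }
}
```
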

Start coding today!

With so many exciting new APIs in Jetpack Compose, and many more coming up, there's never been a better time to migrate to Jetpack Compose. As always, we value your feedback and feature requests (especially on @Experimental features that are still baking); please file them here. Happy composing!

22 Apr 2026 11:00pm GMT

Streamline User Journeys with Verified Email via Credential Manager

Posted by Niharika Arora, Senior Developer Relations Engineer and Jean-Pierre Pralle, Product Manager, Credential Manager


In the modern digital landscape, the first encounter a user has with an app is often the most critical. Yet, for decades, this initial interaction has been hindered by the friction of traditional verification methods. Today, we're excited to announce a new verified email credential issued by Google, which developers can now retrieve directly from Android's Credential Manager Digital Credential API.

The Problem: Authentication Friction in the Modern Era

The "current era" of authentication is defined by a trade-off between security and convenience. To ensure that a user owns the email address they provide, you typically rely on One-Time Passwords (OTPs) or "magic links" sent by email or SMS.

While effective, these traditional steps introduce significant hurdles:

  • Context switching: Users must leave the app, open their inbox or messaging app, find the code, and return, a process where many potential users simply drop off.
  • Delivery issues: While email is free, messages can be delayed or diverted to spam folders.
  • Onboarding friction: Every extra second spent in the "verification loop" is a second where a user might lose interest, directly impacting conversion rates.

The Solution: Seamless, Verified Email

Google now issues a cryptographically verified email credential directly to Android devices. This verified email credential is delivered through the Credential Manager API, which is Android's implementation of the W3C's Digital Credential API standard.

For users, this completely removes the need to manually verify their email through external channels. For developers, the API securely delivers these verified user claims for any scenario, whether you are building an account creation flow, a recovery process, or a high-risk step-up authentication.

While this specific verified email address is sourced securely from the user's Google Account on their device, the underlying Digital Credentials API is issuer-agnostic. This fosters an open ecosystem, allowing any holder of a digital credential with an email claim to offer that verification to your app.

User Experience

The beauty of this API lies in its simplicity for the end user. Instead of hunting for OTP codes, the experience is integrated directly into the Android OS:

  • Initiation: The process begins when a user focuses on an email input field or taps a "Sign up" or "Recover account" button. You can also initiate the process on page load.
  • Transparency: A native Android bottom sheet appears, clearly detailing exactly what data is being requested (for example, the user's verified email address).
  • One-tap consent: The user simply taps "Agree and continue" to share the data.
  • Immediate progress: Once consent is given, the app receives the data instantly. For sign-up or account recovery flows, you can then seamlessly transition the user into passkey creation, ensuring:
    • Users do not have to enter any information manually, unlike traditional username/password registration.
    • Their next login is even faster and more secure.

Use case 1. Sign up

Accelerate onboarding by fetching a verified email the moment the user taps "Sign up". We strongly recommend you pair the verified email retrieval with passkey creation, also part of the Credential Manager API:
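
As a hedged sketch, a sign-up flow might request the credential through Credential Manager like this; the request JSON contents and the claim-parsing helper are assumptions, so follow the Integration Guide for the real protocol and field names:

```kotlin
suspend fun fetchVerifiedEmail(activity: Activity): String? {
    val credentialManager = CredentialManager.create(activity)
    val request = GetCredentialRequest(
        credentialOptions = listOf(
            // requestJson follows the W3C Digital Credentials request format;
            // its exact contents for the verified email claim are not shown here.
            GetDigitalCredentialOption(requestJson = verifiedEmailRequestJson)
        )
    )
    return try {
        val result = credentialManager.getCredential(activity, request)
        val credential = result.credential as? DigitalCredential
        // parseEmailClaim is a hypothetical helper that extracts the verified
        // email claim from the returned credential JSON.
        credential?.let { parseEmailClaim(it.credentialJson) }
    } catch (e: GetCredentialException) {
        null // the user declined, or no matching credential was available
    }
}
```
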



Note: You can also fetch other unverified fields such as a user's given name, family name, name, profile picture and the hosted domain connected with the verified email.

Use case 2. Account recovery

Eliminate the frustration of users hunting for recovery codes in their spam folders by allowing them to recover their account using the verified email securely stored on their device.



Use case 3. Re-authentication for sensitive actions

Protect sensitive user actions, such as changing settings or updating profile details, by requiring a quick re-authentication step. Instead of an OTP, you can provide a low-friction verification using the device's verified email.



Important Considerations

As you design your authentication architecture around the Digital Credentials API, keep the following details in mind:

  • Account support: For the specific email credential issued by Google, only regular consumer Google Accounts are supported (Workspace and supervised accounts are currently not supported). Keep in mind that the Credential Manager API itself is issuer-agnostic, meaning other identity providers can issue credentials with their own account support policies.
  • Other user data: Beyond email, you can request the user's given name, family name, full name, and profile picture. However, note that only the email is verified by Google.
  • Auto-verify your @gmail accounts: The API provides verified emails for all consumer Google Accounts. We recommend auto-verifying @gmail.com users and routing custom domains to your existing verification flow, for example an OTP flow. This ensures you maintain long-term access for external domains not directly managed by Google.
  • Complementary to Sign in with Google: While both the new verified email credential and the Sign in with Google API provide a verified email, the choice depends on the intended user experience:
    • Use Sign in with Google when your users want to create a federated login session.
    • Use Verified Email when your users want to sign in traditionally with a username/password or passkey but want to auto-verify the email address without the manual chore of an OTP.

Conclusion and Next steps

By integrating the new verified email via Credential Manager API, you can drastically reduce onboarding friction and provide users with a more streamlined, secure authentication journey. This represents a shift toward a future where "verification" is no longer a manual chore for the user, but a seamless, integrated part of the native mobile experience.

Ready to see how this fits into your own app? To get started, update your project to the latest Credential Manager API and explore our Integration Guide. We encourage you to explore how this streamlined verification can simplify your critical user journeys, from optimizing account creation to enhancing re-authentication flows.

22 Apr 2026 8:00pm GMT

feedTalkAndroid

Oppo Launches Pad 5, Enco Clip 2, and More in the UK

Oppo dropped a 12.1-inch tablet, two pairs of earbuds, and a clutch of accessories

22 Apr 2026 4:00pm GMT

Android’s Next Update Could Finally Solve Your Storage Problems—Here’s How

Is your Android phone always running out of space? Google may have given Android users a new way…

22 Apr 2026 3:30pm GMT

Clint Eastwood landed his iconic role after a Hollywood legend walked away—discover the radical reason why Paul Newman said no

Think Clint Eastwood was always destined to play the legendary Detective Harry Callahan? Think again. Behind this cult…

22 Apr 2026 3:00pm GMT

The OPPO Watch X3 Wants to Be the Last Smartwatch You’ll Need

The Watch X3 means business.

22 Apr 2026 2:24pm GMT

OPPO Find X9 Ultra Arrives with World-First 10x Optical Zoom

OPPO has officially unveiled the Find X9 Ultra globally, and it is pitching this one squarely at serious…

22 Apr 2026 12:57pm GMT

WhatsApp launches on CarPlay: messaging and calls now just a tap away on your dashboard

WhatsApp has just arrived on your dashboard; no more relying solely on Siri to send a message from your…

22 Apr 2026 6:30am GMT

Virtual Fitting Room Set to Transform Google Photos Experience: What to Expect from This Upcoming Feature

What if your everyday photo backup app could double as your own private fitting room? According to findings…

22 Apr 2026 6:00am GMT

21 Apr 2026

feedTalkAndroid

Amazon Is Cracking Down On Pirates With Vega OS

So long, sideloading. We'll miss you on our Fire TV Sticks.

21 Apr 2026 7:02pm GMT

Samsung One UI 8.5 Enters Beta 10 Phase With Major Bug Fixes

Whaddaya know? 10 is the lucky number.

21 Apr 2026 6:10pm GMT

feedAndroid Developers Blog

Level up your development with Planning Mode and Next Edit Prediction in Android Studio Panda 4


Posted by Matt Dyor, Senior Product Manager


Android Studio Panda 4 is now stable and ready for you to use in production. This release brings Planning Mode, Next Edit Prediction, and more, making it easier than ever to build high-quality Android apps.


Here's a deep dive into what's new:

Planning Mode

Before the Agent starts working on complex tasks for you, it would be helpful if it could come up with a detailed plan. Jumping straight into a large coding project without a design often leads to technical debt or logic errors; the same is true for AI. That's why we're adding Planning Mode.

In this mode, the Agent comes up with a detailed project plan before executing tasks. Instead of a single pass where the model directly predicts the next token of code, Planning Mode facilitates a multi-stage reasoning process, giving the agent additional space to evaluate its own proposed logic for potential issues before presenting it to you. This is especially useful for complex and long-running tasks which demand a high degree of architectural precision.

To use Planning Mode, switch your conversation mode to "Planning" in the agent input box and enter your prompt.


Switch to Planning Mode

In Planning Mode, the agent examines your request and may generate an implementation plan for large or complex tasks. You have the opportunity to fix mistakes or clarify which approaches to use, all before the agent has spent any time or tokens heading in the wrong direction.

Open Implementation Plan

Add Comments to Implementation Plan

After adding comments, simply hit "Submit Comments" and the agent will use your feedback to revise the implementation plan. To stay on track during execution, which is particularly important with larger changes, the agent organizes its work and generates a "Task List" artifact. You can sit back and watch as the agent methodically completes all of the tasks.

Task List Artifact

After the work is done, the agent produces a "Walkthrough" artifact, giving you a clear summary of exactly what has been changed and making it easy to review the agent's work. Build with more confidence and control using Planning Mode in the latest release of Android Studio.


Next Edit Prediction

Classic autocomplete is great for finishing your sentences, but coding is rarely a linear path. Often, a change in one place requires a secondary change elsewhere, like adding a new parameter to a function and then needing to update its invocations, or updating a UI preview when a Composable is changed. Traditionally, this meant breaking your focus to hunt down the related lines of code that need attention.

Next Edit Prediction (NEP) evolves code completion by anticipating your next move, even when it's not at your current cursor position. By analyzing your recent edits, Android Studio recognizes the logical pattern of your workflow. If you modify a data class or update a constructor, NEP can suggest the next relevant edit, perhaps in a distant function, allowing you to jump straight to the fix.

Instead of manually navigating back and forth, you can accept these multi-location suggestions with a single keystroke. This keeps you deep in the "flow state," reducing the cognitive load of routine updates and letting you focus on the complex logic that truly matters to your application. Experience a more intuitive, non-linear way to code in the latest version of Android Studio.

NEP Updating Function Name

NEP Adding New Line

Gemini API Starter Template

Adding powerful AI features to your app just got easier: introducing the Gemini API Starter Template for Android Studio!

Integrating generative AI into your Android application used to mean managing complex backend plumbing and worrying about API key security. With the new Gemini API Starter template in Android Studio, developers can now jump straight into building features rather than spending time configuring infrastructure.

Key benefits include:

  • Zero API key management: Stop worrying about provisioning or rotating keys. By leveraging Firebase AI Logic, the template eliminates the need to embed sensitive credentials in your client-side code.
  • Automated Firebase integration: The backend plumbing is handled for you. The template automatically connects your project to Firebase services, ensuring a secure bridge between your app and Google's Gemini models.
  • Built to scale: This isn't just for prototypes. The production-ready architecture allows you to scale from a local test to a global user base without redesigning your foundation.
  • Multimodal processing: Supports text, image, video, and audio inputs. You can build features like real-time image analysis, video summarization, and audio transcription.
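
For context, the template's generated code calls Gemini through the Firebase AI Logic SDK; a simplified sketch of a multimodal call, where the model name and prompt are placeholders rather than what the template actually emits:

```kotlin
val model = Firebase.ai(backend = GenerativeBackend.googleAI())
    .generativeModel(modelName = "gemini-3.1-flash-lite")

suspend fun describeImage(bitmap: Bitmap): String? =
    model.generateContent(
        content {
            image(bitmap) // multimodal input: the image travels with the prompt
            text("Describe this image in one sentence.")
        }
    ).text
```
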

Get Started

  1. Open Android Studio.
  2. Navigate to File > New > New Project.
  3. Select the Gemini API Starter template from the gallery.
Gemini API Starter new project template

Agent Web Search

When you're deep in development, the right answer is often just a search away, but leaving your IDE to find it can snap you out of your flow. Whether you need the exact version number for a dependency or the latest API changes for a third-party library, the Agent Web Search tool is here to help without you ever having to leave Android Studio.

While Android Studio's agent already leverages the Android Knowledge Base for official documentation, modern Android development relies on a vast ecosystem of external libraries. Agent Web Search expands Gemini's reach, allowing it to query Google directly to fetch current reference material from across the web. From checking the latest setup guides for Coil to finding advanced configuration tips for Koin or Moshi, the agent can now pull in the most up-to-date information in real time.

The Agent Web Search tool is designed to be helpful but unobtrusive; it will automatically trigger a web search when it identifies a gap in its local knowledge. You can also take the wheel by asking it to find something specific; simply include "search the web for..." in your prompt. By integrating live web results directly into your workspace, Agent Web Search ensures you're always building with the most current data available, speeding up your workflow and keeping your project on the cutting edge.

Agent Web Search Tool Invocation

Android Studio Panda Releases

Panda 4 continues Android Studio's focus on accelerating developer productivity with AI. Check out Go from prompt to working prototype with Android Studio Panda 2 and Increase Guidance and Control over Agent Mode with Android Studio Panda 3.

Android Studio Panda 2

  • AI-powered New Project flow: Allows you to build a working app prototype with a single prompt. The agent manages initial setup, navigation configuration, and proper dependencies, and features an autonomous generation loop to handle build errors and deploy to an emulator.
  • Version Upgrade Assistant: Automates dependency management and updates, iteratively attempting builds and resolving conflicts until a stable configuration is found.

Android Studio Panda 3

  • Agent skills: Specialized, user-defined instructions (stored in a .skills directory) that teach the AI agent project-specific capabilities, coding standards, or library usage.
  • Agent permissions: Provides fine-grained control over what agents can do, with features like "Always Allow" rules for trusted operations. For even more security, you can also use an optional sandbox to enforce strict, isolated control over the agent.
  • Empty Car App Library App template: Simplifies building driving-optimized apps for Android Auto and Android Automotive OS by handling required boilerplate code.

Get started

Dive in and accelerate your development. Download Android Studio Panda 4 and start exploring these powerful new agentic features today.

As always, your feedback is crucial to us. Check known issues, report bugs, and be part of our vibrant community on LinkedIn, Medium, YouTube, or X. Happy coding!

21 Apr 2026 2:00pm GMT

17 Apr 2026

feedAndroid Developers Blog

Experimental hybrid inference and new Gemini models for Android

Posted by Thomas Ezan, Senior Developer Relations Engineer




If you are an Android developer looking to build innovative AI features into your app, we recently launched powerful new updates: hybrid inference, a new API for Firebase AI Logic that leverages both on-device and cloud inference, and support for new Gemini models, including the latest Nano Banana models for image generation.

Let's jump in!

Experiment with hybrid inference

With the new Firebase API for hybrid inference, we implemented a simple rule-based routing approach as an initial solution to let you use both on-device and cloud inference via a unified API. We plan to provide more sophisticated routing capabilities in the future.

It allows your app to dynamically switch between Gemini Nano running locally on the device and cloud-hosted Gemini models. The on-device execution uses ML Kit's Prompt API. The cloud inference supports all the Gemini models from Firebase AI Logic in both Vertex AI and the Developer API.

To use it, add the firebase-ai-ondevice dependencies to your app along with Firebase AI Logic:

dependencies {
 [...] 
 implementation("com.google.firebase:firebase-ai:17.11.0")
 implementation("com.google.firebase:firebase-ai-ondevice:16.0.0-beta01")
}

During initialization, you create a GenerativeModel instance and configure it with specific inference modes, such as PREFER_ON_DEVICE (falls back to cloud if Gemini Nano is not available on the device) or PREFER_IN_CLOUD (falls back to on-device inference if offline):

val model = Firebase.ai(backend = GenerativeBackend.googleAI())
    .generativeModel(
        modelName = "gemini-3.1-flash-lite",
        onDeviceConfig = OnDeviceConfig(
           mode = InferenceMode.PREFER_ON_DEVICE
        )
    )

val response = model.generateContent(prompt)

The Firebase API for hybrid inference for Android is still experimental, and we encourage you to try it in your app, especially if you are already using Firebase AI Logic. Currently, on-device models are specialized for single-turn text generation based on text or single Bitmap image inputs. Review the limitations for more details.

We just published a new sample in the AI Sample Catalog that leverages the Firebase API for hybrid inference; it demonstrates how the API can be used to generate a review based on a few selected topics and then translate it into various languages. Check out the code to see it in action!



The new hybrid inference sample in action

Try our new models

As part of the new Gemini models, we've released two models particularly helpful to Android developers and easy to integrate in your application via the Firebase AI Logic SDK.

Nano Banana

Last year we released Nano Banana, a state-of-the-art image generation model. And a few weeks ago, we released a couple of new Nano Banana models.

Nano Banana Pro (Gemini 3 Pro Image) is designed for professional asset production and can render high-fidelity text, even in a specific font or simulating different types of handwriting.

Nano Banana 2 (Gemini 3.1 Flash Image) is the high-efficiency counterpart to Nano Banana Pro. It's optimized for speed and high-volume use cases. It can be used for a broad range of use cases (infographics, virtual stickers, contextual illustrations, etc.).

The new Nano Banana models leverage real-world knowledge and deep reasoning capabilities to generate precise and detailed images.

We updated our Magic Selfie sample (use image generation to change the background of your selfie!) to use Nano Banana 2. The background segmentation is now handled directly by the image generation model, which makes the implementation easier and lets Nano Banana 2's improved image generation capabilities shine. See it in action here.

The updated Magic Selfie sample uses Nano Banana 2 to update a selfie background

You can use it via Firebase AI Logic SDK. Read more about it in the Android documentation.

Gemini 3.1 Flash-Lite

We also released Gemini 3.1 Flash-Lite, a new version of the Gemini Flash-Lite family. The Gemini Flash-Lite models have been particularly favored by Android developers for their good quality/latency ratio and low inference cost. They have been used for various use cases such as in-app message translation or generating a recipe from a picture of a dish.

Gemini 3.1 Flash-Lite, currently in preview, will enable more advanced use cases with latency comparable to Gemini 2.5 Flash-Lite. To learn more about this model, review the Firebase documentation.

Conclusion

It's a great time to explore the new Hybrid sample in our catalog to see these capabilities in action and understand the benefits of routing between on-device and cloud inference. We also encourage you to check out our documentation to test the new Gemini models.

17 Apr 2026 8:00pm GMT

16 Apr 2026

feedAndroid Developers Blog

The Fourth Beta of Android 17

Featured Metadata Image

Posted by Dan Galpin, Developer Relations Engineer


Android 17 has reached beta 4, the last scheduled beta of this release cycle, a critical milestone for app compatibility and platform stability. Whether you're fine-tuning your app's user experience, ensuring smooth edge-to-edge rendering, or leveraging the newest APIs, Beta 4 provides the near-final environment you need to be testing with.

Get your apps, libraries, tools, and game engines ready!

If you develop an Android SDK, library, tool, or game engine, it's critical to prepare any necessary updates now to prevent your downstream app and game developers from being blocked by compatibility issues and allow them to target the latest SDK features. Please let your downstream developers know if updates are needed to fully support Android 17.



Testing involves installing your production app, or a test app that uses your library or engine, onto a device or emulator running Android 17 Beta 4 via Google Play or other means. Work through all your app's flows and look for functional or UI issues. Each release of Android contains platform changes that improve privacy, security, and overall user experience; review the behavior changes affecting apps running on and targeting Android 17 to focus your testing, including the following:

  • Resizability on large screens: Once you target Android 17, you can no longer opt out of maintaining orientation, resizability and aspect ratio constraints on large screens.
  • Dynamic code loading: If your app targets Android 17 or higher, the Safer Dynamic Code Loading (DCL) protection introduced in Android 14 for DEX and JAR files now extends to native libraries. All native files loaded using System.load() must be marked as read-only. Otherwise, the system throws UnsatisfiedLinkError.
  • Enable CT by default: Certificate transparency (CT) is enabled by default. (On Android 16, CT is available but apps had to opt in.)
  • Local network protections: Apps targeting Android 17 or higher have local network access blocked by default. Switch to using privacy preserving pickers if possible, and use the new ACCESS_LOCAL_NETWORK permission for broad, persistent access.
  • Background audio hardening: Starting in Android 17, the audio framework enforces restrictions on background audio interactions, including audio playback, audio focus requests, and volume change APIs. Based on your feedback, we've made some changes since Beta 2, including gating while-in-use FGS enforcement on target SDK and exempting alarm audio. Full details are available in the updated guidance.
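To illustrate the dynamic code loading change above, here is a minimal sketch of marking a native library read-only before loading it. `NativeLibGuard` and `prepareNativeLib` are hypothetical helper names, not framework APIs:

```java
import java.io.File;

// Hedged sketch: under Android 17's Safer Dynamic Code Loading rules,
// native files loaded with System.load() must be read-only, otherwise
// the system throws UnsatisfiedLinkError.
class NativeLibGuard {
    static File prepareNativeLib(String path) {
        File lib = new File(path);
        if (lib.canWrite()) {
            lib.setReadOnly(); // mark writable copies read-only before loading
        }
        return lib;
    }
}

// At load time (on-device):
// System.load(NativeLibGuard.prepareNativeLib(libPath).getAbsolutePath());
```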

App memory limits

Android is introducing app memory limits based on the device's total RAM to create a more stable and deterministic environment for your applications and Android users. In Android 17, limits are set conservatively to establish system baselines, targeting extreme memory leaks and other outliers before they trigger system-wide instability resulting in UI stuttering, higher battery drain, and apps being killed. While we anticipate minimal impact on the vast majority of app sessions, we recommend following memory best practices, including establishing a memory baseline.

In the current implementation, getDescription in ApplicationExitInfo will contain the string "MemoryLimiter" if your app was impacted. You can also use trigger-based profiling with TRIGGER_TYPE_ANOMALY to get heap dumps that are collected when the memory limit is hit.
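A small sketch of how you might classify an exit record using that marker string. The "MemoryLimiter" value comes from this post; on-device, the description would come from ApplicationExitInfo.getDescription(), retrieved via ActivityManager.getHistoricalProcessExitReasons(). `ExitReasonCheck` is a hypothetical helper name:

```java
// Hedged sketch: detect whether the Android 17 memory limiter
// terminated the app, based on the exit record's description string.
class ExitReasonCheck {
    static boolean wasMemoryLimited(String exitDescription) {
        return exitDescription != null && exitDescription.contains("MemoryLimiter");
    }
}
```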

The LeakCanary task in the Android Studio Profiler

To help you find memory leaks, Android Studio Panda adds LeakCanary integration directly in the Android Studio Profiler as a dedicated task, contextualized within the IDE and fully integrated with your source code.

A lighter memory footprint translates directly to smoother performance, longer battery life, and a premium experience across all form factors. Let's build a faster, more reliable future for the Android ecosystem together!

Profiling triggers for app anomalies

Android introduces an on-device anomaly detection service that monitors for resource-intensive behaviors and potential compatibility regressions. Integrated with ProfilingManager, this service allows your app to receive profiling artifacts triggered by specific system-detected events.

Use the TRIGGER_TYPE_ANOMALY trigger to detect system performance issues such as excessive binder calls and excessive memory usage. When an app breaches OS-defined memory limits, the anomaly trigger allows developers to receive app-specific heap dumps to help identify and fix memory issues. Additionally, for excessive binder spam, the anomaly trigger provides a stack sampling profile on binder transactions.

This API callback occurs prior to any system-imposed enforcement. For example, it can help developers collect debug data before the app is terminated by the system for exceeding memory limits. To understand how to use the trigger, check out our documentation on trigger-based profiling.

val profilingManager = applicationContext.getSystemService(ProfilingManager::class.java)
val triggers = ArrayList<ProfilingTrigger>()
triggers.add(
    ProfilingTrigger.Builder(ProfilingTrigger.TRIGGER_TYPE_ANOMALY)
        .build()
)
val mainExecutor: Executor = Executors.newSingleThreadExecutor()
val resultCallback = Consumer<ProfilingResult> { profilingResult ->
    if (profilingResult.errorCode == ProfilingResult.ERROR_NONE) {
        // Upload the profiling result to a server for further analysis.
        setupProfileUploadWorker(profilingResult.resultFilePath)
    }
}
profilingManager.registerForAllProfilingResults(mainExecutor, resultCallback)
profilingManager.addProfilingTriggers(triggers)

Post-Quantum Cryptography (PQC) in Android Keystore

Android Keystore added support for the NIST-standardized ML-DSA (Module-Lattice-Based Digital Signature Algorithm). On supported devices, you can generate ML-DSA keys and use them to produce quantum-safe signatures, entirely in the device's secure hardware. Android Keystore exposes the ML-DSA-65 and ML-DSA-87 algorithm variants through the standard Java Cryptographic Architecture APIs: KeyPairGenerator, KeyFactory, and Signature. For further details, see our developer documentation.

KeyPairGenerator generator = KeyPairGenerator.getInstance(
        "ML-DSA-65", "AndroidKeyStore");
generator.initialize(
        new KeyGenParameterSpec.Builder(
                "my-key-alias",
                KeyProperties.PURPOSE_SIGN | KeyProperties.PURPOSE_VERIFY)
        .build());
KeyPair keyPair = generator.generateKeyPair();
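Once the key pair exists, the post notes that signing goes through the standard JCA Signature API. A hedged sketch follows; the exact algorithm string passed to Signature.getInstance is an assumption here, so check the developer documentation for the canonical name:

```java
// Hedged sketch: produce a quantum-safe signature with the ML-DSA key
// generated above, entirely in the device's secure hardware.
Signature signer = Signature.getInstance("ML-DSA");
signer.initSign(keyPair.getPrivate());
signer.update(message); // message is the byte[] payload to sign
byte[] mlDsaSignature = signer.sign();
```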

Get started with Android 17

You can enroll any supported Pixel device to get this and future Android Beta updates over-the-air. If you don't have a Pixel device, you can use the 64-bit system images with the Android Emulator in Android Studio.

If you are currently in the Android Beta program, you will be offered an over-the-air update to Beta 4. Continue to report issues and submit feature requests on the feedback page. The earlier we get your feedback, the more we can include in our work on the final release.

For the best development experience with Android 17, we recommend that you use the latest preview of Android Studio (Panda). Once you're set up, here are some of the things you should do:

  • Compile against the new SDK, test in CI environments, and report any issues in our tracker on the feedback page.
  • Test your current app for compatibility, learn whether your app is affected by changes in Android 17, and install your app onto a device or emulator running Android 17 and extensively test it.

We'll update the preview/beta system images and SDK regularly throughout the Android 17 release cycle. Once you've installed a beta build, you'll automatically get future updates over-the-air for all later previews and Betas. For complete information, visit the Android 17 developer site.

Join the conversation

Your feedback remains our most valuable asset. Whether you're an early adopter on the Canary channel or an app developer testing on Beta 4, consider joining our communities and filing feedback. We're listening.

16 Apr 2026 8:00pm GMT

Android CLI and skills: Build Android apps 3x faster using any agent

Hours CLI Dark Meta Card

Posted by Adarsh Fernando, Group Product Manager and Esteban de la Canal, Senior Staff Software Engineer






As Android developers, you have many choices when it comes to the agents, tools, and LLMs you use for app development. Whether you are using Gemini in Android Studio, Gemini CLI, Antigravity, or third-party agents like Claude Code or Codex, our mission is to ensure that high-quality Android development is possible everywhere.

Today, we are introducing a new suite of Android tools and resources for agentic workflows - Android CLI with Android skills and the Android Knowledge Base. This collection of tools is designed to eliminate the guesswork of core Android development workflows when you direct an agent's work outside of Android Studio, making your agents more efficient, effective, and capable of following the latest recommended patterns and best practices.

Whether you are just starting your development journey on Android, are a seasoned Android developer, or managing apps across mobile and web platforms, building your apps with the latest guidance, tools, and AI-assistance is easier than ever. No matter which environment you begin with these resources, you can always transition your development experience to Android Studio-where the state-of-the-art tools and agents for Android development are available to help your app experience truly shine.

(Re)Introducing the Android CLI

Your agents perform best when they have a lightweight, programmatic interface to interact with the Android SDK and development environment. So, at the heart of this new workflow is a revitalized Android CLI. The new Android CLI serves as the primary interface for Android development from the terminal, featuring commands for environment setup, project creation, and device management-with more modern capabilities and easy updatability in mind.

The create command makes an Android app project in seconds.

In our internal experiments, Android CLI improved project and environment setup by reducing LLM token usage by more than 70%, and tasks were completed 3X faster than when agents attempted to navigate these tasks using only the standard toolsets.

Key capabilities available to you include:

  • SDK management: Use android sdk install to download only the specific components needed, ensuring a lean development environment.
  • Snappy project creation: The android create command generates new projects from official templates, ensuring the recommended architecture and best practices are applied from the very first line of code.
  • Rapid device creation and deployment: Create and manage virtual devices with android emulator and deploy apps using android run, eliminating the guesswork involved in manual build and deploy cycles.
  • Updatability: Run android update to ensure that you have the latest capabilities available.
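Taken together, the capabilities above form a simple terminal flow. The command names come from this post; any arguments are illustrative assumptions, and the preview CLI may differ:

```
android update            # get the latest CLI capabilities
android sdk install       # install only the SDK components you need
android create            # generate a project from an official template
android emulator          # create and manage a virtual device
android run               # build and deploy the app to the device
```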

Android CLI can create a device, run your app on it, and make it easier for agents to navigate UI.

While Android CLI will empower your agentic development flows, it's also been designed to streamline CI, maintenance, and any other scripted automation for the increasingly distributed nature of Android development. Download and try out the Android CLI today!

Grounding LLMs with official Android Skills

Traditional documentation can be descriptive, conceptual, and high-level. While perfect for learning, LLMs often require precise, actionable instructions to execute complex workflows without using outdated patterns and libraries.

To bridge this gap, we are launching the Android skills GitHub repository. Skills are modular, markdown-based (SKILL.md) instruction sets that provide a technical specification for a task and are designed to trigger automatically when your prompt matches the skill's metadata, saving you the hassle of manually attaching documentation to every prompt.

Android skills cover some of the most common workflows that Android developers and LLMs may struggle with-they help models better understand and execute specific patterns that follow our best practices and guidance on Android development.

In our initial release, the repository includes skills like:

  • Navigation 3 setup and migration.
  • Implementing edge-to-edge support.
  • AGP 9 and XML-to-Compose migrations.
  • R8 config analysis, and more!

If you're using Android CLI, you can browse and set up your agent workflow with our growing collection of skills using the android skills command. These skills can also live alongside any other skills you create, or third-party skills created by the Android developer community. Learn more about getting started with Android skills.

Install Android skills via the Android CLI to make your agent more effective and efficient.

The latest guidance via the Android Knowledge Base

The third component we are launching today is the Android Knowledge Base. Accessible through the android docs command and already available in the latest version of Android Studio, this specialized data source enables agents to search and fetch the latest authoritative developer guidelines to use as relevant context.

The Android Knowledge Base ensures agents have the latest context, guidance, and best practices for Android.

By accessing the frequently updated knowledge base, agents can ground their responses in the most recent information from Android developer docs, Firebase, Google Developers, and Kotlin docs. This ensures that even if an LLM's training cutoff is a year old, it can still provide guidance on the latest frameworks and patterns we recommend today.

Android Studio: The ultimate destination for premium apps

In addition to empowering developers and agents to handle project setup and boilerplate code, we've also designed these new tools and resources to make it easier to transition to Android Studio. That means you can start a prototype quickly with an agent using Android CLI and then open the project in Android Studio to fine-tune your UI with visual tools for code editing, UI design, deep debugging, and advanced profiling that scale with the growing capabilities of your app.



And when it is time to build a high-quality app for large-scale publication across various device types, our agent in Android Studio is here to help, while leveraging the latest development best practices and libraries. Beyond the powerful Agent and Planning Modes for active development, we have introduced an AI-powered New Project flow, which provides an entry point for rapidly prototyping your next great idea for Android.



These built-in agents make it simple to extend your app ideas across phones, foldables, tablets, Wear OS, Android Auto, and Android TV. Equipped with full context of your project's source code and a comprehensive suite of debugging, profiling, and emulation tools, you have an end-to-end, AI-accelerated toolkit at your disposal.

Get started today

Android CLI is available in preview today, along with a growing set of Android skills and knowledge for agents. To get started, head over to d.android.com/tools/agents to download Android CLI.


16 Apr 2026 5:00pm GMT

15 Apr 2026

feedAndroid Developers Blog

Boosting user privacy and business protection with updated Play policies

Posted by Bennet Manuel, Group Product Manager, App & Ecosystem Trust



We strive to make Google Play the safest and most trusted experience possible. Today, we're announcing a new set of policy updates and an account transfer feature to boost user privacy and protect your business from fraud. By providing better features for users and easy-to-integrate tools for you, we're making it simpler to build safer apps so you can focus on creating great experiences.

We're also expanding our features to help you manage new contact and location policy changes, so you have a smoother, more predictable app review experience. By October, Play policy insights in Android Studio can help you proactively identify if your app should use these new features and guide you on the exact steps to take. Additionally, new pre-review checks in the Play Console will be available starting October 27 to flag potential contacts or location permissions policy issues so you can fix them before you submit your app for review.

Here is what is new and how you can prepare.

Contact Picker: A privacy-friendly way to access contacts



Android is introducing the Android Contact Picker as the new standard for accessing contact information (e.g., for invites, sharing, or one-time lookups). This picker lets users share only the specific contacts they want to, helping build trust and protect privacy. Alongside this tool, we are updating our policy to require that all applicable apps use the picker, or other privacy-focused alternatives like Sharesheet, as the primary way to access users' contacts. READ_CONTACTS will be reserved for apps that can't function without it.
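As one illustrative route, the long-standing system picker intent already follows this share-only-what-the-user-chose model. The dedicated Android Contact Picker described above may expose its own API, so treat the following as a sketch rather than the new mechanism:

```java
// Hedged sketch: launch the system contact picker so the user shares
// exactly one contact; REQUEST_PICK_CONTACT is an arbitrary request code.
Intent pick = new Intent(Intent.ACTION_PICK, ContactsContract.Contacts.CONTENT_URI);
startActivityForResult(pick, REQUEST_PICK_CONTACT);
```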

What you'll need to do

Location button: More privacy-friendly way to access location



Android is introducing a new, streamlined location button to make requesting precise data easier for one-time actions, like finding a store or tagging a photo. This feature replaces complex permission dialogs with a single tap, helping users make clearer choices about how much information they share and for how long. We're updating our policy to require apps to use this button for one-time precise location access unless they require persistent, always-on location access. This creates a faster, more predictable experience for your users and reduces the friction of traditional permission requests.

What you'll need to do

Account Transfer: Protecting your business

You asked for a secure way to transfer app ownership during business changes, and we listened. We're launching an official account transfer feature directly in Play Console that's designed to help you easily transfer ownership during sales and mergers while also protecting your business from fraud. Starting May 27, account ownership changes must use this official feature. That means that unofficial transfers (like sharing login credentials or buying and selling accounts on third-party marketplaces) which leave your business vulnerable are not permitted.

What you'll need to do

What's next

We want to give you plenty of time to review these changes and update your apps. For more information, deadlines, and the full list of Google Play policy updates we're announcing today, please visit the Policy Announcements page.

Thank you for your partnership in keeping Play safe for everyone.

15 Apr 2026 5:00pm GMT

14 Apr 2026

feedAndroid Developers Blog

Get ready for Google I/O: Livestream schedule revealed

Posted by The Google I/O team

Google I/O Banner

The Google I/O schedule is here! Tune in May 19-20 as we unveil Google's biggest updates across AI, Android, Chrome, and Cloud. Discover new tools and features designed to unlock the future of development with agentic coding.

We're kicking things off with the Google keynote at 10:00 am PT on May 19, followed by the Developer keynote at 1:30 pm PT. Block your calendars for two days of live sessions, straight from Mountain View, full of announcements, live demos, and new professional development sessions.

Here's a sneak peek at what we'll cover:

  • The agentic era of development: Discover how the next evolution of our developer tools is transforming the way you write software. Learn how to seamlessly transition from rapid ideation to orchestrating powerful, autonomous workflows, allowing AI to handle the heavy lifting while you focus on the big picture.
  • Enabling Android development anywhere: Learn how we are making AI even more helpful for your app workflows. From initial prototyping to final native polish, explore the latest ways we're making it easier and faster to build high quality Android experiences.
  • Building powerful, agentic web applications: The web is accelerating faster than ever, and we are equipping you for what's next. Discover new tools to build agent-ready web applications, automate complex debugging workflows, and ship highly interactive UI directly in the browser.

Join us online May 19-20, followed by a fresh drop of on-demand sessions and codelabs on May 21. Register today to explore the full program and catch all the latest developer updates, featuring sessions like:

14 Apr 2026 12:30pm GMT

13 Apr 2026

feedAndroid Developers Blog

Test Multi-Device Interactions with the Android Emulator

Posted by Steven Jenkins, Product Manager, Android Studio










Testing multi-device interactions is now easier than ever with the Android Emulator. Whether you are building a multiplayer game, extending your mobile application across form factors, or launching virtual devices that require a device connection, the Android Emulator now natively supports these developer experiences.

Previously, interconnecting multiple Android Virtual Devices (AVDs) caused significant friction. It required manually managing complex port forwarding rules just to get two emulators to connect.

Now you can take advantage of a new networking stack for the Android Emulator which brings zero-configuration peer-to-peer connectivity across all your AVDs.

Interconnecting emulator instances

The new networking stack for the Android Emulator transforms how emulators communicate. Previously, each virtual device operated on its own local area network (LAN), effectively isolating it from other AVDs. The new Wi-Fi network stack changes this by creating a shared virtual network backplane that bridges all running instances on the same host machine.

Key Benefits:

  • Zero-configuration: No more manual port forwarding or scripting adb commands. AVDs on the same host appear on the same virtual network.
  • Peer-to-peer connectivity: Critical protocols like Wi-Fi Direct and Network Service Discovery (NSD) work out of the box between emulators.
  • Improved stability: Resolves long-standing stability issues, such as data loss and connection drops found in the legacy stack.
  • Cross-platform consistency: Works the same across Windows, macOS and Linux.

Use Cases

The enhanced emulator networking supports a wide range of multi-device development scenarios:

  • Multi-device apps: Test file sharing, local multiplayer gaming, or control flows between a phone and another Android device.
  • Continuous Integration: Create robust, automated multi-device test pipelines without flaky network scripts.
  • Android XR & AI glasses: Easily test companion app pairing and data streaming between a phone and glasses within Android Studio.
  • Automotive & Wear OS: Validate connectivity flows between a mobile device and a vehicle head unit or smartwatch.



The new emulator networking stack allows multiple AVDs to share a virtual network,
enabling direct peer-to-peer communication with zero configuration.

Get Started

The new networking capability is enabled by default in the latest Android Emulator release (36.5), which is available via the Android Studio SDK Manager. Just update your emulator and launch multiple devices!

If you need to disable this feature or want to learn more, please refer to our documentation.

As always, we appreciate any feedback. If you find a bug or issue, please file an issue. You can also be part of our vibrant Android developer community on LinkedIn, Medium, YouTube, or X.


13 Apr 2026 1:00pm GMT

02 Apr 2026

feedAndroid Developers Blog

Increase Guidance and Control over Agent Mode with Android Studio Panda 3

Posted by Matt Dyor, Senior Product Manager


Android Studio Panda 3 is now stable and ready for you to use in production. This release gives you even more control and customization over your AI-powered workflows, making it easier than ever to build high-quality Android apps.

Whether you're bringing new capabilities to an existing app or standing up a brand new app, these updates elevate your development experience by allowing your AI Agent in Android Studio to learn your specific practices and giving you granular control over its permissions.

Lastly, in addition to agent skills and Agent Mode enhancements, Android Studio Panda 3 also includes updated support for building Android apps for cars.

Here's a deep dive into what's new:

Agent skills

Create a more helpful AI agent by using agent skills in Android Studio. Agent skills are specialized instructions that teach the agent new capabilities and best practices for a specific workflow, which the agent can then leverage as needed. This significantly reduces the level of detail required for your day-to-day prompts. Agent skills work with Gemini in Android Studio or with other remote 3rd party LLMs you integrate into the agent framework in Android Studio.

You and members of your team can create skills that tell the agent exactly how you want to handle specific tasks in your codebase. For example, you could create a custom "code review" skill tailored to your organization's coding standards, or a custom skill that provides the agent with more information on using an in-house library.

Once you have created a skill, the agent will be able to use it automatically, or you can manually trigger it by typing @ followed by the skill name. Check out the documentation to learn more about how to create skills for your codebase, or better yet-ask your agent to help you build a new skill and it will guide you through the details!

Manually Trigger Agent Skill in Android Studio

Getting Started

To build a skill for your project, do the following:

  • Create a .skills directory inside your project's root folder.
  • Place a SKILL.md file inside this new directory.
  • Add a name and description to the file to define your custom workflow, and your skill is ready.
  • Optionally include scripts, assets, and references to provide even more guidance to your agent.
Agent skills in Android Studio
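A minimal SKILL.md might look like the following. The post only specifies that the file needs a name and description, so the frontmatter layout and the skill body here are assumptions:

```markdown
---
name: code-review
description: Review Kotlin changes against our team's coding standards.
---

When asked to review code, check naming conventions, nullability
handling, and Compose state-hoisting practices before suggesting edits.
```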

Manage permissions for Agent Mode

You control your codebase, and you can now be more deliberate with which data and capabilities you choose to share with AI agents. The new granular agent permissions in Android Studio let you decide exactly what agents can do for you.

When Agent Mode needs to read files, run shell commands, or access the web, it explicitly asks for your permission. We know that 'approval fatigue' is a real risk in AI workflows-when a tool asks for permission too often, it's easy to start clicking 'Allow' without fully reviewing the action. By offering granular 'Always Allow' rules for trusted operations and an optional sandbox for experimental ones, Android Studio helps you stay focused on the high-stakes decisions that actually require your manual sign-off.

Agent Permissions

Agent permissions are intuitive to set up and use. For example, granting high-level permissions automatically authorizes related sub-tools, while commands you have previously approved will run automatically without interrupting your flow. Rest assured, accessing sensitive files like SSH keys will always require your explicit sign-off.

For even more security, you can also use an optional sandbox to enforce strict, isolated control over the agent.



Agent Shell Sandbox

Empty Car App Library App template

We're making it easier to build Android apps for cars. Building apps for the car used to mean wrestling with complex configurations just to get the project to build successfully.

Now, you can accelerate your development with the new "Empty Car App Library App" template in Android Studio. This template takes care of the required boilerplate code for a driving-optimized app on both Android Auto and Android Automotive OS, saving you significant time and effort. Instead of getting bogged down in setup, you can focus on creating the best experience for your users on the road.

Getting Started

To use the new template:

  • Select New Project on the Welcome to Android Studio screen (or File > New > New Project from within a project).
  • Search for or select the Empty Car App Library App template.
  • Name your app and click Finish to generate your driving-optimized app.



Empty Car App Library App template

Android Studio Panda releases

Panda 3 builds on last month's AI-focused Panda 2 release. Check out the Go from prompt to working prototype with Android Studio Panda 2 post to learn more about new Android Studio features, including the AI-powered New Project flow that takes you from prompt to prototype and the Version Upgrade Assistant that takes the toil out of updating your dependencies.

Get started

Dive in and accelerate your development. Download Android Studio Panda 3 and start exploring these powerful new agentic features today.

As always, your feedback is crucial to us. Check known issues, report bugs, and be part of our vibrant community on LinkedIn, Medium, YouTube, or X. Happy coding!

02 Apr 2026 2:00pm GMT

Gemma 4: The new standard for local agentic intelligence on Android

Posted by Matthew McCullough, VP of Product Management Android Development



Today, we are enhancing Android development with Gemma 4, our latest state-of-the-art open model designed with complex reasoning and autonomous tool-calling capabilities.

Our vision is to enable local agentic AI on Android across the entire software lifecycle, from development to production. Android supports a range of Gemma 4 models, from the most efficient ones running directly on-device in your apps to more powerful ones running on your development machine to help you build apps. We are bringing Gemma 4 to Android developers through two pillars:

  • Local-first agentic coding: Experience powerful, local AI code assistance with Gemma 4 in Android Studio on your development machine.
  • On-device intelligence: Build intelligent experiences using the ML Kit GenAI Prompt API to run Gemma 4 directly on Android device hardware.

Coding with Gemma 4 in Android Studio

When building Android apps, Android Studio can use Gemma 4 to leverage its state-of-the-art reasoning power and native support for tool use, while keeping the model and inference contained entirely on your local machine.

Gemma 4 was trained on Android development and designed with Agent Mode in mind. This means that when you select Gemma 4 as your local model, you can leverage the full suite of Agent Mode capabilities for a variety of Android development use cases, including refactoring legacy code, building an entire app or new features, and applying fixes iteratively.

Learn more about the possibilities Gemma 4 brings to your app development flow and how to get started.

Prototyping with Gemma 4 on-device

Since the introduction of Gemini Nano as the foundation model on Android, it has become available on over 140 million devices. Gemma 4 is the base model for the next generation of Gemini Nano (Gemini Nano 4) that is optimized for performance and quality on Android devices. This model is up to 4x faster than the previous version and uses up to 60% less battery.

To make it as easy as possible to preview and prototype with Gemma 4 E2B and E4B models directly on AICore-supported devices, we're launching the AICore Developer Preview. While we continue to expand the ML Kit GenAI Prompt API surface to unlock additional advanced capabilities of the model, you can already start exploring new use cases with Gemma 4 using the Prompt API.

Prepare your apps for the launch of Gemini Nano 4 on the new flagship Android devices later this year by prototyping with Gemma 4 today. Read about the upcoming features and deep dive into the AICore Developer Preview and its Gemma 4 support here.

Local agentic intelligence with Gemma 4

Running Gemma 4 locally, you can leverage its advanced reasoning and tool-calling capabilities across your entire workflow, from developing with the AI coding assistant in Android Studio to shipping intelligent features in your app with the ML Kit GenAI Prompt API. This local-first approach, available under Gemma's open Apache license, provides an alternative for developers to innovate in a privacy-centric and cost-effective manner. In a future release, we will update Android Bench to include Gemma 4 and other open models, providing the quantified data you need to navigate performance trade-offs and select the best model for your use case.

We can't wait to see what you build!

02 Apr 2026 2:00pm GMT

Announcing Gemma 4 in the AICore Developer Preview

Posted by David Chou, Product Manager and Caren Chang, Developer Relations Engineer



At Google, we're committed to bringing the most capable AI models directly to the Android devices in your pocket. Today, we're thrilled to announce the release of our latest state-of-the-art open model: Gemma 4.

These models are the foundation for the next generation of Gemini Nano, so code you write today for Gemma 4 will automatically work on Gemini Nano 4-enabled devices that will be available later this year. With Gemini Nano 4, you'll benefit from our additional performance optimizations so you can ship to production across the Android ecosystem with the most efficient on-device inference.

You can get early access to this model today through the AICore Developer Preview.

Select the Gemini Nano 4 Fast model in the Developer Preview UI
to see its blazing fast inference speed in action before you write any code

Because Gemma 4 natively supports over 140 languages, you can expect improved localized, multilingual experiences for your global audience. Furthermore, Gemma 4 offers industry-leading performance with multimodal understanding, allowing your apps to understand and process text, images, and audio. To give you the best balance of performance and efficiency, Gemma 4 on Android comes in two sizes: E2B and E4B.

The new model is up to 4x faster than previous versions and uses up to 60% less battery. Starting today, you can experiment with its improved capabilities.

Join the Developer Preview today to download these preview models and start building next-generation features right away.

Start building with Gemma 4

Start testing the model

You can try out the model without code by following the Developer Preview guide. If you want to jump straight into integrating these models with your existing workflow, we've made that seamless. Head over to Android Studio to refine your prompt and build with the familiar ML Kit Prompt API. We've introduced a new ability to specify a model, allowing you to target the E2B (fast) or E4B (full) variants for testing.

// Define the configuration with a specific track and preference
val previewFullConfig = generationConfig {
    modelConfig = ModelConfig {
        releaseTrack = ModelReleaseTrack.PREVIEW
        preference = ModelPreference.FULL
    }
}

// Initialize the GenerativeModel with the configuration
val previewModel = GenerativeModel.getClient(previewFullConfig)

// Verify that the specific preview model is available
val previewModelStatus = previewModel.checkStatus()
if (previewModelStatus == FeatureStatus.AVAILABLE) {
    // Proceed with inference
    val response = previewModel.generateContent("If I get 26 paychecks per year, how much should I contribute from each paycheck to reach my savings goal of $10k over the course of a year? Return only the amount.")

} else {
    // Handle the case where the preview model is not available
    // (e.g., print out log statements)
}

What to expect during the Developer Preview

The goal of this Developer Preview is to give you a head start on refining prompt accuracy and exploring new use cases for your specific apps.

We will be making several updates throughout the preview period, including support for tool calling, structured output, system prompts, and thinking mode in Prompt API, making it easier to take full advantage of the new capabilities and significant performance optimizations in Gemma 4.

The preview models are available for testing on AICore-enabled devices. These models will run on the latest generation of specialized AI accelerators from Google, MediaTek, and Qualcomm Technologies. On other devices, the models will initially run on a CPU implementation that is not representative of final production performance. If your device is not AICore-enabled, you can also test these models via the AI Edge Gallery app. We'll provide support for more devices in the future.

How to get started

Ready to see what Gemma 4 can do for your users?

  1. Opt-in: Sign up for the AICore Developer Preview.
  2. Download: Once opted in, you can trigger the download of the latest Gemma 4 models directly to your supported test device.
  3. Build: Update your ML Kit implementation to target the new models and start building in Android Studio.

02 Apr 2026 2:00pm GMT

Android Studio supports Gemma 4: our most capable local model for agentic coding

Posted by Matthew Warner, Google Product Manager


Every developer's AI workflow and needs are unique, and it's important to be able to choose how AI helps your development. In January, we introduced the ability to choose any local or remote AI model to power AI functionality in Android Studio, and today, we're announcing the availability of Gemma 4 for AI coding assistance in Android Studio. This new local model trained on Android development provides the best of both worlds: the privacy and cost-efficiency of on-device processing alongside state-of-the-art reasoning and tool-calling capabilities.

AI assistance, locally delivered

By running locally on your machine, Gemma 4 gives you AI code assistance that doesn't require an internet connection or an API key for its core operations. Key benefits include:

  • Privacy and security: Your code stays on your machine. Gemma 4 processes all Agent Mode requests locally, making it an ideal choice for developers working with data privacy requirements or in secure corporate environments.
  • Cost efficiency: Run complex agentic workflows without worrying about hitting quotas. Gemma 4 is optimized to run efficiently on modern development hardware, utilizing local GPU and RAM to provide snappy, responsive assistance.
  • Offline availability: Use the agent to write code even when you don't have an internet connection.
  • State-of-the-art reasoning: Gemma 4 delivers best-in-class reasoning, capable of complex multi-step coding tasks in Agent Mode.

Powerful agentic coding

Gemma 4 was trained for Android development with agentic tool calling capabilities. When you select Gemma 4 as your local model, you can leverage Agent Mode for a variety of development use cases, such as:

  • Designing new features: Developers can ask the agent to build a new feature or an entire app with commands like "build a calculator app", and the agent will not only generate the UI code but also follow Android best practices, like writing in Kotlin and using Jetpack Compose.
  • Refactoring: You can give high-level commands such as "Extract all hardcoded strings and migrate them to strings.xml." The agent will scan your codebase, identify instances requiring changes, and apply the edits across multiple files simultaneously.
  • Bug fixing and build resolution: If a project fails to build or has persistent lint errors, you can prompt the agent to "Build my project and fix any errors." The agent will navigate to the offending code and iteratively apply fixes until the build is successful.
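To make the string-extraction use case concrete, here is a much-simplified sketch of the kind of transformation involved. This is illustrative only, not the agent's actual implementation: it locates hardcoded double-quoted string literals in source text so they can be moved into strings.xml. Real tooling must additionally handle escapes, string templates, comments, and literals that should stay inline.

```kotlin
// Simplified sketch: find double-quoted string literals in a chunk of
// Kotlin source so they can be migrated to res/values/strings.xml.
// Deliberately ignores escapes, templates, and comments.
val stringLiteral = Regex("\"([^\"\\\\]*)\"")

fun hardcodedStrings(source: String): List<String> =
    stringLiteral.findAll(source).map { it.groupValues[1] }.toList()
```

Each extracted value would become a `<string name="...">` resource entry, with the call site switching to a `stringResource(...)` lookup.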

Recommended hardware requirements

The 26B MoE model is recommended for Android app developers whose machines meet the minimum hardware requirements below. The total RAM needed includes both Android Studio and Gemma.

Model           Total RAM needed   Storage needed
Gemma E2B       8 GB               2 GB
Gemma E4B       12 GB              4 GB
Gemma 26B MoE   24 GB              17 GB

Get started

To get started, ensure you have the latest version of Android Studio installed.
  1. Install an LLM provider, such as LM Studio or Ollama, on your local computer.
  2. In Settings > Tools > AI > Model Providers, add your LM Studio or Ollama instance.
  3. Download the Gemma 4 model from Ollama or LM Studio. Refer to hardware requirements for model size selection.
  4. In Agent Mode, select Gemma 4 as your active model.

For a detailed walkthrough on configuration, check out the official documentation on how to use a local model.

We are excited to see how Gemma 4 enables more private, secure, and powerful development workflows. As always, your feedback is essential as we continue to refine the AI experience in Android Studio. If you find a bug or issue, please file an issue. You can also be part of our vibrant Android developer community on LinkedIn, YouTube, or X. Happy coding!

02 Apr 2026 2:00pm GMT

01 Apr 2026

feedAndroid Developers Blog

Get your Wear OS apps ready for the 64-bit requirement

Posted by Michael Stillwell, Developer Relations Engineer and Dimitris Kosmidis, Product Manager, Wear OS


64-bit architectures provide performance improvements and a foundation for future innovation, delivering faster and richer experiences for your users. Android has supported 64-bit CPUs since Android 5, and this change aligns Wear OS with recent updates for Google TV and other form factors, building on the 64-bit requirement first introduced for mobile in 2019.

Today, we are extending this 64-bit requirement to Wear OS. This blog provides guidance to help you prepare your apps to meet these new requirements.

The 64-bit requirement: timeline for Wear OS developers

Starting September 15, 2026:

  • All new apps and app updates that include native code will be required to provide 64-bit versions in addition to 32-bit versions when publishing to Google Play.
  • Google Play will start blocking the upload of non-compliant apps to the Play Console.

We are not making changes to our policy on 32-bit support, and Google Play will continue to deliver apps to existing 32-bit devices.

The vast majority of Wear OS developers have already made this shift, with 64-bit compliant apps already available. For the remaining apps, we expect the effort to be small.

Preparing for the 64-bit requirement

Many apps are written entirely in non-native code (i.e. Kotlin or Java) and do not need any code changes. However, it is important to note that even if you do not write native code yourself, a dependency or SDK could be introducing it into your app, so you still need to check whether your app includes native code.

Assess your app

  • Inspect your APK or app bundle for native code using the APK Analyzer in Android Studio.
  • Look for .so files within the lib folder. For ARM devices, 32-bit libraries are located in lib/armeabi-v7a, while the 64-bit equivalent is lib/arm64-v8a.
  • Ensure parity: The goal is to ensure that your app runs correctly in a 64-bit-only environment. While specific configurations may vary, for most apps this means that for each native 32-bit architecture you support, you should include the corresponding 64-bit architecture by providing the relevant .so files for both ABIs.
  • Upgrade SDKs: If you only have 32-bit versions of a third-party library or SDK, reach out to the provider for a 64-bit compliant version.
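Because an APK is a ZIP archive, the parity check described above can be automated in a few lines. A minimal sketch (the helper name is our own, not an official tool; adjust for any other 32-bit ABIs you ship):

```kotlin
import java.util.zip.ZipFile

// Sketch: list the 32-bit ARM native libraries in an APK that have no
// 64-bit (arm64-v8a) counterpart. An empty result means ABI parity.
fun missing64BitLibs(apkPath: String): List<String> {
    val names = ZipFile(apkPath).use { zip ->
        zip.entries().asSequence().map { it.name }.toList()
    }
    // Collect bare .so file names under a given ABI directory.
    fun libsUnder(abi: String) = names
        .filter { it.startsWith("lib/$abi/") && it.endsWith(".so") }
        .map { it.substringAfterLast('/') }
        .toSet()
    return (libsUnder("armeabi-v7a") - libsUnder("arm64-v8a")).sorted()
}
```

Running this against your release APK should return an empty list; any names it reports are libraries still missing 64-bit builds.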

How to test 64-bit compatibility

The 64-bit version of your app should offer the same quality and feature set as the 32-bit version. The Wear OS Android Emulator can be used to verify that your app behaves and performs as expected in a 64-bit environment.

Note: Since Wear OS apps are required to target Wear OS 4 or higher to be submitted to Google Play, you are likely already testing on these newer, 64-bit-only images.

When testing, pay attention to native code loaders such as SoLoader or older versions of OpenSSL, which may require updates to function correctly on 64-bit-only hardware.

Next steps

We are announcing this requirement now to give developers a six-month window to bring their apps into compliance before enforcement begins in September 2026. For more detailed guidance on the transition, please refer to our in-depth documentation on supporting 64-bit architectures.

This transition marks an exciting step for the future of Wear OS and the benefits that 64-bit compatibility will bring to the ecosystem.

01 Apr 2026 8:00pm GMT

30 Mar 2026

feedAndroid Developers Blog

Media3 1.10 is out

Posted by Andrew Lewis, Software Engineer



Media3 1.10 is out!

Media3 1.10 includes new features, bug fixes and feature improvements, including Material3-based playback widgets, expanded format support in ExoPlayer and improved speed adjustment when exporting media with Transformer. Read on to find out more, and check out the full release notes for a comprehensive list of changes.

Playback UI and Compose

We are continuing to expand the media3-ui-compose-material3 module to help you build Compose UIs for playback.

We've added a new Player Composable that combines a ContentFrame with customizable playback controls, giving you an out-of-the-box player widget with a modern UI.

This release also adds a ProgressSlider Composable for displaying player progress and performing seeks using dragging and tapping gestures. For playback speed management, a new PlaybackSpeedControl is available in the base media3-ui-compose module, alongside a styled PlaybackSpeedToggleButton in the Material 3 module.

We'll continue working on new additions like track selection utils, subtitle support and more customization options in the upcoming Media3 releases. We're eager to hear your feedback so please share your thoughts on the project issue tracker.


Player Composable in the Media3 Compose demo app

Playback feature enhancements

Media3 1.10 includes a variety of additions and improvements across the playback modules:
  • Format support: ExoPlayer now supports extracting Dolby Vision Profile 10 and Versatile Video Coding (VVC) tracks in MP4 containers, and we've introduced MPEG-H UI manager support in the decoder_mpegh extension. The IAMF extension now seamlessly supports binaural output, either through the decoder via iamf_tools or through the Android OS Spatializer, with new logic to match the output layout of the speakers.

  • Ad playback: Reliability improvements, improved HLS interstitial support for X-PLAYOUT-LIMIT and X-SNAP, and, with the latest IMA SDK dependency, control over whether ad click-through URLs open in custom tabs using setEnableCustomTabs.

  • HLS: ExoPlayer now allows location fallback upon encountering load errors if redundant streams from different locations are available.
  • Session: MediaSessionService now extends LifecycleService, allowing apps to access the lifecycle scoping of the service.

One of our key focus areas this year is on playback efficiency and performance. Media3 1.10 includes experimental support for scheduling the core playback loop in a more efficient way. You can try this out by enabling experimentalSetDynamicSchedulingEnabled() via the ExoPlayer.Builder. We plan to make further improvements in future releases so stay tuned!

Media editing and Transformer

For developers building media editing experiences, we've made speed adjustments more robust. EditedMediaItem.Builder.setFrameRate() can now set a maximum output frame rate for video. This is particularly helpful for controlling output size and maintaining performance when increasing media speed with setSpeed().

New modules for frame extraction and applying Lottie effects

In this release we've split some functionality into new modules to reduce the scope of some dependencies:

  • FrameExtractor has been removed from the main media3-inspector module, so please migrate your code to use the new media3-inspector-frame module and update your imports to androidx.media3.inspector.frame.FrameExtractor.

  • We have also moved the LottieOverlay effect to a separate media3-effect-lottie module. As a reminder, this gives you a straightforward way to apply vector-based Lottie animations directly to video frames.

Please get in touch via the issue tracker if you run into any bugs, or if you have questions or feature requests. We look forward to hearing from you!

30 Mar 2026 11:00pm GMT

Monzo boosts performance metrics by up to 35% with a simple R8 update

Posted by Ben Weiss, Senior Developer Relations Engineer
Monzo App Performance

Monzo is a UK digital bank with 15 million customers and growing. As the app scaled, the engineering team identified app startup time as a critical area for improvement but worried it would require significant changes to their codebase.

By fully enabling R8 optimizations, Monzo achieved a massive 35% reduction in their Application Not Responding (ANR) rate. This simple change proved that impactful optimizations don't always require complex engineering efforts.

Unlocking broad performance wins with R8 full mode

Monzo identified R8 full mode as an easy fix worth trying, and it worked, improving performance across the board:

  • Startup Reliability: Cold starts improved by 30%, Warm starts by 24%, and Hot starts by 14%.
  • Launch Speed: P50 launch times improved by 11% and P90 launch times by 12%.
  • Efficiency: Overall app size was reduced by 9%.
  • Stability: ANR reduction of 35%.

Enabling optimizations with a single change

Many Android apps use an outdated default configuration file which disables most functionality of the R8 optimizer. The main change Monzo made to unlock these performance improvements was to replace the proguard-android.txt default file with proguard-android-optimize.txt. This change removes the -dontoptimize instruction and allows R8 to properly do its job.

buildTypes {
  release {
    isMinifyEnabled = true
    isShrinkResources = true
    proguardFiles(
      getDefaultProguardFile("proguard-android-optimize.txt"),
    )
  }
}

After making this change, it's worth looking at your Keep configuration files. These files tell R8 which parts of your code to leave alone (usually because they're called dynamically or by external libraries). Tidying up unnecessary Keep rules means R8 can do more.
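For example, a well-documented Keep rule in proguard-rules.pro might look like the following (the class names and the reflection scenario here are hypothetical; keep only the rules your app genuinely needs):

```
# Keep: these model classes are deserialized via reflection, so their
# field names must survive R8 renaming. Safe to remove once the app
# migrates to a non-reflective serializer.
-keepclassmembers class com.example.app.model.** {
    <fields>;
}
```

Rules like this are exactly what to revisit after enabling full mode: each one narrows what R8 is allowed to optimize.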

Improving scroll performance with Baseline Profiles

To further enhance the user experience, Monzo implemented Baseline Profiles, specifically targeting scroll and rendering performance on their main feed. This strategy ensured that the most common user journeys, opening the app and scrolling the feed, were fully optimized. The impact on rendering was substantial: P90 scroll performance became 71% faster, and P95 scroll performance improved by 87%. Scrolling the app is now smoother than before.

Monzo built this into their release process to maintain these improvements over time. "We trigger the baseline profile generation every week day (before running our nightly builds) and commit the latest changes once completed," Neumayer explains.
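As a sketch of how such automation can be wired up (module names are illustrative, and the exact setup depends on your AGP and AndroidX Baseline Profiles plugin versions), the app module applies the Baseline Profiles plugin and depends on a profile-generator module:

```kotlin
// app/build.gradle.kts — a sketch assuming the AndroidX Baseline Profiles
// Gradle plugin; ":baselineprofile" is a hypothetical module name for the
// test module that holds the profile generator.
plugins {
    id("com.android.application")
    id("androidx.baselineprofile")
}

dependencies {
    // Hypothetical module containing the BaselineProfileRule generator test
    baselineProfile(project(":baselineprofile"))
}
```

A scheduled CI job can then run the plugin's profile-generation Gradle task and commit the refreshed profile, as Monzo does before its nightly builds.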

Keeping up with modern Android development

Monzo's experience shows what's possible when you stay up to date with Android build-tooling recommendations. While legacy apps often struggle with complex reflection usage, Monzo found the transition straightforward by documenting their Keep Rules properly. "We always add a comment explaining why Keep Rules are in place, so we know when it's safe to remove the rules," Neumayer notes.

Neumayer's advice for other teams? Regularly check your practices against current standards: "Take a look at the latest recommendations from Google around app performance and check if you're following all the latest advice."

To get started and learn more about R8, visit https://d.android.com/r8

30 Mar 2026 10:00pm GMT