02 Apr 2026

Android Developers Blog

Increase Guidance and Control over Agent Mode with Android Studio Panda 3

Posted by Matt Dyor, Senior Product Manager


Android Studio Panda 3 is now stable and ready for you to use in production. This release gives you even more control and customization over your AI-powered workflows, making it easier than ever to build high-quality Android apps.

Whether you're bringing new capabilities to an existing app or standing up a brand new app, these updates elevate your development experience by allowing your AI Agent in Android Studio to learn your specific practices and giving you granular control over its permissions.

Lastly, in addition to agent skills and Agent Mode enhancements, Android Studio Panda 3 also includes updated support for building Android apps for cars.

Here's a deep dive into what's new:

Agent skills

Create a more helpful AI agent by using agent skills in Android Studio. Agent skills are specialized instructions that teach the agent new capabilities and best practices for a specific workflow, which the agent can then leverage as needed. This significantly reduces the level of detail required for your day-to-day prompts. Agent skills work with Gemini in Android Studio or with other remote, third-party LLMs you integrate into Android Studio's agent framework.

You and members of your team can create skills that tell the agent exactly how you want to handle specific tasks in your codebase. For example, you could create a custom "code review" skill tailored to your organization's coding standards, or a custom skill to provide the agent with more information on using an in-house library.

Once you have created a skill, the agent will be able to use it automatically, or you can manually trigger it by typing @ followed by the skill name. Check out the documentation to learn more about how to create skills for your codebase, or better yet, ask your agent to help you build a new skill and it will guide you through the details!

Manually Trigger Agent Skill in Android Studio

Getting Started

To build a skill for your project, do the following:

  • Create a .skills directory inside your project's root folder.
  • Place a SKILL.md file inside this new directory.
  • Add a name and description to the file to define your custom workflow, and your skill is ready.
  • Optionally include scripts, assets, and references to provide even more guidance to your agent.
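As a sketch, a minimal SKILL.md for the "code review" example above might look like the following. The skill name, description, and rules are illustrative assumptions, not taken from the release notes; adapt them to your own standards:

```markdown
---
name: code-review
description: Reviews Kotlin changes against our team's coding standards.
---

When asked to review code in this project:

1. Check that new public APIs have KDoc comments.
2. Flag any use of `!!` (non-null assertion) and suggest a safe alternative.
3. Verify that coroutines are launched from a structured scope, not `GlobalScope`.
```

With a skill like this in place, typing @ followed by the skill name (here, @code-review) in the agent chat triggers it manually.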
Agent skills in Android Studio

Manage permissions for Agent Mode

You control your codebase, and you can now be more deliberate with which data and capabilities you choose to share with AI agents. The new granular agent permissions in Android Studio let you decide exactly what agents can do for you.

When Agent Mode needs to read files, run shell commands, or access the web, it explicitly asks for your permission. We know that 'approval fatigue' is a real risk in AI workflows: when a tool asks for permission too often, it's easy to start clicking 'Allow' without fully reviewing the action. By offering granular 'Always Allow' rules for trusted operations and an optional sandbox for experimental ones, Android Studio helps you stay focused on the high-stakes decisions that actually require your manual sign-off.

Agent Permissions

Agent permissions are intuitive to set up and use. For example, granting high-level permissions automatically authorizes related sub-tools, while commands you have previously approved will run automatically without interrupting your flow. Rest assured, accessing sensitive files like SSH keys will always require your explicit sign-off.

For even more security, you can also use an optional sandbox to enforce strict, isolated control over the agent.



Agent Shell Sandbox

Empty Car App Library App template

We're making it easier to build Android apps for cars. Building apps for the car used to mean wrestling with complex configurations just to get the project to build successfully.

Now, you can accelerate your development with the new "Empty Car App Library App" template in Android Studio. This template takes care of the required boilerplate code for a driving-optimized app on both Android Auto and Android Automotive OS, saving you significant time and effort. Instead of getting bogged down in setup, you can focus on creating the best experience for your users on the road.

Getting Started

To use the new template:

  • Select New Project on the Welcome to Android Studio screen (or File > New > New Project from within a project).
  • Search for or select the Empty Car App Library App template.
  • Name your app and click Finish to generate your driving-optimized app.
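For context on what the template handles for you, a Car App Library project centers on a CarAppService declared in the app manifest, roughly as follows. This is a hedged sketch based on the Car App Library conventions; the service name and app category here are illustrative, so check the generated project for the exact declaration:

```xml
<application>
    <!-- Minimum Car App API level the app supports -->
    <meta-data
        android:name="androidx.car.app.minCarApiLevel"
        android:value="1" />

    <!-- Entry point for both Android Auto and Android Automotive OS -->
    <service
        android:name=".MainCarAppService"
        android:exported="true">
        <intent-filter>
            <action android:name="androidx.car.app.CarAppService" />
            <category android:name="androidx.car.app.category.POI" />
        </intent-filter>
    </service>
</application>
```

The template generates this boilerplate, along with the matching service, session, and screen classes, so you can go straight to building your driving-optimized UI.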



Empty Car App Library App template

Android Studio Panda releases

Panda 3 builds off last month's AI-focused Panda 2 release. Check out the Go from prompt to working prototype with Android Studio Panda 2 post to learn more about new Android Studio features, including the AI-powered New Project Flow that takes you from prompt to prototype and the Version Upgrade Assistant that takes the toil out of updating your dependencies.

Get started

Dive in and accelerate your development. Download Android Studio Panda 3 and start exploring these powerful new agentic features today.

As always, your feedback is crucial to us. Check known issues, report bugs, and be part of our vibrant community on LinkedIn, Medium, YouTube, or X. Happy coding!

02 Apr 2026 2:00pm GMT

Gemma 4: The new standard for local agentic intelligence on Android

Posted by Matthew McCullough, VP of Product Management Android Development



Today, we are enhancing Android development with Gemma 4, our latest state-of-the-art open model designed with complex reasoning and autonomous tool-calling capabilities.

Our vision is to enable local agentic AI on Android across the entire software lifecycle, from development to production. Android supports a range of Gemma 4 models, from the most efficient ones running directly on-device in your apps to more powerful ones running on your development machine to help you build apps. We are bringing Gemma 4 to Android developers through two pillars:

  • Local-first agentic coding: Experience powerful, local AI code assistance with Gemma 4 in Android Studio on your development machine.
  • On-device intelligence: Build intelligent experiences using the ML Kit GenAI Prompt API to run Gemma 4 directly on Android device hardware.

Coding with Gemma 4 in Android Studio

When building Android apps, Android Studio can use Gemma 4 to leverage its state-of-the-art reasoning power and native support for tool use, while keeping the model and inference contained entirely on your local machine.

Gemma 4 was trained on Android development and designed with Agent Mode in mind. This means that when you select Gemma 4 as your local model, you can leverage the full suite of Agent Mode capabilities for a variety of Android development use cases, including refactoring legacy code, building an entire app or new features, and applying fixes iteratively.

Learn more about the possibilities Gemma 4 brings to your app development flow and how to get started.

Prototyping with Gemma 4 on-device

Since the introduction of Gemini Nano as the foundation model on Android, it has become available on over 140 million devices. Gemma 4 is the base model for the next generation of Gemini Nano (Gemini Nano 4) that is optimized for performance and quality on Android devices. This model is up to 4x faster than the previous version and uses up to 60% less battery.

To make it as easy as possible to preview and prototype with Gemma 4 E2B and E4B models directly on AICore-supported devices, we're launching the AICore Developer Preview. While we continue to expand the ML Kit GenAI Prompt API surface to unlock additional advanced capabilities of the model, you can already start exploring new use cases with Gemma 4 using the Prompt API.

Prepare your apps for the launch of Gemini Nano 4 on the new flagship Android devices later this year by prototyping with Gemma 4 today. Read about the upcoming features and deep dive into the AICore Developer Preview and its Gemma 4 support here.

Local agentic intelligence with Gemma 4

Running Gemma 4 locally, you can leverage its advanced reasoning and tool-calling capabilities in your entire workflow, from developing with the AI coding assistant in Android Studio to shipping intelligent features in your app with ML Kit GenAI Prompt API. This local-first approach, available under Gemma's open Apache license, provides an alternative for developers to innovate in a privacy-centric and cost effective manner. In a future release, we will update Android Bench to include Gemma 4 and other open models, providing the quantified data you need to navigate performance trade-offs and select the best model for your use case.

We can't wait to see what you build!

02 Apr 2026 2:00pm GMT

Announcing Gemma 4 in the AICore Developer Preview

Posted by David Chou, Product Manager and Caren Chang, Developer Relations Engineer



At Google, we're committed to bringing the most capable AI models directly to the Android devices in your pocket. Today, we're thrilled to announce the release of our latest state-of-the-art open model: Gemma 4.

These models are the foundation for the next generation of Gemini Nano, so code you write today for Gemma 4 will automatically work on Gemini Nano 4-enabled devices that will be available later this year. With Gemini Nano 4, you'll benefit from our additional performance optimizations so you can ship to production across the Android ecosystem with the most efficient on-device inference.

You can get early access to this model today through the AICore Developer Preview.

Select the Gemini Nano 4 Fast model in the Developer Preview UI to see its blazing fast inference speed in action before you write any code

Because Gemma 4 natively supports over 140 languages, you can expect improved localized, multilingual experiences for your global audience. Furthermore, Gemma 4 offers industry-leading performance with multimodal understanding, allowing your apps to understand and process text, images, and audio. To give you the best balance of performance and efficiency, Gemma 4 on Android comes in two sizes: E2B (fast) and E4B (full).

The new model is up to 4x faster than previous versions and uses up to 60% less battery. Starting today, you can experiment with its improved capabilities.

Join the Developer Preview today to download these preview models and start building next-generation features right away.

Start building with Gemma 4

Start testing the model

You can try out the model without code by following the Developer Preview guide. If you want to jump straight into integrating these models with your existing workflow, we've made that seamless. Head over to Android Studio to refine your prompt and build with the familiar ML Kit Prompt API. We've introduced a new ability to specify a model, allowing you to target the E2B (fast) or E4B (full) variants for testing.

// Define the configuration with a specific track and preference
val previewFullConfig = generationConfig {
    modelConfig = ModelConfig {
        releaseTrack = ModelReleaseTrack.PREVIEW
        preference = ModelPreference.FULL
    }
}

// Initialize the GenerativeModel with the configuration
val previewModel = GenerativeModel.getClient(previewFullConfig)

// Verify that the specific preview model is available
val previewModelStatus = previewModel.checkStatus()
if (previewModelStatus == FeatureStatus.AVAILABLE) {
    // Proceed with inference
    val response = previewModel.generateContent("If I get 26 paychecks per year, how much should I contribute per paycheck to reach my savings goal of $10k over the course of a year? Return only the amount.")

} else {
    // Handle the case where the preview model is not available
    // (e.g., print out log statements)
}

What to expect during the Developer Preview

The goal of this Developer Preview is to give you a head start on refining prompt accuracy and exploring new use cases for your specific apps.

We will be making several updates throughout the preview period, including support for tool calling, structured output, system prompts, and thinking mode in Prompt API, making it easier to take full advantage of the new capabilities and significant performance optimizations in Gemma 4.

The preview models are available for testing on AICore-enabled devices. These models will run on the latest generation of specialized AI accelerators from Google, MediaTek, and Qualcomm Technologies. On other devices, the models will initially run on a CPU implementation that is not representative of final production performance. If your device is not AICore-enabled, you can also test these models via the AI Edge Gallery app. We'll provide support for more devices in the future.

How to get started

Ready to see what Gemma 4 can do for your users?

  1. Opt-in: Sign up for the AICore Developer Preview.
  2. Download: Once opted in, you can trigger the download of the latest Gemma 4 models directly to your supported test device.
  3. Build: Update your ML Kit implementation to target the new models and start building in Android Studio.

02 Apr 2026 2:00pm GMT