22 Apr 2026
Android Developers Blog
What's new in the Jetpack Compose April '26 release
Today, the Jetpack Compose April '26 release is stable. This release contains version 1.11 of core Compose modules (see the full BOM mapping), shared element debug tools, trackpad events, and more. We also have a few experimental APIs that we'd love you to try out and give us feedback on.
To use today's release, upgrade your Compose BOM version to:
implementation(platform("androidx.compose:compose-bom:2026.04.01"))
Changes in Compose 1.11.0
Coroutine execution in tests
We're introducing a major update to how Compose handles test timing. Following the opt-in period announced in Compose 1.10, the v2 testing APIs are now the default, and the v1 APIs have been deprecated. The key change is a shift in the default test dispatcher. While the v1 APIs relied on UnconfinedTestDispatcher, which executed coroutines immediately, the v2 APIs use the StandardTestDispatcher. This means that when a coroutine is launched in your tests, it is now queued and does not execute until the virtual clock is advanced.
This better mimics production conditions, effectively flushing out race conditions and making your test suite significantly more robust and less flaky.
To ensure your tests align with standard coroutine behavior and to avoid future compatibility issues, we strongly recommend migrating your test suite. Check out our comprehensive migration guide for API mappings and common fixes.
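To see why the dispatcher change matters, here is a minimal, dependency-free sketch of the two scheduling behaviors. VirtualClockDispatcher is a hypothetical stand-in for illustration, not a Compose or kotlinx.coroutines API: in eager mode it runs tasks immediately (like UnconfinedTestDispatcher did), otherwise it queues them until the virtual clock is advanced (like StandardTestDispatcher does).

```kotlin
// Illustrative only: a hypothetical dispatcher that mimics the difference
// between UnconfinedTestDispatcher (eager) and StandardTestDispatcher (queued).
class VirtualClockDispatcher(private val eager: Boolean) {
    private val queue = ArrayDeque<() -> Unit>()

    // Eager mode runs the task immediately; queued mode defers it.
    fun launch(task: () -> Unit) {
        if (eager) task() else queue.add(task)
    }

    // Runs everything that was queued, like advancing the virtual clock.
    fun advanceUntilIdle() {
        while (queue.isNotEmpty()) queue.removeFirst().invoke()
    }
}
```

With the queued behavior, a value written inside launch is not visible until advanceUntilIdle() runs, which is exactly the timing difference your migrated tests need to account for.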
Shared element improvements and animation tooling
We've also added some handy visual debugging tools for shared elements and Modifier.animatedBounds. You can now see exactly what's happening under the hood, like target bounds, animation trajectories, and how many matches are found, making it much easier to spot why a transition might not be behaving as expected. To use the new tooling, simply surround your SharedTransitionLayout with the LookaheadAnimationVisualDebugging composable.
LookaheadAnimationVisualDebugging(
    overlayColor = Color(0x4AE91E63),
    isEnabled = true,
    multipleMatchesColor = Color.Green,
    isShowKeylabelEnabled = false,
    unmatchedElementColor = Color.Red,
) {
    SharedTransitionLayout {
        CompositionLocalProvider(
            LocalSharedTransitionScope provides this,
        ) {
            // your content
        }
    }
}
Trackpad events
We've revamped Compose support for trackpads, like built-in laptop trackpads, attachable trackpads for tablets, or external/virtual trackpads. Basic trackpad events are now generally considered PointerType.Mouse events, aligning mouse and trackpad behavior to better match user expectations. Previously, these trackpad events were interpreted as simulated touchscreen fingers of PointerType.Touch, which led to confusing user experiences. For example, clicking and dragging with a trackpad would scroll instead of selecting. With the new pointer type in the latest release of Compose, clicking and dragging with a trackpad no longer scrolls.
We also added support for more complicated trackpad gestures as recognized by the platform since API 34, including two finger swipes and pinches. These gestures are automatically recognized by components like Modifier.scrollable and Modifier.transformable to have better behavior with trackpads.
These changes improve behavior for trackpads across built-in components, with redundant touch slop removed, a more intuitive drag-and-drop starting gesture, double-click and triple-click selection in text fields, and desktop-styled context menus in text fields.
To test trackpad behavior, we've added new testing APIs such as performTrackpadInput, which let you validate how your app behaves when used with a trackpad. If you have custom gesture detectors, validate behavior across input types, including touchscreens, mice, trackpads, and styluses, and ensure support for mouse scroll wheels and trackpad gestures.
Composition host defaults (Compose runtime)
We introduced HostDefaultProvider, LocalHostDefaultProvider, HostDefaultKey, and ViewTreeHostDefaultKey to supply host-level services directly through compose-runtime. This removes the need for libraries to depend on compose-ui for lookups, better supporting Kotlin Multiplatform. To link these values to the composition tree, library authors can use compositionLocalWithHostDefaultOf to create a CompositionLocal that resolves defaults from the host.
Preview wrappers
Custom previews are a new Android Studio feature that lets you define exactly how the contents of a Compose preview are displayed.
By implementing the PreviewWrapper interface and applying the new @PreviewWrapperProvider annotation, you can easily inject custom logic, such as applying a specific theme. The annotation can be applied to a function annotated with @Composable and @Preview or @MultiPreview, offering a generic, easy-to-use solution that works across preview features and significantly reduces repetitive code.
class ThemeWrapper : PreviewWrapper {
    @Composable
    override fun Wrap(content: @Composable (() -> Unit)) {
        JetsnackTheme {
            content()
        }
    }
}

@PreviewWrapperProvider(ThemeWrapper::class)
@Preview
@Composable
private fun ButtonPreview() {
    // JetsnackTheme in effect
    Button(onClick = {}) {
        Text(text = "Demo")
    }
}
Deprecations and removals
- As announced in the Compose 1.10 blog post, we're deprecating Modifier.onFirstVisible(). Its name often led to misconceptions, particularly within lazy layouts, where it would trigger multiple times during scrolling. We recommend migrating to Modifier.onVisibilityChanged(), which allows for more precise manual tracking of visibility states tailored to your use case.
- The ComposeFoundationFlags.isTextFieldDpadNavigationEnabled flag was removed because D-pad navigation for TextFields is now always enabled by default. The new behavior ensures that D-pad events from a gamepad or a TV remote first move the cursor in the given direction. Focus can move to another element only when the cursor reaches the end of the text.
Upcoming APIs
In the upcoming Compose 1.12.0 release, the compileSdk will be upgraded to 37, with AGP 9, and all apps and libraries that depend on Compose will inherit this requirement. We recommend keeping up to date with the latest released versions, as Compose aims to promptly adopt new compileSdks to provide access to the latest Android features. Be sure to check out the documentation here for more information on which version of AGP is supported for different API levels.
In Compose 1.11.0, the following APIs are introduced as @Experimental, and we look forward to hearing your feedback as you explore them in your apps. Note that @Experimental APIs are provided for early evaluation and feedback and may undergo significant changes or removal in future releases.
Styles (Experimental)
We are introducing a new experimental foundation API for styling. The Style API is a new paradigm for customizing visual elements of components, which has traditionally been performed with modifiers. It is designed to unlock deeper, easier customization by exposing a standard set of styleable properties with simple state-based styling and animated transitions. With this new API, we're already seeing promising performance benefits. We plan to adopt Styles in Material components once the Style API stabilizes.
A basic example of overriding a pressed state style background:
@Composable
fun LoginButton(modifier: Modifier = Modifier) {
    Button(
        onClick = {
            // Login logic
        },
        modifier = modifier,
        style = {
            background(
                Brush.linearGradient(
                    listOf(lightPurple, lightBlue)
                )
            )
            width(75.dp)
            height(50.dp)
            textAlign(TextAlign.Center)
            externalPadding(16.dp)
            pressed {
                background(
                    Brush.linearGradient(
                        listOf(Color.Magenta, Color.Red)
                    )
                )
            }
        }
    ) {
        Text(
            text = "Login",
        )
    }
}
MediaQuery (Experimental)
The new mediaQuery API provides a declarative and performant way to adapt your UI to its environment. It abstracts complex information retrieval into simple conditions within a UiMediaScope, ensuring recomposition only happens when needed.
With support for a wide range of environmental signals, from device capabilities like keyboard types and pointer precision to contextual states like window size and posture, you can build deeply responsive experiences. Performance is baked in with derivedMediaQuery to handle high-frequency updates, while the ability to override scopes makes testing and previews seamless across hardware configurations.
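The performance idea behind a derived query can be illustrated with a plain-Kotlin sketch. DerivedQuery is a hypothetical name for illustration, not the real API: the point is that consumers are notified only when the derived condition changes, not on every raw update.

```kotlin
// Illustrative only: notify consumers when a derived condition flips,
// rather than on every raw environment update (hypothetical DerivedQuery).
class DerivedQuery<T, R>(initial: T, private val derive: (T) -> R) {
    var value: R = derive(initial)
        private set
    var notifications = 0
        private set

    // Recompute the derived value; count a notification only on change.
    fun update(raw: T) {
        val next = derive(raw)
        if (next != value) {
            value = next
            notifications++
        }
    }
}
```

For example, a stream of window-width updates that only once crosses a 600dp threshold produces a single notification, no matter how many raw updates arrive.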
Previously, getting access to certain device properties, like whether a device is in tabletop mode, required writing a lot of boilerplate:
@Composable
fun isTabletopPosture(
    context: Context = LocalContext.current
): Boolean {
    val windowLayoutInfo by
        WindowInfoTracker
            .getOrCreate(context)
            .windowLayoutInfo(context)
            .collectAsStateWithLifecycle(null)
    // The state is null until the first WindowLayoutInfo arrives.
    return windowLayoutInfo?.displayFeatures?.any { displayFeature ->
        displayFeature is FoldingFeature &&
            displayFeature.state == FoldingFeature.State.HALF_OPENED &&
            displayFeature.orientation == FoldingFeature.Orientation.HORIZONTAL
    } == true
}

@Composable
fun VideoPlayer() {
    if (isTabletopPosture()) {
        TabletopLayout()
    } else {
        FlatLayout()
    }
}
Now, with the mediaQuery API, you can query device properties, such as whether a device is in tabletop mode:
@OptIn(ExperimentalMediaQueryApi::class)
@Composable
fun VideoPlayer() {
    if (mediaQuery { windowPosture == UiMediaScope.Posture.Tabletop }) {
        TabletopLayout()
    } else {
        FlatLayout()
    }
}
Check out the documentation and file any bugs here.
Grid (Experimental)
Grid is a powerful new API for building complex, two-dimensional layouts in Jetpack Compose. While Row and Column are great for linear designs, Grid gives you the structural control needed for screen-level architecture and intricate components without the overhead of a scrollable list.
Grid allows you to define your layout using tracks, gaps, and cells, offering familiar sizing options like Dp, percentages, intrinsic content sizes, and flexible "Fr" units.
@OptIn(ExperimentalGridApi::class)
@Composable
fun GridExample() {
    Grid(
        config = {
            repeat(4) { column(0.25f) }
            repeat(2) { row(0.5f) }
            gap(16.dp)
        }
    ) {
        Card1(modifier = Modifier.gridItem(rowSpan = 2))
        Card2(modifier = Modifier.gridItem(columnSpan = 3))
        Card3(modifier = Modifier.gridItem(columnSpan = 2))
        Card4()
    }
}
You can place items automatically or explicitly span them across multiple rows and columns for precision. Best of all, it's highly adaptive: you can dynamically reconfigure your grid tracks and spans to respond to device states like tabletop mode or orientation changes, ensuring your UI looks great across form factors.
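By analogy with CSS Grid (an assumption on our part; the release notes don't spell out the algorithm), fixed tracks and gaps are typically resolved first, and the remaining space is split in proportion to each track's Fr value. A plain-Kotlin sketch, with illustrative Track, Fixed, and Fr names:

```kotlin
// Illustrative sketch of "Fr" track resolution, by analogy with CSS Grid.
// Track, Fixed, Fr, and resolveTracks are hypothetical names, not the real API.
sealed interface Track
data class Fixed(val width: Float) : Track
data class Fr(val fraction: Float) : Track

fun resolveTracks(containerWidth: Float, gap: Float, tracks: List<Track>): List<Float> {
    // Gaps sit between tracks, so there are (n - 1) of them.
    val gapTotal = gap * (tracks.size - 1).coerceAtLeast(0)
    val fixedTotal = tracks.filterIsInstance<Fixed>().map { it.width }.sum()
    // Free space left over after fixed tracks and gaps are subtracted.
    val free = (containerWidth - gapTotal - fixedTotal).coerceAtLeast(0f)
    val frTotal = tracks.filterIsInstance<Fr>().map { it.fraction }.sum()
    return tracks.map { track ->
        when (track) {
            is Fixed -> track.width
            is Fr -> if (frTotal > 0f) free * (track.fraction / frTotal) else 0f
        }
    }
}
```

For a 400dp container with a 10dp gap and tracks [Fixed(100), Fr(1), Fr(2)], the fixed track keeps 100dp and the 280dp of free space splits 1:2 between the Fr tracks.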
Check out the documentation and file any bugs here.
FlexBox (Experimental)
FlexBox is a layout container designed for high performance, adaptive UIs. It manages item sizing and space distribution based on available container dimensions. It handles complex tasks like wrapping (wrap) and multi-axis alignment of items (justifyContent, alignItems, alignContent). It allows items to grow (grow) or shrink (shrink) to fill the container.
@OptIn(ExperimentalFlexBoxApi::class)
@Composable
fun FlexBoxWrapping() {
    FlexBox(
        config = {
            wrap(FlexWrap.Wrap)
            gap(8.dp)
        }
    ) {
        RedRoundedBox()
        BlueRoundedBox()
        GreenRoundedBox(modifier = Modifier.width(350.dp).flex { grow(1.0f) })
        OrangeRoundedBox(modifier = Modifier.width(200.dp).flex { grow(0.7f) })
        PinkRoundedBox(modifier = Modifier.width(200.dp).flex { grow(0.3f) })
    }
}
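The grow factors above distribute leftover container width in proportion to each item's factor, mirroring CSS flexbox semantics (our assumption about FlexBox's model). A plain-Kotlin sketch with an illustrative distributeGrow helper:

```kotlin
// Illustrative sketch of flex `grow` distribution, by analogy with CSS flexbox.
// distributeGrow is a hypothetical helper, not part of the FlexBox API.
fun distributeGrow(
    containerWidth: Float,
    baseWidths: List<Float>,
    grow: List<Float>,
): List<Float> {
    // Free space left after laying out items at their base widths.
    val leftover = containerWidth - baseWidths.sum()
    val totalGrow = grow.sum()
    if (leftover <= 0f || totalGrow <= 0f) return baseWidths
    // Each item receives leftover space proportional to its grow factor.
    return baseWidths.zip(grow) { width, factor ->
        width + leftover * (factor / totalGrow)
    }
}
```

In an 800dp-wide line, the 350dp, 200dp, and 200dp boxes above leave 50dp of free space, which grow factors 1.0, 0.7, and 0.3 split as 25dp, 17.5dp, and 7.5dp.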
New SlotTable implementation (Experimental)
We've introduced a new implementation of the SlotTable, which is disabled by default in this release. SlotTable is the internal data structure that the Compose runtime uses to track the state of your composition hierarchy, invalidations and recompositions, remembered values, and other composition metadata at runtime. This new implementation is designed to improve performance, primarily around random edits.
To try the new SlotTable, enable ComposeRuntimeFlags.isLinkBufferComposerEnabled.
Start coding today!
With so many exciting new APIs in Jetpack Compose, and many more coming up, there's never been a better time to migrate to Jetpack Compose. As always, we value your feedback and feature requests (especially on @Experimental features that are still baking) - please file them here. Happy composing!
22 Apr 2026 11:00pm GMT
Streamline User Journeys with Verified Email via Credential Manager
In the modern digital landscape, the first encounter a user has with an app is often the most critical. Yet, for decades, this initial interaction has been hindered by the friction of traditional verification methods. Today, we're excited to announce a new verified email credential issued by Google, which developers can now retrieve directly from Android's Credential Manager Digital Credential API.
The Problem: Authentication Friction in the Modern Era
The "current era" of authentication is defined by a trade-off between security and convenience. To ensure that a user owns the email address they provide, you typically rely on One-Time Passwords (OTPs) or "magic links" sent by email or SMS.
While effective, these traditional steps introduce significant hurdles:
- Context switching: Users must leave the app, open their inbox or messaging app, find the code, and return, a process where many potential users simply drop off.
- Delivery issues: Even though emails are free to send, they can be delayed or diverted to spam folders.
- Onboarding friction: Every extra second spent in the "verification loop" is a second where a user might lose interest, directly impacting conversion rates.
The Solution: Seamless, Verified Email
Google now issues a cryptographically verified email credential directly to Android devices. This verified email credential is delivered through the Credential Manager API, which is Android's implementation of the W3C's Digital Credential API standard.
For users, this completely removes the need to manually verify their email through external channels. For developers, the API securely delivers these verified user claims for any scenario, whether you are building an account creation flow, a recovery process, or a high-risk step-up authentication.
While this specific verified email address is sourced securely from the user's Google Account on their device, the underlying Digital Credentials API is issuer-agnostic. This fosters an open ecosystem, allowing any holder of a digital credential with an email claim to offer that verification to your app.
User Experience
The beauty of this API lies in its simplicity for the end user. Instead of hunting for OTP codes, the experience is integrated directly into the Android OS:
- Initiation: The process begins when a user focuses on an email input field or taps a "Sign up" or "Recover account" button. You can also initiate the process on page load.
- Transparency: A native Android bottom sheet appears, clearly detailing exactly what data is being requested (for example, the user's verified email address).
- One-tap consent: The user simply taps "Agree and continue" to share the data.
- Immediate progress: Once consent is given, the app receives the data instantly. For sign-up or account recovery flows, you can then seamlessly transition the user into passkey creation, ensuring:
- Users do not have to enter any information manually, unlike with traditional username/password registration.
- Their next login is even faster and more secure.
Use case 1. Sign up
Accelerate onboarding by fetching a verified email the moment the user taps "Sign up". We strongly recommend you pair the verified email retrieval with passkey creation, also part of the Credential Manager API:
Note: You can also fetch other unverified fields, such as a user's given name, family name, full name, and profile picture, as well as the hosted domain connected with the verified email.
Use case 2. Account recovery
Eliminate the frustration of users hunting for recovery codes in their spam folders by allowing them to recover their account using the verified email securely stored on their device.
Use case 3. Re-authentication for sensitive actions
Protect sensitive user actions, such as changing settings or updating profile details, by requiring a quick re-authentication step. Instead of an OTP, you can provide a low-friction verification using the device's verified email.
Important Considerations
As you design your authentication architecture around the Digital Credentials API, keep the following details in mind:
- Account support: For the specific email credential issued by Google, only regular consumer Google Accounts are supported (Workspace and supervised accounts are currently not supported). Keep in mind that the Credential Manager API itself is issuer-agnostic, meaning other identity providers can issue credentials with their own account support policies.
- Other user data: Beyond email, you can request the user's given name, family name, full name, and profile picture. However, note that only the email is verified by Google.
- Auto-verify your @gmail.com accounts: The API provides verified emails for all consumer Google Accounts. We recommend auto-verifying @gmail.com users and routing custom domains to your existing verification flow, for example, an OTP flow. This ensures you maintain long-term access for external domains not directly managed by Google.
- Complementary to Sign in with Google: While both the new verified email credential and the Sign in with Google API provide a verified email, the choice depends on the intended user experience:
- Use Sign in with Google when your users want to create a federated login session.
- Use Verified Email when your users want to sign in traditionally with a username/password or passkey but want to auto-verify the email address without the manual chore of an OTP.
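The auto-verification recommendation above can be sketched as a small decision helper. VerificationRoute and routeForEmail are illustrative names for your own app logic, not part of the Credential Manager API:

```kotlin
// Illustrative routing logic for the recommendation above: auto-verify
// consumer @gmail.com addresses backed by a verified credential, and fall
// back to an existing OTP flow for other domains. Hypothetical names.
enum class VerificationRoute { AUTO_VERIFIED, OTP_FALLBACK }

fun routeForEmail(email: String, hasVerifiedCredential: Boolean): VerificationRoute =
    if (hasVerifiedCredential && email.endsWith("@gmail.com", ignoreCase = true)) {
        VerificationRoute.AUTO_VERIFIED
    } else {
        // Custom domains (or missing credentials) go through your OTP flow.
        VerificationRoute.OTP_FALLBACK
    }
```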
Conclusion and Next steps
By integrating the new verified email via Credential Manager API, you can drastically reduce onboarding friction and provide users with a more streamlined, secure authentication journey. This represents a shift toward a future where "verification" is no longer a manual chore for the user, but a seamless, integrated part of the native mobile experience.
Ready to see how this fits into your own app? To get started, update your project to the latest Credential Manager API and explore our Integration Guide. We encourage you to explore how this streamlined verification can simplify your critical user journeys from optimizing account creation, to enhancing re-authentication flows.
22 Apr 2026 8:00pm GMT
21 Apr 2026
Level up your development with Planning Mode and Next Edit Prediction in Android Studio Panda 4

Android Studio Panda 4 is now stable and ready for you to use in production. This release brings Planning Mode, Next Edit Prediction, and more, making it easier than ever to build high-quality Android apps.
Here's a deep dive into what's new:
Planning Mode
Before the Agent starts working on complex tasks for you, it would be helpful if it could come up with a detailed plan. Jumping straight into a large coding project without a design often leads to technical debt or logic errors; the same is true for AI. That's why we're adding Planning Mode. In this mode, the Agent comes up with a detailed project plan before executing tasks. Instead of a single pass where the model directly predicts the next token of code, Planning Mode facilitates a multi-stage reasoning process, giving the agent additional space to evaluate its own proposed logic for potential issues before presenting it to you. This is especially useful for complex, long-running tasks which demand a high degree of architectural precision.
To use Planning Mode, switch your conversation mode to "Planning" in the agent input box and enter your prompt.
Next Edit Prediction
Classic autocomplete is great for finishing your sentences, but coding is rarely a linear path. Often, a change in one place requires a secondary change elsewhere, like adding a new parameter to a function and then needing to update its invocations, or a UI preview update when a Composable is changed. Traditionally, this meant breaking your focus to hunt down the related lines of code that need attention. Next Edit Prediction (NEP) evolves code completion by anticipating your next move, even when it's not at your current cursor position. By analyzing your recent edits, Android Studio recognizes the logical pattern of your workflow. If you modify a data class or update a constructor, NEP can suggest the next relevant edit, perhaps in a distant function, allowing you to jump straight to the fix.
Instead of manually navigating back and forth, you can accept these multi-location suggestions with a single keystroke. This keeps you deep in the "flow state," reducing the cognitive load of routine updates and letting you focus on the complex logic that truly matters to your application. Experience a more intuitive, non-linear way to code in the latest version of Android Studio.
Gemini API Starter Template
Adding powerful AI features to your app just got easier: introducing the Gemini API Starter Template for Android Studio! Integrating generative AI into your Android application used to mean managing complex backend plumbing and worrying about API key security. With the new Gemini API Starter template in Android Studio, developers can now jump straight into building features rather than spending time configuring infrastructure.
Key benefits include:
- Zero API key management: Stop worrying about provisioning or rotating keys. By leveraging Firebase AI Logic, the template eliminates the need to embed sensitive credentials in your client-side code.
- Automated Firebase integration: The backend plumbing is handled for you. The template automatically connects your project to Firebase services, ensuring a secure bridge between your app and Google's Gemini models.
- Built to scale: This isn't just for prototypes. The production-ready architecture allows you to scale from a local test to a global user base without redesigning your foundation.
- Multimodal processing: Supports text, image, video, and audio inputs. You can build features like real-time image analysis, video summarization, and audio transcription.
Get Started
- Open Android Studio.
- Navigate to File > New > New Project.
- Select the Gemini API Starter template from the gallery.
Agent Web Search
When you're deep in development, the right answer is often just a search away - but leaving your IDE to find it can snap you out of your flow. Whether you need the exact version number for a dependency or the latest API changes for a third-party library, the Agent Web Search tool is here to help without you ever having to leave Android Studio. While Android Studio's agent already leverages the Android Knowledge Base for official documentation, modern Android development relies on a vast ecosystem of external libraries. Agent Web Search expands Gemini's reach, allowing it to query Google directly to fetch current reference material from across the web. From checking the latest setup guides for Coil to finding advanced configuration tips for Koin or Moshi, the agent can now pull in the most up-to-date information in real time.
The Agent Web Search tool is designed to be helpful but unobtrusive; it will automatically trigger a web search when it identifies a gap in its local knowledge. You can also take the wheel by asking it to find something specific-simply include "search the web for..." in your prompt. By integrating live web results directly into your workspace, Agent Web Search ensures you're always building with the most current data available, speeding up your workflow and keeping your project on the cutting edge.
Android Studio Panda Releases
- AI-powered New Project flow: Allows you to build a working app prototype with a single prompt. The agent manages initial setup, navigation configuration, and proper dependencies, and features an autonomous generation loop to handle build errors and deploy to an emulator.
- Version Upgrade Assistant: Automates dependency management and updates, iteratively attempting builds and resolving conflicts until a stable configuration is found.
- Agent skills: Specialized, user-defined instructions (stored in a .skills directory) that teach the AI agent project-specific capabilities, coding standards, or library usage.
- Agent permissions: Provides fine-grained control over what agents can do, with features like "Always Allow" rules for trusted operations. For even more security, you can also use an optional sandbox to enforce strict, isolated control over the agent.
- Empty Car App Library App template: Simplifies building driving-optimized apps for Android Auto and Android Automotive OS by handling required boilerplate code.
Get started
As always, your feedback is crucial to us. Check known issues, report bugs, and be part of our vibrant community on LinkedIn, Medium, YouTube, or X. Happy coding!
21 Apr 2026 2:00pm GMT