21 Nov 2025
Android Developers Blog
Fully Optimized: Wrapping up Performance Spotlight Week
Posted by Ben Weiss, Senior Developer Relations Engineer and Sara Hamilton, Product Manager
We spent the past week sharing best practices and guidance that help make Android apps faster, smaller, and more stable. From the foundational power of the R8 optimizer and Profile Guided Optimization, to performance improvements with Jetpack Compose, to a new guide on leveling up your app's performance, we've covered the low-effort, high-impact tools you need to build a performant app.
This post serves as your index and roadmap to revisit these resources whenever you need to optimize. Here are the five key takeaways from our journey together.
Use the R8 optimizer to speed up your app
The single most impactful, low-effort change you can make is fully enabling the R8 optimizer. It doesn't just reduce app size; it performs deep, whole-program optimizations to fundamentally rewrite your code for efficiency. Revisit your Keep Rules and get R8 back into your engineering tasks.
Our newly updated and expanded documentation on the R8 optimizer is here to help.
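If you haven't enabled it yet, the change is usually a small edit to your release build type. Here's a minimal sketch of a release configuration with R8 enabled, assuming the usual module layout and default file names (adjust to your project; R8 full mode is the default on recent AGP versions):

```kotlin
// app/build.gradle.kts - a minimal sketch, not a complete build file
android {
    buildTypes {
        release {
            // Enables R8 shrinking, obfuscation, and whole-program optimization
            isMinifyEnabled = true
            // Removes unused resources after code shrinking
            isShrinkResources = true
            proguardFiles(
                getDefaultProguardFile("proguard-android-optimize.txt"),
                "proguard-rules.pro" // your app-specific Keep Rules live here
            )
        }
    }
}
```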
Reddit observed a 40% faster cold startup and 30% fewer ANR errors after enabling R8 full mode.
You can read the full case study on our blog.
Engineers at Disney+ invest in app performance and are optimizing the app's user experience. Sometimes even seemingly small changes can make a huge impact. While inspecting their R8 configuration, the team found that the -dontoptimize flag was being used. After enabling optimizations by removing this flag, the Disney+ team saw significant improvements in their app's performance.
So next time someone asks you what you could do to improve app performance, just link them to this post.
Read more in our Day 1 blog: Use R8 to shrink, optimize, and fast-track your app
Guiding you to better performance
Baseline Profiles effectively remove the need for Just-in-Time compilation, improving startup speed, scrolling, animation, and overall rendering performance. Startup Profiles make app startup even more lightweight by bringing an intelligent order to your app's classes.dex files.
And to learn more about just how important Baseline Profiles are for app performance, read Meta's engineering blog where they shared how Baseline Profiles improved various critical performance metrics by up to 40% across their apps.
We continue to make Jetpack Compose more performant for you in Jetpack Compose 1.10. Features like pausable composition and the customizable cache window are crucial for maintaining zero scroll jank when dealing with complex list items. Take a look at the latest episode of #TheAndroidShow where we explain this in more detail.
Read more in Wednesday's blog: Deeper Performance Considerations
Measuring performance can be as easy as 1, 2, 3
You can't manage what you don't measure. Our Performance Leveling Guide breaks down your measurement journey into five steps, starting with easily available data and building up to advanced local tooling.
Starting at level 1, we'll teach you how to use readily available data from Android Vitals, which provides you with field data on ANRs, crashes, and excessive battery usage.
We'll also teach you how to level up. For example, we'll demonstrate how to reach level 3 with local performance testing using Jetpack Macrobenchmark and the new UiAutomator 2.4 API to accurately measure and verify any change in your app's performance.
Read more in Thursday's blog
Debugging performance just got an upgrade
Advanced optimization shouldn't mean unreadable crash reports. New features are designed to help you confidently debug R8 and background work:
Automatic Logcat Retrace
Starting in Android Studio Narwhal, stack traces can automatically be de-obfuscated in the Logcat window. This way you can immediately see and debug any crashes in a production-ready build.
Narrow Keep Rules
On Tuesday we demystified the Keep Rules needed to fix runtime crashes, emphasizing writing specific, member-level rules over overly-broad wildcards. And because it's an important topic, we made you a video as well.
And with the new lint check for wide Keep Rules, the Android Studio Otter 3 Feature Drop has you covered here as well.
We also released new guidance on testing and troubleshooting your R8 configuration to help you get the configuration right with confidence.
Read more in Tuesday's blog: Configure and troubleshoot R8 Keep Rules
Background Work
We shared guidance on debugging common scenarios you may encounter when scheduling tasks with WorkManager.
Background Task Inspector gives you a visual representation and graph view of WorkManager tasks, helping you debug why scheduled work is delayed or fails. And our refreshed Background Work documentation landing page highlights task-specific APIs that are optimized for particular use cases, helping you achieve more reliable execution.
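As a quick refresher, here's a hedged sketch of scheduling deferrable, constrained work with WorkManager so the system can run it at an efficient time. The SyncWorker class and the "daily-sync" unique work name are illustrative placeholders:

```kotlin
import android.content.Context
import androidx.work.Constraints
import androidx.work.ExistingWorkPolicy
import androidx.work.NetworkType
import androidx.work.OneTimeWorkRequestBuilder
import androidx.work.WorkManager

// A minimal sketch: enqueue unique, constrained background work.
// SyncWorker is a hypothetical CoroutineWorker defined elsewhere in your app.
fun scheduleSync(context: Context) {
    val request = OneTimeWorkRequestBuilder<SyncWorker>()
        .setConstraints(
            Constraints.Builder()
                .setRequiredNetworkType(NetworkType.CONNECTED) // wait for a network connection
                .setRequiresBatteryNotLow(true)                // avoid running when battery is low
                .build()
        )
        .build()

    WorkManager.getInstance(context)
        .enqueueUniqueWork("daily-sync", ExistingWorkPolicy.KEEP, request)
}
```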
Read more in Wednesday's blog: Background work performance considerations
Performance optimization is an ongoing journey
If you successfully took our challenge to enable R8 full mode this week, your next step is to integrate performance into your product roadmap using the App Performance Score. This standardized framework helps you find the highest leverage action items for continuous improvement.
We capped off the week with the #AskAndroid Live Q&A session, where engineers answered your toughest questions on R8, Profile Guided Optimizations, and more. If you missed it, look for the replay!
21 Nov 2025 5:00pm GMT
20 Nov 2025
Android Developers Blog
Leveling Guide for your Performance Journey

Posted by Alice Yuan - Senior Developer Relations Engineer

Welcome to day 4 of Performance Spotlight Week. Now that you've learned about some of the awesome tools and best practices we've introduced recently such as the R8 Optimizer, and Profile Guided Optimization with Baseline Profiles and Startup Profiles, you might be wondering where to start your performance improvement journey.
We've come up with a step-by-step performance leveling guide to meet your mobile development team where it is, whether you're an app with a single developer looking to get started with performance, or you have an entire team dedicated to improving Android performance.
The performance leveling guide features 5 levels. We'll start with level 1, which introduces performance tooling that requires minimal adoption effort, and we'll go up to level 5, which is ideal for apps that have the resourcing to maintain a bespoke performance framework.
Feel free to jump to the level that resonates most with you:
Level 1: Use Play Console provided field monitoring
We recommend first leveraging Android vitals within the Play Console for viewing automatically collected field monitoring data, giving you insights about your application with minimal effort.
Android vitals is Google's initiative to automatically collect and surface this field data for you.
Here's an explanation of how we deliver this data:
- Collect Data: When a user opts in, their Android device automatically logs key performance and stability events from all apps, including yours.
- Aggregate Data: Google Play collects and anonymizes this data from your app's users.
- Surface Insights: The data is presented to you in the Android vitals dashboard within your Google Play Console.
The Android vitals dashboard tracks many metrics, but a few are designated as Core Vitals. These are the most important because they can affect your app's visibility and ranking on the Google Play Store.
The Core Vitals
Google Play's core technical quality metrics: to maximize visibility on Google Play, keep your app below the bad behavior thresholds for these metrics.

| Metric | Description |
|---|---|
| User-perceived crash rate | The percentage of daily active users who experienced at least one crash that is likely to have been noticeable |
| User-perceived ANR rate | The percentage of daily active users who experienced at least one ANR that is likely to have been noticeable |
| Excessive battery usage | The percentage of watch face sessions where battery usage exceeds 4.44% per hour |
| Excessive partial wake locks (new) | The percentage of user sessions where cumulative, non-exempt wake lock usage exceeds 2 hours |
The core vitals include user-perceived crash rate, ANR rate, excessive battery usage and the newly introduced metric on excessive partial wake locks.
User-Perceived ANR Rate
You can use the Android vitals ANR dashboard to see stack traces of issues that occur in the field, along with insights and recommendations on how to fix them.

You can drill down into a specific ANR to see the stack trace as well as insights into what might be causing the issue.
Also, check out our ANR guidance to help you diagnose and fix the common scenarios where ANRs might occur.
User-Perceived Crash Rate
Use the Android vitals crash dashboard to further debug crashes and view a sample of stack traces that occur within your app.
Our documentation also has guidance around troubleshooting specific crashes. For example, the Troubleshoot foreground services guide discusses ways to identify and fix common scenarios where crashes occur.
Excessive Battery Usage
To decrease watch face sessions with excessive battery usage on Wear OS, check out the Wear guide on how to improve and conserve battery.
[new] Excessive Partial Wake Locks
We recently announced that apps that exceed the excessive partial wake locks threshold may see additional treatment starting on March 1st 2026.
For mobile devices, the Android vitals metric applies to non-exempted wake locks acquired while the screen is off and the app is in the background or running a foreground service. Android vitals considers partial wake lock usage excessive if wake locks are held for at least two hours within a 24-hour period and it affects more than 5% of your app's sessions, averaged over 28 days.
To debug and fix excessive wake lock issues, check out our technical blog post.
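As a general pattern (a hedged sketch, not taken from that post), hold a partial wake lock only for as long as the work requires and give it a timeout as a safety net. The tag and uploadPendingData() are hypothetical placeholders:

```kotlin
import android.content.Context
import android.os.PowerManager

// A minimal sketch: acquire a partial wake lock with a timeout and always release it.
fun uploadWithWakeLock(context: Context) {
    val powerManager = context.getSystemService(PowerManager::class.java)
    val wakeLock = powerManager.newWakeLock(PowerManager.PARTIAL_WAKE_LOCK, "myapp:upload")

    wakeLock.acquire(10 * 60 * 1000L) // auto-releases after 10 minutes as a safeguard
    try {
        uploadPendingData() // hypothetical work that genuinely needs the CPU awake
    } finally {
        if (wakeLock.isHeld) wakeLock.release()
    }
}
```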
Consult our Android vitals documentation and continue your journey to better leverage Android vitals.
Level 2: Follow the App Performance Score action items
Next, move on to using the App Performance Score to find the high-leverage action items to uplevel your app performance.
The Android App Performance Score is a standardized framework to measure your app's technical performance. It gives you a score between 0 and 100, where a lower number indicates more room for improvement.
To get easy wins, start with the Static Performance Score; its action items are often configuration changes or tooling updates that provide significant performance boosts.
Step 1: Perform the Static Assessment
The static assessment evaluates your project's configuration and tooling adoption. These are often the quickest ways to improve performance.
Navigate to the Static Score section of the scoreboard page and do the following:
- Assess your Android Gradle Plugin (AGP) version.
- Adopt R8 minification incrementally or, ideally, use R8 in full mode to minify and optimize the app code.
- Adopt Baseline Profiles, which improve code execution speed from the first launch, providing performance enhancements for every new app install and every app update (a profile-generator sketch follows this list).
- Adopt Startup Profiles to improve DEX layout. The build system uses Startup Profiles to further optimize the classes and methods they contain by improving the layout of code in your APK's DEX files.
- Upgrade to the newest version of Jetpack Compose.
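For the Baseline and Startup Profile items above, profiles are typically generated by a separate benchmark module. Here's a hedged sketch of a generator using BaselineProfileRule; the package name and user journey are placeholders, the includeInStartupProfile flag reflects the documented API but should be verified against your benchmark library version:

```kotlin
import androidx.benchmark.macro.junit4.BaselineProfileRule
import androidx.test.ext.junit.runners.AndroidJUnit4
import org.junit.Rule
import org.junit.Test
import org.junit.runner.RunWith

// A minimal sketch of a Baseline Profile generator running in a benchmark module.
@RunWith(AndroidJUnit4::class)
class BaselineProfileGenerator {
    @get:Rule
    val baselineProfileRule = BaselineProfileRule()

    @Test
    fun generate() = baselineProfileRule.collect(
        packageName = "com.example.myapp",   // hypothetical application ID
        includeInStartupProfile = true,      // also emit a Startup Profile for DEX layout
    ) {
        // Exercise the critical user journey you want precompiled
        pressHome()
        startActivityAndWait()
    }
}
```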
Step 2: Perform the Dynamic Assessment
Once you have applied the static easy wins, use the dynamic assessment to validate the improvements on a real device. You can first do this manually with a physical device and a stopwatch.
Navigate to the Dynamic Score section of the scoreboard page and do the following:
- Set up your test environment with a physical device. Consider using a lower-end device to exaggerate performance issues, making them easier to spot.
- Measure startup time from the launcher: cold start your app from the launcher icon and measure the time until it is interactive.
- Measure startup time from a notification, with the goal of bringing notification startup time below a couple of seconds.
- Measure rendering performance by scrolling through your core screens and animations.
Once you've completed these steps, you will receive static and dynamic scores between 0 and 100, giving you an understanding of your app's performance and where to focus.
Level 3: Leverage local performance test frameworks
Once you've started to assess dynamic performance, you may find it too tedious to measure performance manually. Consider automating your performance testing using performance test frameworks such as Macrobenchmarks and UiAutomator.
Macrobenchmark 💚 UiAutomator
Think of Macrobenchmark and UiAutomator as two tools that work together: Macrobenchmark is the measurement tool. It's like a stopwatch and a frame-rate counter that runs outside your app. It is responsible for starting your app, recording metrics (like startup time or dropped frames), and stopping the app. UiAutomator is the robot user. The library lets you write code to interact with the device's screen. It can find an icon, tap a button, scroll on a list and more.
How to write a test
When you write a test, you wrap your UiAutomator code inside a Macrobenchmark block.
- Define the test: apply the MacrobenchmarkRule JUnit rule.
- Start measuring: call benchmarkRule.measureRepeated.
- Drive the UI: inside that block, use UiAutomator code to launch your app, find UI elements, and interact with them.
Here's an example code snippet of what it looks like to test a compose list for scrolling jank.
benchmarkRule.measureRepeated(
    // ...
    metrics = listOf(FrameTimingMetric()),
    startupMode = StartupMode.COLD,
    iterations = 10,
) {
    // 1. Launch the app's main activity
    startApp()

    // 2. Find the list using its resource ID and scroll down
    onElement { viewIdResourceName == "$packageName.my_list" }
        .fling(Direction.DOWN)
}
- Review the results: each test run provides you with precisely measured information to give you the best data on your app's performance.
timeToInitialDisplayMs min 1894.4, median 2847.4, max 3355.6
frameOverrunMs P50 -3.2, P90 6.2, P95 10.4, P99 119.5
Common use cases
Macrobenchmark provides several core metrics out of the box. StartupTimingMetric allows you to accurately measure app startup. The FrameTimingMetric enables you to understand an app's rendering performance during the test.
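To show how the pieces fit together, here's a hedged sketch of a complete cold-startup benchmark using StartupTimingMetric; the package name is a hypothetical placeholder:

```kotlin
import androidx.benchmark.macro.StartupMode
import androidx.benchmark.macro.StartupTimingMetric
import androidx.benchmark.macro.junit4.MacrobenchmarkRule
import androidx.test.ext.junit.runners.AndroidJUnit4
import org.junit.Rule
import org.junit.Test
import org.junit.runner.RunWith

// A minimal sketch: measures cold startup from the launcher over several iterations.
@RunWith(AndroidJUnit4::class)
class StartupBenchmark {
    @get:Rule
    val benchmarkRule = MacrobenchmarkRule()

    @Test
    fun coldStartup() = benchmarkRule.measureRepeated(
        packageName = "com.example.myapp",       // hypothetical application ID
        metrics = listOf(StartupTimingMetric()),
        startupMode = StartupMode.COLD,
        iterations = 5,
    ) {
        pressHome()              // start from the launcher
        startActivityAndWait()   // launch the default activity and wait for the first frame
    }
}
```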
We have a detailed and complete guide to using Macrobenchmarks and UiAutomator alongside code samples available for you to continue learning.
Level 4: Use trace analysis tools like Perfetto
Trace analysis tools like Perfetto are used when you need to see beyond your own application code. Unlike standard debuggers or profilers that only see your process, Perfetto captures the entire device state (kernel scheduling, CPU frequency, other processes, and system services), giving you complete context for performance issues.
Check our Performance Debugging YouTube playlist for video instructions on performance debugging using system traces, the Android Studio Profiler, and Perfetto.
How to use Perfetto to debug performance
The general workflow for debugging performance using trace analysis tools is to record, load and analyze the trace.
Step 1: Record a trace
You can record a system trace using several methods:
- Recording a trace manually on the device, directly from the developer options.
- Using the Android Studio CPU Profiler.
- Using the Perfetto UI.
Step 2: Load the trace
Once you have the trace file, you need to load it into the analysis tool.
- Open Chrome and navigate to ui.perfetto.dev.
- Drag and drop your .perfetto-trace (or .pftrace) file directly into the browser window.
- The UI will process the file and display the timeline.
Step 3: Analyze the trace
You can use Perfetto UI or Android Studio Profiler to investigate performance issues. Check out this episode of the MAD Skills series on Performance, where our performance engineer Carmen Jackson discusses the Perfetto traceviewer.
Scenarios for inspecting system traces using Perfetto
Perfetto is an expert tool and can provide information about everything that happened on the Android device while a trace was captured. This is particularly helpful when you cannot identify the root cause of a slowdown using standard logs or basic profilers.
Debugging Jank (Dropped Frames)
If your app stutters while scrolling, Perfetto can show you exactly why a specific frame missed its deadline.
If it's due to the app, you might see your main thread running for a long duration doing heavy parsing; this indicates scenarios where you should move the work into asynchronous processing.
If it's due to the system, you might see your main thread ready to run, but the CPU kernel scheduler gave priority to a different system service, leaving your app waiting (CPU contention). This indicates scenarios where you may need to optimize usage of platform APIs.
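For the app-side case, the fix is usually to move the heavy work off the main thread. Here's a hedged sketch using coroutines; FeedRepository, Feed, and parseLargeFeed() are illustrative placeholders:

```kotlin
import androidx.compose.runtime.getValue
import androidx.compose.runtime.mutableStateOf
import androidx.compose.runtime.setValue
import androidx.lifecycle.ViewModel
import androidx.lifecycle.viewModelScope
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.launch
import kotlinx.coroutines.withContext

// A minimal sketch: heavy parsing runs on a background dispatcher so the main
// thread stays free to produce frames.
class FeedViewModel(private val repository: FeedRepository) : ViewModel() {

    var feed by mutableStateOf<Feed?>(null)
        private set

    fun load() {
        viewModelScope.launch {
            // CPU-heavy parsing happens off the main thread
            val result = withContext(Dispatchers.Default) {
                repository.parseLargeFeed()
            }
            feed = result // back on the main thread: publish to Compose state
        }
    }
}
```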
Analyzing Slow App Startup
Startup is complex, involving system init, process forking, and resource loading. Perfetto visualizes this timeline precisely.
You can see if you are waiting on Binder calls (inter-process communication). If your onCreate waits a long time for a response from the system PackageManager, Perfetto will show that blocked state clearly.
You can also see if your app is doing more work than necessary during the app startup. For example, if you are creating and laying out more views than the app needs to show, you can see these operations in the trace.
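To make your own startup work visible in these traces, you can wrap it in custom trace sections. A hedged sketch using the androidx.tracing library; the section names and the loadConfig()/warmUpCaches() work are illustrative:

```kotlin
import android.app.Application
import androidx.tracing.trace

// A minimal sketch: custom trace sections show up as named slices in Perfetto
// and the Android Studio Profiler, making your startup work easy to spot.
class MyApplication : Application() {
    override fun onCreate() {
        super.onCreate()
        trace("MyApplication.loadConfig") {
            loadConfig()
        }
        trace("MyApplication.warmUpCaches") {
            warmUpCaches()
        }
    }

    private fun loadConfig() { /* hypothetical startup work */ }
    private fun warmUpCaches() { /* hypothetical startup work */ }
}
```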
Investigating Battery Drain & CPU Usage
Because Perfetto sees the whole system, it's perfect for finding invisible power drains.
Under the "Device State" tracks, you can identify which processes are holding wake locks and preventing the device from sleeping. Learn more in our wake locks blog post. Also, use Perfetto to see if your background jobs are running too frequently or waking up the CPU unnecessarily.
Level 5: Build your own performance tracking framework
The final level is for apps that have teams with resourcing to maintain a performance tracking framework.
Building a custom performance tracking framework on Android involves leveraging several system APIs to capture data throughout the application lifecycle, from startup to exit, and during specific high-load scenarios.
By using ApplicationStartInfo, ProfilingManager, and ApplicationExitInfo, you can create a robust telemetry system that reports on how your app started, detailed info on what it did while running, and why it died.
ApplicationStartInfo: Tracking how the app started
Available from Android 15 (API 35), ApplicationStartInfo provides detailed metrics about app startup in the field. The data includes whether it was a cold, warm, or hot start, and the duration of different startup phases.
This helps you establish a baseline startup metric from production data, capturing conditions that might be hard to reproduce locally. You can use these metrics to run A/B tests optimizing the startup flow.
The goal is to accurately record launch metrics without manually instrumenting every initialization phase.
You can query this data lazily some time after application launch.
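As a rough illustration, here's a hedged sketch of querying recent starts lazily. The method and property names follow the Android 15 ApplicationStartInfo documentation but should be treated as assumptions and verified against your compileSdk; the log tag is illustrative:

```kotlin
import android.app.ActivityManager
import android.app.ApplicationStartInfo
import android.content.Context
import android.os.Build
import android.util.Log

// A minimal sketch, assuming Android 15 (API 35) and the ActivityManager query API.
fun logRecentStarts(context: Context) {
    if (Build.VERSION.SDK_INT < 35) return
    val activityManager = context.getSystemService(ActivityManager::class.java)
    val starts: List<ApplicationStartInfo> =
        activityManager.getHistoricalProcessStartReasons(5)
    starts.firstOrNull()?.let { info ->
        Log.d("StartupTelemetry", "reason=${info.reason} startType=${info.startType}")
    }
}
```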
ProfilingManager: Capturing why it was slow
ProfilingManager (API 35) allows your app to programmatically trigger system traces on user devices. This is powerful for catching transient performance issues in the wild that you can't reproduce locally.
The goal is to automatically record a trace when a specific highly critical user journey is detected as running slowly or experiencing performance issues.
You can register a listener that triggers when specific conditions are met or trigger it manually when you detect a performance issue such as jank, excessive memory, or battery drain.
Check our documentation on how to capture a profile, retrieve and analyze profiling data and use debug commands.
ApplicationExitInfo: Tracking why the app died
ApplicationExitInfo (API 30) tells you why your previous process died. This is crucial for finding native crashes, ANRs, or system kills due to excessive memory usage (OOM). You'll also be able to get a detailed tombstone trace by using the API getTraceInputStream.
The goal of the API is to understand stability issues that don't trigger standard Java crash reporters (like Low Memory Kills).
You should trigger this API on the next app launch.
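Here's a hedged sketch of reading exit reasons on the next launch, assuming API 30 or higher; the log tag is illustrative and the trace handling is left as a stub:

```kotlin
import android.app.ActivityManager
import android.content.Context
import android.os.Build
import android.util.Log

// A minimal sketch: reads why previous instances of this process died.
fun logLastExitReasons(context: Context) {
    if (Build.VERSION.SDK_INT < 30) return
    val activityManager = context.getSystemService(ActivityManager::class.java)
    // null package + pid 0 means "my own package, all recent exits"
    val exits = activityManager.getHistoricalProcessExitReasons(null, 0, 5)
    exits.forEach { info ->
        Log.d("ExitTelemetry", "reason=${info.reason} description=${info.description}")
        // For ANRs and native crashes, a tombstone trace may be available
        info.traceInputStream?.use { stream -> /* upload or parse the trace */ }
    }
}
```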
Next Steps
Improving Android performance is a step-by-step journey. We're so excited to see how you level up your performance using these tools!
Tune in tomorrow for Ask Android
You have shrunk your app with R8, optimized your runtime with Profile Guided Optimization, and measured your app's performance.
Join us tomorrow for the live Ask Android session. Ask your questions now using #AskAndroid and get them answered by the experts.
20 Nov 2025 5:00pm GMT
19 Nov 2025
Android Developers Blog
Jetpack Navigation 3 is stable
Posted by Don Turner - Developer Relations Engineer
Jetpack Navigation 3 version 1.0 is stable 🎉. Go ahead and use it in your production apps today. JetBrains are already using it in their KotlinConf app.
Navigation 3 is a new navigation library built from the ground up to embrace Jetpack Compose state. It gives you full control over your back stack, helps you retain navigation state, and allows you to easily create adaptive layouts (like list-detail). There's even a cross-platform version from JetBrains.
Why a new library?
The original Jetpack Navigation library (now Nav2) was designed 7 years ago and, while it serves its original goals well and has been improved iteratively, the way apps are now built has fundamentally changed.
Reactive programming with a declarative UI is now the norm. Nav3 embraces this approach. For example, NavDisplay (the Nav3 UI component that displays your screens) simply observes a list of keys (each one representing a screen) backed by Compose state and updates its UI when that list changes.
Figure 1. NavDisplay observes changes to a list backed by Compose state.
Nav2 can also make it difficult to have a single source of truth for your navigation state because it has its own internal state. With Nav3, you supply your own state, which gives you complete control.
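To make that concrete, here's a hedged sketch of a minimal Nav3 setup where the back stack is plain Compose state that you own. HomeScreen, DetailScreen, and the route classes are hypothetical app types, and the imports (from the androidx.navigation3 runtime and ui artifacts) are omitted; check the Nav3 docs for exact artifact names:

```kotlin
// A minimal sketch, not a complete app. Imports from androidx.navigation3.runtime,
// androidx.navigation3.ui, androidx.compose.runtime, and kotlinx.serialization omitted.
@Serializable data object Home : NavKey
@Serializable data class Detail(val id: String) : NavKey

@Composable
fun AppNavigation() {
    // The back stack is state you own: a simple list of keys.
    val backStack = rememberNavBackStack(Home)

    NavDisplay(
        backStack = backStack,
        onBack = { backStack.removeLastOrNull() },
        entryProvider = entryProvider {
            entry<Home> {
                HomeScreen(onItemClick = { id -> backStack.add(Detail(id)) })
            }
            entry<Detail> { key ->
                DetailScreen(id = key.id)
            }
        }
    )
}
```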
Lastly, you asked for more flexibility and customizability. Rather than having a single, monolithic API, Nav3 provides smaller, decoupled APIs (or "building blocks") that can be combined to create complex functionality. Nav3 itself uses these building blocks to provide sensible defaults for well-defined navigation use cases.
This approach allows you to:
- Customize screen animations at both a global and individual level
- Display multiple panes at the same time, and create flexible layouts using the Scenes API
- Easily replace Nav3 components with your own implementations if you want custom behavior
Read more about its design and features in the launch blog.
Migrating from Navigation 2
If you're already using Nav2, specifically Navigation Compose, you should consider migrating to Nav3. To assist you with this, there is a migration guide. The key steps are:
- Add the Navigation 3 dependencies.
- Update your navigation routes to implement NavKey. Your routes don't have to implement this interface to use Nav3, but if they do, you can take advantage of Nav3's rememberNavBackStack function to create a persistent back stack.
- Create classes to hold and modify your navigation state; this is where your back stacks are held (a minimal sketch follows this list).
- Replace NavController with these classes.
- Move your destinations from NavHost's NavGraph into an entryProvider.
- Replace NavHost with NavDisplay.
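For the "create classes to hold your navigation state" step, here's a hedged sketch of a state holder you might own. The class and method names are illustrative and not taken from the migration guide:

```kotlin
// A minimal sketch: a class you own that holds and mutates the back stack.
// Imports from androidx.compose.runtime and the androidx.navigation3 runtime omitted.
class Navigator(startKey: NavKey) {
    // Compose-observable list of keys; NavDisplay recomposes when it changes.
    val backStack = mutableStateListOf<NavKey>(startKey)

    fun navigateTo(key: NavKey) {
        backStack.add(key)
    }

    fun goBack() {
        backStack.removeLastOrNull()
    }
}
```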
Experimenting with AI agent migration
You may want to experiment with using an AI agent to read the migration guide and perform the steps on your project. To try this with Gemini in Android Studio's Agent Mode:
- Save this markdown version of the guide into your project.
- Paste this prompt to the agent (but don't hit enter): "Migrate this project to Navigation 3 using ".
- Type @migration-guide.md; this will supply the guide as context to the agent.
As always, make sure you carefully review the changes made by the AI agent - it can make mistakes!
We'd love to hear how you or your agent performed, please send your feedback here.
Tasty navigation recipes for common scenarios
For common but nuanced use cases, we have a recipes repository. This shows how to combine the Nav3 APIs in a particular way, allowing you to choose or modify the recipe to your particular needs. If a recipe turns out to be popular, we'll consider "graduating" the non-nuanced parts of it into the core Nav3 library or add-on libraries.
Figure 2. Useful code recipes can graduate into a library.
There are currently 19 recipes, including for:
- Passing navigation arguments to ViewModels (including using Koin)
- Returning results from screens by events and by shared state
We're currently working on a deeplinks recipe, plus a Koin integration, and have plenty of others planned. An engineer from JetBrains has also published a Compose Multiplatform version of the recipes.
If you have a common use case that you'd like to see a recipe for, please file a recipe request.
Summary
To get started with Nav3, check out the docs and the recipes. Plus, keep an eye out for a whole week of technical content including:
- A deep dive video on the API covering modularization, animations, and adaptive layouts.
- A live Ask Me Anything (AMA) with the engineers who built Nav3.
Nav3 Spotlight Week starts Dec 1st 2025.
As always, if you find any issues, please file them here.
19 Nov 2025 8:02pm GMT

