19 Nov 2025
TalkAndroid
How to Get the Most Out of Gemini, ChatGPT & Copilot in 2025
Using AI chatbots like Gemini, ChatGPT, or Copilot can feel like having a super-smart assistant on standby, but only…
19 Nov 2025 7:30am GMT
18 Nov 2025
Android Developers Blog
How Uber is reducing manual logins by 4 million per year with the Restore Credentials API
Posted by Niharika Arora - Senior Developer Relations Engineer at Google, Thomás Oliveira Horta - Android Engineer at Uber
Uber is the world's largest ridesharing company, getting millions of people from here to there while also supporting food delivery, healthcare transportation, and freight logistics. Simplicity of access is crucial to its success; when users switch to a new device, they expect a seamless transition without needing to log back into the Uber app or go through SMS-based one-time password authentication. This frequent device turnover presents a challenge, as well as an opportunity for strong user retention.
To maintain user continuity, Uber's engineers turned to the Restore Credentials feature, an essential tool for a time when 40% of people in the United States replace their smartphone every year. Following an assessment of user demand and code prototyping, they introduced Restore Credentials support in the Uber rider app. To validate that restoring credentials helps remove friction for re-logins, the Uber team ran a successful A/B experiment for a five-week period. The integration led to a reduction in manual logins that, when projected across Uber's massive user base, is estimated to eliminate 4 million manual logins annually.
Eliminating login friction with Restore Credentials

The Restore Credentials API eliminates the multi-step manual sign in process on new devices.
Uber had previously attempted account restoration on new devices with solutions like regular data backup and Block Store, but both required transferring authentication tokens directly from the source device to the destination device. Because token information is highly sensitive, these solutions were used only in a limited way: to pre-fill login fields on the destination device and reduce some friction during sign-in. Passkeys also provide a secure and fast login method, but because they must be user-initiated, their impact on seamless device transitions is limited.
"Some users don't use the Uber app on a daily basis, but they expect it will just work when they need it," said Thomás Oliveira Horta, an Android engineer at Uber. "Finding out you're logged out just as you open the app to request a ride on your new Android phone can be an unpleasant, off-putting experience."
With Restore Credentials, the engineers were able to bridge this gap. The API generates a unique token on the old device, which is seamlessly and silently moved to the new device when the user restores their app data during the standard onboarding process. This process leverages Android OS's native backup and restore mechanism, ensuring the safe transfer of the restore key along with the app's data. The streamlined approach guarantees a simple and safe account transfer, meeting Uber's security requirements without any additional user input or development overhead.
Note: Restore keys and passkeys use the same underlying server implementation. However, when you save them in your database, you must differentiate between them. This distinction is crucial because user-created passkeys can be managed directly by the user, while restore keys are system-managed and hidden from the user interface.
"With the adoption of Restore Credentials on Uber's rider app, we started seeing consistent usage," Thomás said. "An average of 10,000 unique daily users have signed in with Restore Credentials in the current rollout stage, and they've enjoyed a seamless experience when opening the app for the first time on a new device. We expect that number to double once we expand the rollout to our whole userbase."
Implementation Considerations
"Integration was pretty easy with minor adjustments on the Android side by following the sample code and documentation," Thomás said. "Our app already used Credential Manager for passkeys, and the backend required just a couple of small tweaks. Therefore, we simply needed to update the Credential Manager dependency to its latest version to get access to the new Restore Credentials API. We created a restore key via the same passkey creation flow and when our app is launched on a new device, the app proactively checks for this key by attempting a silent passkey retrieval. If the restore key is found, it is immediately utilized to automatically sign the user in, bypassing any manual login."
Throughout the development process, Uber's engineers navigated a few implementation challenges, from choosing the right entry point to managing the credential lifecycle on the backend.
Choosing the Restore Credentials entry point
The engineers weighed the tradeoffs between a perfectly seamless user experience and implementation simplicity when selecting a Restore Credentials entry point, ultimately prioritizing the option that offered the best balance of the two.
"This can take place during App Launch or in the background during device restoration and setup, using BackupAgent," Thomás said. "The background login entry point is more seamless for the user, but it presented challenges with background operations and required usage of the BackupAgent API, which would have led to increased complexity in a codebase as large as Uber's." They decided to implement the feature during the first app launch, which was significantly faster than the manual login.
Addressing server-side challenges
A few server-side challenges arose during integration with the backend WebAuthn APIs, as their design assumed user verification would always be required, and that all credentials would be listed in a user's account settings; neither of these assumptions worked for the non-user-managed Restore Credential keys.
The Uber team resolved this by making minor changes to the WebAuthn services, creating new credential types to distinguish passkeys from Restore Credentials and process them appropriately.
Managing the Restore Credentials lifecycle
Uber's engineers faced several challenges in managing the credential keys on the backend, with specialized support from backend engineer Ryan O'Laughlin:
- Preventing orphaned keys: A significant challenge was defining a strategy for deleting registered public keys to prevent them from becoming "orphaned." For example, uninstalling the app deletes the local credential, but because this action doesn't signal the backend, it leaves an unused key on the server.
- Balancing key lifespan: Keys needed a "time to live" long enough to handle edge cases. For example, if a user goes through a backup and restore, then manually logs out from the old device, the key is deleted from that old device. However, the key must remain valid on the server so the new device can still use it.
- Supporting multiple devices: Since a user might have multiple devices (and could initiate a backup and restore from any of them), the backend needed to support multiple Restore Credentials per user (one for each device).
Uber's engineers addressed these challenges by establishing rules for server-side key deletion based on new credential registration and credential usage.
The feature went from design to delivery in a rapid two-month development and testing process. The subsequent five-week A/B experiment, run to validate the feature with users, went smoothly and yielded undeniable results.
Preventing user drop-off with Restore Credentials
By eliminating manual logins on new devices, Uber retained users who might have otherwise abandoned the sign-in flow on a new device. This boost in customer ease was reflected in a wide array of improvements, and though they may seem slight at a glance, the impact is massive at the scale of Uber's user base:
- 3.4% decrease in manual logins (SMS OTP, passwords, social login).
- 1.2% reduction in expenses for logins requiring SMS OTP.
- 0.575% increase in Uber's access rate (the percentage of devices that successfully reached the app home screen).
- 0.614% rise in devices with completed trips.
Today, Restore Credentials is well on its way to becoming a standard part of Uber's rider app, with over 95% of users in the trial group registered.
[UI flow]
During new device setup, users can restore app data and credentials from a backup. After the user selects Uber for restoration and the background restore finishes, the app automatically signs the user in on the new device's first launch.
The invisible yet massive impact of Restore Credentials
In the coming months, Uber plans to expand the integration of Restore Credentials. Projecting from the trial's results, they estimate the change will eliminate 4 million manual logins annually. By simplifying app access and removing a key pain point, they are actively building a more satisfied and loyal customer base, one ride at a time.
"Integrating Google's RestoreCredentials allowed us to deliver the seamless 'it just works' experience our users expect on a new device," said Matt Mueller, Lead Project Manager for Core Identity at Uber. "This directly translated to a measurable increase in revenue, proving that reducing login friction is key to user engagement and retention."
Ready to enhance your app's login experience?
Learn how to facilitate a seamless login experience when switching devices with Restore Credentials and read more in the blog post. In the latest Android Studio Otter canary, you can validate your integration with new features that help mock the backup and restore mechanisms.
If you are new to Credential Manager, you can refer to our official documentation, codelab and samples for help with integration.
18 Nov 2025 10:00pm GMT
TalkAndroid
Boba Story Lid Recipes – 2025
Look no further for all the latest Boba Story Lid Recipes. They are all right here!
18 Nov 2025 5:11pm GMT
Dice Dreams Free Rolls – Updated Daily
Get the latest Dice Dreams free rolls links, updated daily! Complete with a guide on how to redeem the links.
18 Nov 2025 5:10pm GMT
Android Developers Blog
Configure and troubleshoot R8 Keep Rules

Posted by Ajesh R Pai - Developer Relations Engineer & Ben Weiss - Senior Developer Relations Engineer

In modern Android development, shipping a small, fast, and secure application is a fundamental user expectation. The Android build system's primary tool for achieving this is the R8 optimizer, the compiler that handles shrinking (dead code and resource removal), minification (code renaming), and app optimization.
Enabling R8 is a critical step in preparing an app for release, but it requires developers to provide guidance in the form of "Keep Rules."
After reading this article, check out the Performance Spotlight Week video on enabling, debugging and troubleshooting the R8 optimizer on YouTube.
Why Keep Rules are needed
The need to write Keep Rules stems from a core conflict: R8 is a static analysis tool, but Android apps often rely on dynamic execution patterns like reflection or calls in and out of native code using the JNI (Java Native Interface).
R8 builds a graph of used code by analyzing direct calls. When code is accessed dynamically, R8's static analysis cannot see that access, so it identifies the code as unused and removes it, leading to runtime crashes.
A keep rule is an explicit instruction to the R8 compiler, stating: "This specific class, method, or field is an entry point that will be accessed dynamically at runtime. You must keep it, even if you cannot find a direct reference to it."
See the official guide for more details on Keep Rules.
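As a hypothetical illustration, consider code that instantiates a class purely through reflection; R8 sees no direct reference to it:

// R8's static analysis cannot follow this reference: the class name is a string.
// com.example.plugins.ExamplePlugin is a made-up class for illustration.
val clazz = Class.forName("com.example.plugins.ExamplePlugin")
val plugin = clazz.getDeclaredConstructor().newInstance()

A matching rule such as -keep class com.example.plugins.ExamplePlugin { <init>(); } would preserve the class and its no-argument constructor.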
Where to write Keep Rules
Custom Keep Rules for an application are written in a text file. By convention, this file is named proguard-rules.pro and is located in the root of the app or library module. The file is then referenced in the release build type of your module's build.gradle.kts file.
release {
    isShrinkResources = true
    isMinifyEnabled = true
    proguardFiles(
        getDefaultProguardFile("proguard-android-optimize.txt"),
        "proguard-rules.pro",
    )
}
Use the correct default file
The getDefaultProguardFile method imports a default set of rules provided by the Android SDK. If you use the wrong file, your app might not be optimized. Make sure to use proguard-android-optimize.txt, which provides the default Keep Rules for standard Android components and enables R8's code optimizations. The outdated proguard-android.txt provides only the Keep Rules and does not enable R8's optimizations.
Because this is a serious performance problem, we are starting to warn developers about using the wrong file in Android Studio Narwhal 3 Feature Drop. And starting with Android Gradle Plugin 9.0, we no longer support the outdated proguard-android.txt file, so make sure you upgrade to the optimized version.
How to write Keep Rules
A keep rule consists of three main parts:
- An option, like -keep or -keepclassmembers
- Optional modifiers, like allowshrinking
- A class specification that defines the code to match
For the complete syntax and examples, refer to the guidance to add Keep Rules.
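Putting the three parts together, a minimal hypothetical rule looks like this (com.example.api.ApiModel is a placeholder class):

# option: -keep | modifier: allowobfuscation | class specification: the rest
-keep,allowobfuscation class com.example.api.ApiModel { <fields>; }

This keeps the class and its fields from being removed, while the modifier still allows R8 to rename them.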
Keep Rule anti-patterns
It's important to know about best practices, but also about anti-patterns. These anti-patterns often arise from misunderstandings or troubleshooting shortcuts and can be catastrophic for a production build's performance.
Global options
These flags are global toggles that should never be used in a release build. They are only for temporary debugging to isolate a problem.
Using -dontoptimize effectively disables R8's performance optimizations, leading to a slower app.
Using -dontobfuscate disables all renaming, and using -dontshrink turns off dead code removal. Both of these global rules increase app size.
Avoid these global flags in production builds wherever possible for a more performant app user experience.
Overly broad keep rules
The easiest way to nullify R8's benefits is to write overly-broad Keep Rules. Keep rules like the one below instruct the R8 optimizer to not shrink, not obfuscate, and not optimize any class in this package or any of its sub-packages. This completely removes R8's benefits for that entire package. Try to write narrow and specific Keep Rules instead.
-keep class com.example.package.** { *; } // WIDE KEEP RULES CAUSE PROBLEMS
The inversion operator (!)
The inversion operator (!) seems like a powerful way to exclude a package from a rule. But it's not that simple. Take this example:
-keep class !com.example.my_package.** { *; } // USE WITH CAUTION
You might think that this rule means "do not keep classes in com.example.my_package." But it actually means "keep every class, method, and property in the entire application that is not in com.example.my_package." If that comes as a surprise, check for any negations in your R8 configuration.
Redundant rules for Android components
Another common mistake is to manually add Keep Rules for your app's Activities, Services, or BroadcastReceivers. This is unnecessary. The default proguard-android-optimize.txt file already includes the relevant rules for these standard Android components to work out of the box.
Also many libraries bring their own Keep Rules. So you should not have to write your own rules for these. In case there is a problem with Keep Rules from a library you're using, it is best to reach out to the library author to see what the problem is.
Keep Rule best practices
Now that you know what not to do, let's talk about best practices.
Write narrow Keep Rules
Good Keep Rules should be as narrow and specific as possible. They should preserve only what is necessary, allowing R8 to optimize everything else.
| Rule | Quality |
|---|---|
| -keep class com.example.** { *; } | Low: Keeps an entire package and its subpackages |
| -keep class com.example.MyClass { *; } | Low: Keeps an entire class, which is likely still too wide |
| -keepclassmembers class com.example.MyClass { private java.lang.String secretMessage; public void onNativeEvent(java.lang.String); } | High: Only relevant methods and properties from a specific class are kept |
Use common ancestors
Instead of writing separate Keep Rules for multiple different data models, write one rule that targets a common base class or interface. The rule below tells R8 to keep the fields of any class that implements this interface, which scales well.
# Keep all fields of any class that implements SerializableModel
-keepclassmembers class * implements com.example.models.SerializableModel {
    <fields>;
}
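For context, a hypothetical Kotlin model that this rule would match might look like:

package com.example.models

// Marker interface targeted by the keep rule above.
interface SerializableModel

// Its fields are kept for serialization; R8 may still rename or remove
// the class itself unless another rule prevents it.
data class UserProfile(
    val id: String,
    val displayName: String
) : SerializableModel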
Use Annotations to target multiple classes
Create a custom annotation (e.g., @Serialize) and use it to "tag" classes that need their fields preserved. This is another clean, declarative, and highly scalable pattern. You can create Keep Rules for already existing annotations from frameworks you're using as well.
# Keep all fields of any class annotated with @Serialize
-keepclassmembers class * {
    @com.example.annotations.Serialize <fields>;
}
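The custom annotation itself is ordinary Kotlin; a minimal sketch, assuming the @Serialize name used above, might be:

package com.example.annotations

// Must survive compilation so R8 can match it in the rule above;
// BINARY (or RUNTIME) retention keeps it in the class files.
@Retention(AnnotationRetention.BINARY)
@Target(AnnotationTarget.CLASS, AnnotationTarget.FIELD)
annotation class Serialize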
Choose the right Keep Option
The Keep Option is the most critical part of the rule. Choosing the wrong one can needlessly disable optimization.
| Keep Option | What It Does |
|---|---|
| -keep | Prevents the class and members mentioned in the declaration from being removed or renamed. |
| -keepclassmembers | Prevents the specified members from being removed or renamed; the class itself may still be removed or renamed if nothing else keeps it. |
| -keepclasseswithmembers | A combination: keeps the class and its members, but only if all the specified members are present. |
You can find more about the keep option in our documentation for Keep Options.
Allow optimization with Modifiers
Modifiers like allowshrinking and allowobfuscation relax a broad -keep rule, giving optimization power back to R8. For example, if a legacy library forces you to use -keep on an entire class, you might be able to reclaim some optimization by allowing shrinking and obfuscation:
# Keep this class, but allow R8 to remove it if it's unused and allow R8 to rename it.
-keep,allowshrinking,allowobfuscation class com.example.LegacyClass
Add global options for additional optimization
Beyond Keep Rules, you can add global flags to your R8 configuration file to encourage even more optimization.
-repackageclasses is a powerful option that instructs R8 to move all obfuscated classes into a single package. This saves significant space in the DEX file by removing redundant package name strings.
-allowaccessmodification allows R8 to widen access (e.g., private to public) to enable more aggressive inlining. This is now enabled by default when using proguard-android-optimize.txt.
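As a sketch, in your app-level proguard-rules.pro these flags look like this:

# App-level rules only; never ship these in a library's consumer rules.
# Move obfuscated classes into the unnamed root package.
-repackageclasses ''
# Let R8 widen access modifiers for more aggressive inlining.
-allowaccessmodification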
Warning: Library authors must never add these global optimization flags to their consumer rules, as they would be forcibly applied to the entire app.
And to make it even more clear, in version 9.0 of the Android Gradle Plugin we're going to start ignoring global optimization flags from libraries altogether.
Best practices for libraries
Every Android app relies on libraries one way or another. So let's talk about best practices for libraries.
For library developers
If your library uses reflection or JNI, you have the responsibility to provide the necessary Keep Rules to its consumers. These rules are placed in a consumer-rules.pro file, which is then automatically bundled inside the library's AAR file.
android {
    defaultConfig {
        consumerProguardFiles("consumer-rules.pro")
    }
    ...
}
For library consumers
Filter out problematic Keep Rules
If you must use a library that includes problematic Keep Rules, you can filter them out in your build.gradle.kts file starting with AGP 9.0. This tells R8 to ignore the rules coming from a specific dependency.
release {
    optimization.keepRules {
        // Ignore all consumer rules from this specific library
        it.ignoreFrom("com.somelibrary:somelibrary")
    }
}
The best Keep Rule is no Keep Rule
The ultimate R8 configuration strategy is to remove the need to write Keep Rules altogether. For many apps this can be achieved by choosing modern libraries that favor code generation over reflection. With code generation, the optimizer can more easily determine which code is actually used at runtime and which code can be removed. Avoiding dynamic reflection also means there are no "hidden" entry points, and therefore no Keep Rules are needed. When choosing a new library, prefer a solution that uses code generation over reflection.
For more information, check out the guidance on how to choose libraries wisely.
Debugging and troubleshooting your R8 configuration
When R8 removes code it should have kept, or your APK is larger than expected, use these tools to diagnose the problem.
Find duplicate and global Keep Rules
Because R8 merges rules from dozens of sources, it can be hard to know what the "final" ruleset is. Adding this flag to your proguard-rules.pro file generates a complete report:
# Outputs the final, merged set of rules to the specified file
-printconfiguration build/outputs/logs/configuration.txt
You can search this file to find redundant rules or trace a problematic rule (like -dontoptimize) back to the specific library that included it.
Ask R8: Why are you keeping this?
If a class you expected to be removed is still in your app, R8 can tell you why. Just add this rule:
# Asks R8 to explain why it's keeping a specific class
-whyareyoukeeping class com.example.MyUnusedClass
During the build, R8 will print the exact chain of references that caused it to keep that class, allowing you to trace the reference and adjust your rules.
For a full guide, check out the troubleshoot R8 section.
Next steps
R8 is a powerful tool for enhancing Android app performance, but its effectiveness depends on a correct understanding of its operation as a static analysis engine.
By writing specific, member-level rules, leveraging ancestors and annotations, and carefully choosing the right keep options, you can preserve exactly what is necessary. The most advanced practice is to eliminate the need for rules entirely by choosing modern, codegen-based libraries over their reflection-based predecessors.
As you're following along Performance Spotlight Week, make sure to check out today's Spotlight Week video on YouTube and continue with our R8 challenge. Use #optimizationEnabled for any questions on enabling or troubleshooting R8. We're here to help.
It's time to see the benefits for yourself.
We challenge you to enable R8 full mode for your app today.
- Follow our developer guides to get started: Enable app optimization.
- Check if you still use proguard-android.txt and replace it with proguard-android-optimize.txt.
- Then, measure the impact. Don't just feel the difference, verify it. Measure your performance gains by adapting the code from our Macrobenchmark sample app on GitHub to measure your startup times before and after.
We're confident you'll see a meaningful improvement in your app's performance.
While you're at it, use the social tag #AskAndroid to bring your questions. Throughout the week our experts are monitoring and answering your questions.
Stay tuned for tomorrow, where we'll talk about Profile Guided Optimization with Baseline and Startup Profiles, share how Compose rendering performance improved over the past releases and share performance considerations for background work.
18 Nov 2025 5:00pm GMT
Gemini 3 is now available for AI assistance in Android Studio

Posted by Tor Norbye - Senior Director of Engineering

The Gemini 3 Pro model, released today and engineered for better coding and agentic experiences, is now available for AI assistance in the latest version of Android Studio Otter. Android Studio is the best place for professional Android developers to use Gemini 3 for superior performance in Agent Mode, streamlined development workflows, and advanced problem solving capabilities. With agentic AI assistance to help you with boilerplate and complex development tasks, Android Studio helps you focus on what you do best-creating high quality apps for your users.
To get started with Gemini 3 Pro for Android development, download or update to the latest version of Android Studio Otter. For developers using Gemini in Android Studio at no cost (Default Model), we are rolling out limited access to Gemini 3 with a 1 million token context window. For higher usage rate limits and longer sessions with Agent Mode, use a Gemini API key to leverage Gemini 3 in Android Studio for the highest tier of AI capability.
Adding a Gemini API key in Android Studio
This week we're rolling out Gemini 3 access for organizations, starting with users who have Gemini Code Assist Enterprise licenses. Your IT administrator will need to enable access to preview models through the Google Cloud console, and you'll need to sign up for the waitlist.
Try Gemini 3 Pro in Android Studio, and let us and the Android developer community know what you think. You can follow us across LinkedIn, Blog, YouTube, and X. We can't wait to see what you build!
18 Nov 2025 4:06pm GMT
TalkAndroid
Why teens are ditching Instagram for WhatsApp channels
Instagram may have the filters, Reels, and influencers - but when it comes to the next generation, another…
18 Nov 2025 7:30am GMT
This WhatsApp AI scam is spreading fast: here’s how to stay safe
If you use WhatsApp regularly, you're far from alone. With nearly two billion users worldwide, it's one of…
18 Nov 2025 4:30am GMT
17 Nov 2025
Android Developers Blog
How Reddit used the R8 optimizer for high impact performance improvements

Posted by Ben Weiss - Senior Developer Relations Engineer

In today's world of mobile applications, a seamless user experience is not just a feature; it's a necessity. Slow load times, unresponsive interfaces, and instability can be significant barriers to user engagement and retention. During their work with the Android Developer Relations team, the engineering team at Reddit used the App Performance Score to evaluate their app. After assessing their performance, they identified significant improvement potential and decided to enable the full power of R8, the Android app optimizer. This focused initiative led to remarkable improvements in startup times, reductions in slow or frozen frames and ANRs, and an overall increase in Play Store ratings. This case study breaks down how Reddit achieved these impressive results.
How the R8 Optimizer helped Reddit
- Tree shaking is the most important step to reduce an app's size. Here, unused code from app dependencies and the app itself is removed.
- Method inlining replaces method calls with the actual code, making the app more performant.
- Class merging and other strategies are applied to make the code more compact. At this point it's no longer about human readability of source code, but about making compiled code fast, so abstractions such as interfaces and class hierarchies don't matter here and will be removed.
- Identifier minification changes the names of classes, fields, and methods to shorter, meaningless names. So instead of MyDataModel you might end up with a class called a.
- Resource shrinking removes unused resources such as XML files and drawables to further reduce app size.
Caption: Main stages of R8 Optimization
From hard data to user satisfaction: Identifying success in production
Reddit saw improved performance results immediately after a new version of the app was rolled out to users. By using Android Vitals and Crashlytics, Reddit was able to capture performance metrics on real devices with actual users, allowing them to compare the new release against previous versions.
Caption: How R8 improved Reddit's app performance
The team observed a 40% faster cold startup, a 30% reduction in "Application Not Responding" (ANR) errors, a 25% improvement in frame rendering, and a 14% reduction in app size.
These enhancements are crucial for user satisfaction. A faster startup means less waiting and quicker access to content. Fewer ANRs lead to a more stable and reliable app, reducing user frustration. Smoother frame rendering removes UI jank, making scrolling and animations feel fluid and responsive. This positive technical impact was also clearly visible in user sentiment.
User satisfaction indicators of the optimization's success were directly visible on the Google Play Store. Following the rollout of the R8-optimized version, the team saw a dramatic and positive shift in user sentiment and engagement.
Drew Heavner: "Enabling R8's full potential took less than 2 weeks"
Most impressively, this was accomplished with a focused effort. Drew Heavner, a Staff Software Engineer at Reddit who worked on this initiative, noted that implementing the changes to enable R8's full potential took less than two weeks.
Confirming the gains: A deep dive with macrobenchmarks
After observing the significant real-world improvements, Reddit's engineering team and the Android Developer Relations team at Google conducted detailed benchmarks to scientifically confirm the gains and experiment with further optimizations. For this analysis, Reddit engineering provided two versions of their app: one without optimizations and another that applied R8 and two more foundational performance optimization tools: Baseline Profiles, and Startup Profiles.
Baseline Profiles effectively move the Just in Time (JIT) compilation steps away from user devices and onto developer machines. The generated Ahead Of Time (AOT) compiled code has proven to reduce startup time and rendering issues alike.
When an app is packaged, the d8 dexer takes classes and methods and constructs your app's classes.dex files. When a user opens the app, these dex files are loaded, one after the other until the app can start. By providing a Startup Profile you let d8 know which classes and methods to pack in the first classes.dex files. This structure allows the app to load fewer files, which in turn improves startup speed.
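As a sketch of how both profile types are typically generated with the Jetpack Macrobenchmark library's BaselineProfileRule (the package name and journey are placeholders):

import androidx.benchmark.macro.junit4.BaselineProfileRule
import androidx.test.ext.junit.runners.AndroidJUnit4
import org.junit.Rule
import org.junit.Test
import org.junit.runner.RunWith

@RunWith(AndroidJUnit4::class)
class ProfileGenerator {
    @get:Rule
    val rule = BaselineProfileRule()

    @Test
    fun generateProfiles() = rule.collect(
        packageName = "com.example.app",  // placeholder: your app's package
        includeInStartupProfile = true    // also emit a startup profile
    ) {
        // The journey covered by the profile: launch and first screen.
        pressHome()
        startActivityAndWait()
    }
}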
Jetpack Macrobenchmark was the core tool for this phase, allowing for precise measurement of user interactions in a controlled environment. To simulate a typical user journey, they used the UIAutomator API to create a test that opened the app, scrolled down three times, and then scrolled back up.
In the end, all that was needed to write the benchmark was this:
uiAutomator {
    startApp(REDDIT)
    repeat(3) {
        onView { isScrollable }.fling(Direction.DOWN)
    }
    repeat(3) {
        onView { isScrollable }.fling(Direction.UP)
    }
}
The benchmark data confirmed the field observations and provided deeper insights. The fully optimized app started 55% faster and users could begin to browse 18% sooner. The optimized app also showed a two-thirds reduction in Just in Time (JIT) compilation occurrences and a one-third decrease in JIT compilation time. Frame rendering improved, resulting in 19% more frames being rendered over the benchmarked user journey. Finally, the app's size was reduced by over a third.
Caption: Reddit's overall performance improvements
You can measure the JIT compilation time with a custom Macrobenchmark trace section metric like this:
val jitCompilationMetric = TraceSectionMetric("JIT Compiling %", label = "JIT compilation")
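You could then plug that metric into a benchmark alongside the startup metric; a sketch, with the package name and iteration count as placeholders:

@get:Rule
val benchmarkRule = MacrobenchmarkRule()

@Test
fun coldStartupWithJitTracking() = benchmarkRule.measureRepeated(
    packageName = "com.example.app",  // placeholder: app under test
    metrics = listOf(StartupTimingMetric(), jitCompilationMetric),
    iterations = 5,
    startupMode = StartupMode.COLD
) {
    startActivityAndWait()
}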
Enabling the technology behind the transformation: R8
To enable R8 in full mode, you configure your app/build.gradle.kts file by setting minifyEnabled and shrinkResources to true in the release build type.
android {
    ...
    buildTypes {
        release {
            isMinifyEnabled = true
            isShrinkResources = true
            proguardFiles(
                getDefaultProguardFile("proguard-android-optimize.txt"),
                "keep-rules.pro",
            )
        }
    }
}
This step must be followed by holistic end-to-end testing, as performance optimizations can lead to unwanted behavior, which you want to catch before your users do.
As shown earlier in this article, R8 performs extensive optimizations in order to maximize your performance benefits. R8 makes substantial modifications to the code including renaming, moving, and removing classes, fields and methods. If you observe that these modifications cause errors, you need to specify which parts of the code R8 shouldn't modify by declaring those in keep rules.
Follow Reddit's example in your app
Reddit's success with R8 serves as a powerful case study for any development team looking to make a significant, low-effort impact on their app's performance. The direct correlation between the technical improvements and the subsequent rise in user satisfaction underscores the value of performance optimization.
By following the blueprint laid out in this case study (using tools like the App Performance Score to identify opportunities, enabling R8's full optimization potential, monitoring real-world data, and using benchmarks to confirm and deepen understanding), other developers can achieve similar gains.
To get started with R8 in your own app, refer to the freshly updated official documentation and guidance on enabling, configuring and troubleshooting the R8 optimizer.
17 Nov 2025 6:01pm GMT
TalkAndroid
Verizon to Lay Off 15,000 Employees In A Holiday Shock
Forget the Grinch. Verizon is here to steal your Christmas spirit this year.
17 Nov 2025 5:37pm GMT
Android Developers Blog
Use R8 to shrink, optimize, and fast-track your app

Posted by Ben Weiss - Senior Developer Relations Engineer

Welcome to day one of Android Performance Spotlight Week!
We're kicking things off with the single most impactful, low-effort change you can make to improve your app's performance: enabling the R8 optimizer in full mode.
You probably already know R8 as a tool to shrink your app's size: it does a fantastic job of removing unused code and resources. But its real power, the one it's really g-R8 at, is as an optimizer.
When you enable full mode and allow optimizations, R8 performs deep, whole-program optimizations, rewriting your code to be fundamentally more efficient. This isn't just a minor tweak.
After reading this article, check out the Performance Spotlight Week introduction to the R8 optimizer on YouTube.
How R8 makes your app more performant
Let's shine a spotlight on the largest steps that the R8 optimizer takes to improve app performance.
Tree shaking is the most important step to reduce app size. During this phase the R8 optimizer removes unused code from libraries that your app depends on as well as dead code from your own codebase.
Method inlining replaces a method call with the actual code, which improves runtime performance.
Class merging and other strategies are applied to make the code more compact. All your beautiful abstractions, such as interfaces and class hierarchies, don't matter at this point and are likely to be removed.
Code minification is used to change the names of classes, fields, and methods to shorter, meaningless ones. So instead of MyDataModel you might end up with a class called a. This is what causes the most confusion when reading stack traces from an R8 optimized app. (Note that we have improved this in AGP 9.0!)
Resource shrinking further reduces an app's size by removing unused resources such as xml files and drawables.
By applying these steps the R8 optimizer improves app startup times, enables smoother UI rendering, with fewer slow and frozen frames and improves overall on-device resource usage.
Case Study: Reddit's performance improvements with R8
As one example of the performance improvements that R8 can bring, let's take a look at an example from Reddit. After enabling R8 in full mode, the Reddit for Android app saw significant performance improvements in various areas.
Caption: How R8 improved Reddit's app performance
The team observed a 40% faster cold startup, a 30% reduction in "Application Not Responding" (ANR) errors, a 25% improvement in frame rendering, and a 14% reduction in app size.
These enhancements are crucial for user satisfaction. A faster startup means less waiting and quicker access to content. Fewer ANRs lead to a more stable and reliable app, reducing user frustration. Smoother frame rendering removes UI jank, making scrolling and animations feel fluid and responsive. This positive technical impact was also clearly visible in user sentiment.
You can read more about their improvements on our blog.
Non-technical side effects of using R8
During our work with partners, we have seen that these technical improvements have a direct impact on user satisfaction and can be reflected in user retention, engagement, and session length. User stickiness, measured in daily, weekly, or monthly active users, has also been positively affected by technical performance improvements. And we've seen app ratings on the Play Store rise in correlation with R8 adoption. Sharing this with your product owners, CTOs, and decision makers can help you make the case for prioritizing your app's performance.
So let's call it what it is: Deliberate performance optimization is a virtue.
Guiding you to a more performant app
We heard that our developer guidance for R8 needed to be improved. So we went to work. The developer guidance for the R8 optimizer now is much more actionable and provides comprehensive guidance to enable and debug R8.
The documentation guides you on the high-level strategy for adoption, emphasizing the importance of choosing optimization-friendly libraries and, crucially, adopting R8's features incrementally to ensure stability. This phased approach allows you to safely unlock the benefits of R8 while providing you with guidance on difficult-to-debug issues.
We have significantly expanded our guidance on Keep Rules, which are the primary mechanism for controlling the R8 optimizer. We now provide a section on what Keep Rules are, how to apply them and guide you with best practices for writing and maintaining them. We also provide practical and actionable use cases and examples, helping you understand how to correctly prevent R8 from removing code that is needed at runtime, such as code accessed via reflection or use of the JNI native interface.
The documentation now also covers essential follow-up steps and advanced scenarios. We added a section on testing and troubleshooting, so you can verify the performance gains and debug any potential issues that arise. The advanced configurations section explains how to target specific build variants, customize which resources are kept or removed, and offers special optimization instructions for library authors, ensuring you can provide an optimized and R8-friendly package for other developers to use.
Enable the R8 optimizer's full potential
The R8 optimizer defaults to using "full mode" since version 8.0 of the Android Gradle Plugin. If your project has been developed over many years, it might still include a legacy flag to disable it. Check your gradle.properties file for this line and remove it.
android.enableR8.fullMode=false  # delete this line to enable R8's full potential
Now check whether you have enabled R8 in your app's build.gradle.kts file for the release variant. It's enabled by setting isMinifyEnabled and isShrinkResources to true. You can also pass default and custom configuration files at this step.
release {
    isMinifyEnabled = true
    isShrinkResources = true
    proguardFiles(
        getDefaultProguardFile("proguard-android-optimize.txt"),
        "keep-rules.pro"
    )
}
Case Study: Disney+ performance improvements
Engineers at Disney+ invest in app performance and are optimizing the app's user experience. Sometimes even seemingly small changes can make a huge impact. While inspecting their R8 configuration, the team found that the -dontoptimize flag was being used. It was brought in by a default configuration file, which is still used in many apps today.
After replacing proguard-android.txt with proguard-android-optimize.txt, the Disney+ team saw significant improvements in their app's performance.
After a new version of the app containing this change was rolled out to users, Disney+ saw 30% faster app startup and 25% fewer user-perceived ANRs.
Today many apps still use the proguard-android.txt file which contains the -dontoptimize flag. And that's where our tooling improvements come in.
Tooling support
Starting with Android Studio Narwhal 3 Feature Drop, you will see a lint warning when using proguard-android.txt.
And from AGP 9.0 onwards we are entirely dropping support for the file. This means you will have to migrate to proguard-android-optimize.txt.
We've also invested in new Android Studio features to make debugging R8-optimized code easier than ever. Starting in AGP 9.0 you can now automatically de-obfuscate stack traces within Android Studio's logcat for R8-processed builds, helping you pinpoint the exact line of code causing an issue, even in a fully optimized app. This will be covered in more depth in tomorrow's blog post on this Android Performance Spotlight Week.
Next Steps
Check out the Performance Spotlight Week introduction to the R8 optimizer on YouTube.
📣 Take the Performance Challenge!
It's time to see the benefits for yourself.
We challenge you to enable R8 full mode for your app today.
- Follow our developer guides to get started: Enable app optimization.
- Check if you still use proguard-android.txt and replace it with proguard-android-optimize.txt.
- Then, measure the impact. Don't just feel the difference, verify it. Measure your performance gains by adapting the code from our Macrobenchmark sample app on GitHub to measure your startup times before and after.
We're confident you'll see a meaningful improvement in your app's performance. Use #optimizationEnabled for any questions on enabling or troubleshooting R8. We're here to help.
Bring your questions for the Ask Android session on Friday
Use the social tag #AskAndroid to bring any performance questions. Throughout the week we are monitoring your questions and will answer several in the Ask Android session on performance on Friday, November 21. Stay tuned for tomorrow, where we'll dive even deeper into debugging and troubleshooting. But for now, get started with R8 and get your app on the fast track.
17 Nov 2025 5:00pm GMT
Get your app on the fast track with Android Performance Spotlight Week!
Posted by Ben Weiss - Senior Developer Relations Engineer, Performance Paladin

When working on new features, app performance often takes a back seat. However, while it's not always top of mind for developers, users can see exactly where your app's performance lags behind. When that new feature takes a long time to load or is slow to render, your users can become frustrated. And unhappy users are more likely to abandon the feature you spent so much time on.
App performance is a core part of user experience and app quality, and recent studies and research shows that it's highly correlated with increased user satisfaction, higher retention, and better review scores.
And we're here to help… Welcome to Android Performance Spotlight Week! All week long, we're providing you with low-effort, high-impact tools and guidance to get your app on the fast track to better performance. We help you lay the foundation and then dive deeper into helping your app become a better version of itself.
The R8 optimizer and Profile Guided Optimizations are foundational tools for improving overall app performance. That's why we just released significant improvements to Android Studio's performance tooling, and with Android Gradle Plugin 9.0 we're introducing new APIs that make it easier to do the right thing when configuring the R8 Android app optimizer. Jetpack Compose version 1.10, now in beta, ships with several features that improve rendering performance. In addition to these updates, we're bringing you a refresher on improving app health and performance monitoring, and some of our partners will tell their performance improvement stories as well.
Stay tuned to the blog all week as we'll be updating this post with a digest of all the content released. We're excited to share these updates and help you improve your app's performance.
Here's a closer look at what we'll be covering:
Monday: Deliberate performance optimization with R8
November 17, 2025
We're kicking off with a deep dive into the R8 optimizer. It's not just about shrinking your app's size, it's about gaining a fundamental understanding of how the R8 optimizer can improve performance in your app and why you should use it right away. We just published the largest overhaul of new technical guidance to date. The guides cover how to enable, configure and troubleshoot the R8 optimizer. On Monday you'll also see case studies from top partners showing the real-world gains they achieved.
Read the blog post and developer guide.
Tuesday: Debugging and troubleshooting R8
November 18, 2025
We tackle the "Why does my app crash after enabling R8?" question head-on. We know advanced optimization can sometimes reveal edge cases, so we're focusing on debugging and troubleshooting R8 related issues. We'll show you how to use new features in Android Studio to de-obfuscate stack traces, identify common configuration problems, and implement best practices to get the most out of R8. We want you to feel confident, not just hopeful, when you flip the switch.
Content coming on November 18, 2025
Wednesday: Deeper performance considerations
November 19, 2025
Mid-week, we explore high-impact performance offerings beyond the R8 optimizer. We'll show you how to supercharge your app's startup and interactions using Profile Guided Optimization with Baseline Profiles and Startup Profiles. They are ready and proven to deliver another massive boost. We also have exciting news on Jetpack Compose rendering performance improvements. Plus, we'll share how to optimize your app's health by managing background work effectively.
Content coming on November 19, 2025
Thursday: Measure and improve
November 20, 2025
It's not an improvement if you can't prove it. Thursday is dedicated to performance measurement. We'll share our complete guide, starting from local measurement and debugging with tools like Jetpack Macrobenchmark and the new UiAutomator API to capture jank and startup times, all the way to monitoring your app in the wild. You'll learn about Play Vitals and other new APIs to understand your real user performance and quantify your success.
Content coming on November 20, 2025
Friday: Ask Android Live
November 21, 2025
We cap off the week with an in-depth, live conversation. This is your chance to talk directly with the engineers and Developer Relations team who build and use these tools every day. We'll have a panel of experts from the R8 and other performance teams ready to answer your toughest questions live. Get your questions ready!
Content coming on November 21, 2025
📣 Take the Performance Challenge!
We're not just sharing guidance. We're challenging you to put it into action!
Here's our challenge for you this week: Enable R8 full mode for your app.
- Follow our developer guides to get started: Enable app optimization.
- Then, measure the impact. Don't just feel the difference, verify it. Measure your performance gains by using or adapting the code from our Macrobenchmark sample app on GitHub to measure your startup times before and after.
We're confident you'll see a meaningful improvement in your app's performance.
While you're at it, use the social tags #AskAndroid to bring your questions. Throughout the week our experts are monitoring and answering your questions.
17 Nov 2025 5:00pm GMT
TalkAndroid
Low battery? This new Google Maps option could save your day
We've all been there: you're halfway to an unfamiliar destination, your phone's battery is dipping into the red,…
17 Nov 2025 4:30pm GMT
Don’t learn to code anymore? Nvidia’s CEO drops a controversial take
For years, parents, teachers, and career advisors have repeated the same mantra: learn to code. But according to…
17 Nov 2025 7:30am GMT
16 Nov 2025
TalkAndroid
Be kind to ChatGPT it may just surprise you in return
If you've ever noticed ChatGPT giving sharper answers after you've asked a question politely, you're not imagining it.…
16 Nov 2025 4:30pm GMT
Did you know Waze can really detect speed traps? Here’s how to enable it
With speed checks becoming more frequent and new-generation cameras appearing on highways, many drivers are turning to smart…
16 Nov 2025 7:30am GMT
15 Nov 2025
TalkAndroid
Here’s how to blur your house on Google Maps Street View in seconds
With Street View offering sharper and more detailed images than ever, some people have begun to notice just…
15 Nov 2025 4:30pm GMT
Best Screen Protectors For Nothing Phone 3a
Only the best screen protectors for your Nothing Phone 3a. Cop them at affordable prices now.
15 Nov 2025 10:04am GMT
The Awful Name ChatGPT Nearly Launched With
It's hard to imagine life without ChatGPT now - the name alone feels as familiar as Google or…
15 Nov 2025 7:30am GMT
14 Nov 2025
TalkAndroid
Google Photos adds brilliant new feature to instantly free up space
If your phone's storage is constantly running low, Google has just delivered a clever solution. The latest Google…
14 Nov 2025 4:30pm GMT
Galaxy S25+ vs OnePlus 15: The Hard Numbers, No Hype
Two phones, two philosophies - it's all in the specs
14 Nov 2025 3:55pm GMT
Prime Video abruptly cancels action series meant to succeed Reacher
After a strong debut and high hopes from fans, Prime Video has unexpectedly pulled the plug on its…
14 Nov 2025 7:30am GMT
13 Nov 2025
Android Developers Blog
Introducing CameraX 1.5: Powerful Video Recording and Pro-level Image Capture

Posted by Scott Nien, Software Engineer
For video recording, users can now effortlessly capture stunning slow-motion or high-frame-rate videos. More importantly, the new Feature Group API allows you to confidently enable complex combinations like 10-bit HDR and 60 FPS, ensuring consistent results across supported devices.
On the image capture front, you gain maximum flexibility with support for capturing unprocessed, uncompressed DNG (RAW) files. Plus, you can now leverage Ultra HDR output even when using powerful Camera Extensions.
Underpinning these features is the new SessionConfig API, which streamlines camera setup and reconfiguration. Now, let's dive into the details of these exciting new features.
Powerful Video Recording: High-Speed and Feature Combinations
CameraX 1.5 significantly expands its video capabilities, enabling more creative and robust recording experiences.
Slow Motion & High Frame Rate Video
One of our most anticipated features, slow-motion video, is now available. You can now capture high-speed video (e.g., 120 or 240 fps) and encode it directly into a dramatic slow-motion video. Alternatively, you can record at the same high frame rate to produce exceptionally smooth video.
Implementing this is straightforward if you're familiar with the VideoCapture API.
- Check for high-speed support: Use the new Recorder.getHighSpeedVideoCapabilities() method to query whether the device supports this feature.

val cameraInfo = cameraProvider.getCameraInfo(cameraSelector)
val highSpeedCapabilities = Recorder.getHighSpeedVideoCapabilities(cameraInfo)
if (highSpeedCapabilities == null) {
    // This camera device does not support high-speed video.
    return
}

- Configure and bind the use case: Use the returned capabilities (which contain the supported video quality information) to build a HighSpeedVideoSessionConfig. You must then query the supported frame rate ranges via cameraInfo.getSupportedFrameRateRanges() and set the desired range. Set isSlowMotionEnabled to true to record slow-motion videos; otherwise it will record high-frame-rate videos. The final step is to use the regular Recorder.prepareRecording().start() to begin recording.

val preview = Preview.Builder().build()
val quality = highSpeedCapabilities
    .getSupportedQualities(DynamicRange.SDR).first()
val recorder = Recorder.Builder()
    .setQualitySelector(QualitySelector.from(quality))
    .build()
val videoCapture = VideoCapture.withOutput(recorder)
val frameRateRange = cameraInfo.getSupportedFrameRateRanges(
    HighSpeedVideoSessionConfig(videoCapture, preview)
).first()
val sessionConfig = HighSpeedVideoSessionConfig(
    videoCapture,
    preview,
    frameRateRange = frameRateRange,
    // Set true for slow-motion playback, or false for high-frame-rate video
    isSlowMotionEnabled = true
)
cameraProvider.bindToLifecycle(
    lifecycleOwner, cameraSelector, sessionConfig)

// Start recording slow motion videos.
val recording = recorder.prepareRecording(context, outputOption)
    .start(executor, {})
Compatibility and Limitations
High-speed recording requires specific CameraConstrainedHighSpeedCaptureSession and CamcorderProfile support. Always perform the capability check, and enable high-speed recording only on supported devices to prevent a bad user experience. Currently, this feature is supported on the rear cameras of almost all Pixel devices and on select models from other manufacturers.
Check the blog post for more details.
Combine Features with Confidence: The Feature Group API
CameraX 1.5 introduces the Feature Group API, which eliminates the guesswork of feature compatibility. Based on Android 15's feature combination query API, you can now confidently enable multiple features together, guaranteeing a stable camera session. The Feature Group currently supports: HDR (HLG), 60 fps, Preview Stabilization, and Ultra HDR. For instance, you can enable HDR, 60 fps, and Preview Stabilization simultaneously on Pixel 10 and Galaxy S25 series. Future enhancements are planned to include 4K recording and ultra-wide zoom.
The feature group API enables two essential use cases:
Use Case 1: Prioritizing the Best Quality
If you want to capture using the best possible combination of features, you can provide a prioritized list. CameraX will attempt to enable them in order, selecting the first combination the device fully supports.
val sessionConfig = SessionConfig(
    useCases = listOf(preview, videoCapture),
    preferredFeatureGroup = listOf(
        GroupableFeature.HDR_HLG10,
        GroupableFeature.FPS_60,
        GroupableFeature.PREVIEW_STABILIZATION
    )
).apply {
    // (Optional) Get a callback with the enabled features to update your UI.
    setFeatureSelectionListener { selectedFeatures ->
        updateUiIndicators(selectedFeatures)
    }
}
processCameraProvider.bindToLifecycle(activity, cameraSelector, sessionConfig)
In this example, CameraX tries to enable features in this order:
- HDR + 60 FPS + Preview Stabilization
- HDR + 60 FPS
- HDR + Preview Stabilization
- HDR
- 60 FPS + Preview Stabilization
- 60 FPS
- Preview Stabilization
- None
Use Case 2: Building a User-Facing Settings UI
You can now accurately reflect which feature combinations are supported in your app's settings UI, disabling toggles for unsupported options like the picture below.
To determine whether to gray out a toggle, use the following code to check for feature combination support. Initially, query the support status of every individual feature. Once a feature is enabled, re-query the remaining features together with the enabled ones to see whether their toggles must now be grayed out due to compatibility constraints.
fun disableFeatureIfNotSupported(
    enabledFeatures: Set<GroupableFeature>,
    featureToCheck: GroupableFeature
) {
    val sessionConfig = SessionConfig(
        useCases = useCases,
        requiredFeatureGroup = enabledFeatures + featureToCheck
    )
    val isSupported = cameraInfo.isFeatureGroupSupported(sessionConfig)
    if (!isSupported) {
        // Disable the toggle for featureToCheck.
    }
}
Please refer to the Feature Group blog post for more information.
More Video Enhancements
- Concurrent Camera Improvements: With CameraX 1.5.1, you can now bind Preview + ImageCapture + VideoCapture use cases concurrently for each SingleCameraConfig in non-composition mode. Additionally, in composition mode (same use cases with CompositionSettings), you can now set the CameraEffect that is applied to the final composition result.
- Dynamic Muting: You can now start a recording in a muted state using PendingRecording.withAudioEnabled(boolean initialMuted) and allow the user to unmute later using Recording.mute(boolean muted); see the sketch after this list.
- Improved Insufficient Storage Handling: CameraX now reliably dispatches the VideoRecordEvent.Finalize.ERROR_INSUFFICIENT_STORAGE error, allowing your app to gracefully handle low storage situations and inform the user.
- Low Light Boost: On supported devices (like the Pixel 10 series), you can enable CameraControl.enableLowLightBoostAsync to automatically brighten the preview and video streams in dark environments.
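A short sketch of the dynamic muting APIs named above, assuming an existing Recorder and output options:

// Start the recording with audio muted.
val recording = recorder.prepareRecording(context, outputOptions)
    .withAudioEnabled(/* initialMuted = */ true)
    .start(executor) { event -> /* handle VideoRecordEvents */ }

// Later, when the user taps an unmute button:
recording.mute(false)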
Professional-Grade Image Capture
CameraX 1.5 brings major upgrades to ImageCapture for developers who demand maximum quality and flexibility.
Unleash Creative Control with DNG (RAW) Capture
For complete control over post-processing, CameraX now supports DNG (RAW) capture. This gives you access to the unprocessed, uncompressed image data directly from the camera sensor, enabling professional-grade editing and color grading. The API supports capturing the DNG file alone, or capturing simultaneous JPEG and DNG outputs. See the sample code below for how to capture JPEG and DNG files simultaneously.
val capabilities = ImageCapture.getImageCaptureCapabilities(cameraInfo)
val imageCapture = ImageCapture.Builder().apply {
    if (capabilities.supportedOutputFormats
            .contains(OUTPUT_FORMAT_RAW_JPEG)) {
        // Capture both RAW and JPEG formats.
        setOutputFormat(OUTPUT_FORMAT_RAW_JPEG)
    }
}.build()

// ... bind imageCapture to lifecycle ...

// Provide separate output options for each format.
val outputOptionRaw = /* ... configure for image/x-adobe-dng ... */
val outputOptionJpeg = /* ... configure for image/jpeg ... */

imageCapture.takePicture(
    outputOptionRaw,
    outputOptionJpeg,
    executor,
    object : ImageCapture.OnImageSavedCallback {
        override fun onImageSaved(results: OutputFileResults) {
            // This callback is invoked twice: once for the RAW file
            // and once for the JPEG file.
        }
        override fun onError(exception: ImageCaptureException) {}
    }
)
Ultra HDR for Camera Extensions
Get the best of both worlds: the stunning computational photography of Camera Extensions (like Night Mode) combined with the brilliant color and dynamic range of Ultra HDR. This feature is now supported on many recent premium Android phones, such as the Pixel 9/10 series and Samsung S24/S25 series.
// Support Ultra HDR when an Extension is enabled.
val extensionsEnabledCameraSelector = extensionsManager
    .getExtensionEnabledCameraSelector(
        CameraSelector.DEFAULT_BACK_CAMERA, ExtensionMode.NIGHT)

val imageCapabilities = ImageCapture.getImageCaptureCapabilities(
    cameraProvider.getCameraInfo(extensionsEnabledCameraSelector))

val imageCapture = ImageCapture.Builder().apply {
    if (imageCapabilities.supportedOutputFormats
            .contains(OUTPUT_FORMAT_JPEG_ULTRA_HDR)) {
        setOutputFormat(OUTPUT_FORMAT_JPEG_ULTRA_HDR)
    }
}.build()
Core API and Usability Enhancements
A New Way to Configure: SessionConfig
As seen in the examples above, SessionConfig is a new concept in CameraX 1.5. It centralizes configuration and simplifies the API in two key ways:
- No More Manual unbind() Calls: CameraX APIs are lifecycle-aware, so use cases are implicitly "unbound" when the activity or other LifecycleOwner is destroyed. Previously, though, updating use cases or switching cameras still required calling unbind() or unbindAll() before rebinding. With CameraX 1.5, when you bind a new SessionConfig, CameraX seamlessly updates the session for you, eliminating the need for unbind calls.
- Deterministic Frame Rate Control: The new SessionConfig API introduces a deterministic way to manage the frame rate. Unlike the previous setTargetFrameRate, which was only a hint, this new method guarantees the specified frame rate range will be applied upon successful configuration. To ensure accuracy, you must query supported frame rates using CameraInfo.getSupportedFrameRateRanges(SessionConfig); by passing the full SessionConfig, CameraX can accurately determine the supported ranges based on stream configurations. See the sketch after this list.
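As a rough sketch of these two behaviors together (the frameRateRange parameter name is an assumption; check the 1.5 reference docs for the exact signature):

// Query the frame rate ranges this exact stream configuration can guarantee.
val candidateConfig = SessionConfig(useCases = listOf(preview, videoCapture))
val supportedRanges = cameraInfo.getSupportedFrameRateRanges(candidateConfig)

if (supportedRanges.contains(Range(60, 60))) {
    // Binding a new SessionConfig replaces the previous session;
    // no unbind()/unbindAll() call is needed first.
    val newConfig = SessionConfig(
        useCases = listOf(preview, videoCapture),
        frameRateRange = Range(60, 60) // assumed parameter name
    )
    processCameraProvider.bindToLifecycle(activity, cameraSelector, newConfig)
}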
Camera-Compose is Now Stable
We know how much you enjoy Jetpack Compose, and we're excited to announce that the camera-compose library is now stable at version 1.5.1! This release includes critical bug fixes related to CameraXViewfinder usage with Compose features like movableContentOf and Pager, as well as a fix for a preview stretching issue. We will continue to add more features to camera-compose in future releases.
ImageAnalysis and CameraControl Improvements
- Torch Strength Adjustment: Gain fine-grained control over the device's torch with new APIs: query the maximum supported strength with CameraInfo.getMaxTorchStrengthLevel() and set the desired level with CameraControl.setTorchStrengthLevel(). See the sketch after this list.
- NV21 Support in ImageAnalysis: You can now request the NV21 image format directly from ImageAnalysis by calling ImageAnalysis.Builder.setOutputImageFormat(OUTPUT_IMAGE_FORMAT_NV21), simplifying integration with other libraries and APIs.
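A minimal sketch of the torch APIs, assuming camera is the Camera returned by bindToLifecycle:

// Query the maximum supported torch strength, then drive the torch at half power.
val maxLevel = camera.cameraInfo.maxTorchStrengthLevel
if (maxLevel > 1) {
    camera.cameraControl.setTorchStrengthLevel(maxLevel / 2)
}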
Get Started Today
Update your dependencies to CameraX 1.5 today and explore the exciting new features. We can't wait to see what you build.
To use CameraX 1.5, add the following dependencies to your libs.versions.toml. (We recommend using 1.5.1, which contains many critical bug fixes and concurrent camera improvements.)
[versions]
camerax = "1.5.1"

[libraries]
..
androidx-camera-core = { module = "androidx.camera:camera-core", version.ref = "camerax" }
androidx-camera-compose = { module = "androidx.camera:camera-compose", version.ref = "camerax" }
androidx-camera-view = { module = "androidx.camera:camera-view", version.ref = "camerax" }
androidx-camera-lifecycle = { module = "androidx.camera:camera-lifecycle", version.ref = "camerax" }
androidx-camera-camera2 = { module = "androidx.camera:camera-camera2", version.ref = "camerax" }
androidx-camera-extensions = { module = "androidx.camera:camera-extensions", version.ref = "camerax" }
And then add these to your module build.gradle.kts dependencies:
dependencies {
..
implementation(libs.androidx.camera.core)
implementation(libs.androidx.camera.lifecycle)
implementation(libs.androidx.camera.camera2)
implementation(libs.androidx.camera.view) // for PreviewView
implementation(libs.androidx.camera.compose) // for compose UI
implementation(libs.androidx.camera.extensions) // For Extensions
}
Have questions or want to connect with the CameraX team? Join the CameraX developer discussion group or file a bug report.
13 Nov 2025 5:00pm GMT
#WeArePlay: Meet the game creators who entertain, inspire and spark imagination
Posted by Robbie McLachlan, Developer Marketing
In our latest #WeArePlay stories, we meet the game creators who entertain, inspire and spark imagination in players around the world on Google Play. From delivering action-packed 3D kart racing to creating a calming, lofi world for plant lovers - here are a few of our favourites:
Ralf and Matt, co-founders of Vector Unit
San Rafael (CA), U.S.
With over 557 million downloads, Ralf and Matt's game, Beach Buggy Racing, brings the joy of classic, action-packed kart racing to gamers worldwide.
After meeting at a California game company back in the late '90s, Matt and Ralf went on to work at major studios. Years later, they reunited to form Vector Unit, a new company where they could finally have full creative freedom. They channeled their passion for classic kart-racers into Beach Buggy Racing, a vibrant 3D title that brought a console-quality feel to phones. The fan reception was immense, with players celebrating by baking cakes and dressing up for in-game events. Today, the team keeps Beach Buggy Racing 2 updated with global collaborations and is already working on a new prototype, all to fulfill their mission: sparking joy.
Camilla, founder of Clover-Fi Games
Batangas, Philippines
Camilla's game, Window Garden, lets players slow down by decorating and caring for digital plants.
While living with her mother during the pandemic, tech graduate Camilla made the leap from software engineer to self-taught game developer. Her mom's indoor plants sparked an idea: Window Garden. She created the lofi idle game to encourage players to slow down. In the game, players water flowers and fruits, and decorate cozy spaces in their own style. With over 1 million downloads to date, this simple loop has become a calming daily ritual since its launch. The game's success earned it a "Best of 2024" award from Google Play, and Camilla now hopes to expand her studio and collaborate with other creatives.
Rodrigo, founder of Kolb Apps
Curitiba, Brazil
Rodrigo's game, Real Drum, puts a complete, realistic-sounding virtual drum set in your pocket, making it easy for anyone to play.
Rodrigo started coding at just 12 years old, creating software for his family's businesses. This technical skill later combined with his hobby as an amateur musician. While pursuing a career in programming, he noticed a clear gap: there were no high-quality percussion apps. He united his two passions, technology and rhythm, to create Real Drum. The result is a realistic, easy-to-use virtual set that has amassed over 437 million downloads, letting people around the world play drums and cymbals without the noise. His game has made learning music accessible to many and inspired new artists. Now, Rodrigo's team plans to launch new apps for children to continue nurturing musical creativity.
Discover other inspiring app and game founders featured in #WeArePlay.
13 Nov 2025 5:00pm GMT
12 Nov 2025
Android Developers Blog
Android developer verification: Early access starts now as we continue to build with your feedback

Posted by Matthew Forsythe Director - Product Management, Android App Safety

We recently announced new developer verification requirements, which serve as an additional layer of defense in our ongoing effort to keep Android users safe. We know that security works best when it accounts for the diverse ways people use our tools. This is why we announced this change early: to gather input and ensure our solutions are balanced. We appreciate the community's engagement and have heard the early feedback - specifically from students and hobbyists who need an accessible path to learn, and from power users who are more comfortable with security risks. We are making changes to address the needs of both groups.
To understand how these updates fit into our broader mission, it is important to first look at the specific threats we are tackling.
Why verification is important
Keeping users safe on Android is our top priority. Combating scams and digital fraud is not new for us - it has been a central focus of our work for years. From Scam Detection in Google Messages to Google Play Protect and real-time alerts for scam calls, we have consistently acted to keep our ecosystem safe.
However, online scams and malware campaigns are becoming more aggressive. At the global scale of Android, this translates to real harm for people around the world - especially in rapidly digitizing regions where many are coming online for the first time. Technical safeguards are critical, but they cannot solve for every scenario where a user is manipulated. Scammers use high-pressure social engineering tactics to trick users into bypassing the very warnings designed to protect them.
For example, a common attack we track in Southeast Asia illustrates this threat clearly. A scammer calls a victim claiming their bank account is compromised and uses fear and urgency to direct them to sideload a "verification app" to secure their funds, often coaching them to ignore standard security warnings. Once installed, this app - actually malware - intercepts the victim's notifications. When the user logs into their real banking app, the malware captures their two-factor authentication codes, giving the scammer everything they need to drain the account.
While we have advanced safeguards and protections to detect and take down bad apps, without verification, bad actors can spin up new harmful apps instantly. It becomes an endless game of whack-a-mole. Verification changes the math by forcing them to use a real identity to distribute malware, making attacks significantly harder and more costly to scale. We have already seen how effective this is on Google Play, and we are now applying those lessons to the broader Android ecosystem to ensure there is a real, accountable identity behind the software you install.
Supporting students and hobbyists
We heard from developers who were concerned about the barrier to entry when building apps intended only for a small group, like family or friends. We are using your input to shape a dedicated account type for students and hobbyists. This will allow you to distribute your creations to a limited number of devices without going through the full verification requirements.
Empowering experienced users
While security is crucial, we've also heard from developers and power users who have a higher risk tolerance and want the ability to download unverified apps.
Based on this feedback and our ongoing conversations with the community, we are building a new advanced flow that allows experienced users to accept the risks of installing software that isn't verified. We are designing this flow specifically to resist coercion, ensuring that users aren't tricked into bypassing these safety checks while under pressure from a scammer. It will also include clear warnings to ensure users fully understand the risks involved, but ultimately, it puts the choice in their hands. We are gathering early feedback on the design of this feature now and will share more details in the coming months.
Getting started with early access
We're excited to begin inviting developers who distribute exclusively outside of Play to the developer verification early access in the Android Developer Console, and we will share invites to the Play Console experience for Play developers soon. We are looking forward to your questions and feedback on streamlining the experience for all developers.
Watch our video below for a walkthrough of the new Android Developer Console experience and see our guides for more details and FAQs.
We are committed to working with you to keep the ecosystem safe while getting this right.
12 Nov 2025 11:47pm GMT
10 Nov 2025
Android Developers Blog
Raising the bar on battery performance: excessive partial wake locks metric is now out of beta
Posted by Karan Jhavar - Product Manager, Android Frameworks, Dan Brown - Product Manager, Google Play, and Eric Brenner - Software Engineer, Google Play

A great user experience is built on a foundation of strong technical performance. We are committed to helping you create stable, responsive, and efficient apps that users love. Excessive battery drain is top of mind for your users, and together, we are taking significant steps to help you build more power-efficient apps.
Earlier this year, we introduced a new beta metric in Android vitals, excessive partial wake locks, to help you identify and address sources of battery drain. This initial beta metric was co-developed in close collaboration with Samsung, combining their deep, real-world insights into user experience with battery consumption with Android's platform data.
We want to thank you for providing invaluable feedback during the beta period. Powered by your input and our continued collaboration with Samsung, we have further refined the algorithm to be even more accurate and representative. We are excited to announce that this refined metric is now generally available as a new core vitals metric to all developers in Android vitals.
We have defined a bad behavior threshold for excessive wake locks. Starting March 1, 2026, if your title does not meet this quality threshold, we may exclude the title from prominent discovery surfaces such as recommendations. In some cases, we may display a warning on your store listing to indicate to users that your app may cause excessive battery drain.
GOOGLE PLAY'S CORE TECHNICAL QUALITY METRICS: To maximize visibility on Google Play, keep your app below the bad behavior thresholds for these metrics.

| Metric | Definition |
|---|---|
| User-perceived crash rate | The percentage of daily active users who experienced at least one crash that is likely to have been noticeable |
| User-perceived ANR rate | The percentage of daily active users who experienced at least one ANR that is likely to have been noticeable |
| Excessive battery usage | The percentage of watch face sessions where battery usage exceeds 4.44% per hour |
| New: Excessive partial wake locks | The percentage of user sessions where cumulative, non-exempt wake lock usage exceeds 2 hours |
The excessive partial wake locks metric now joins the technical quality bars that Play expects all titles to maintain for a great user experience
This is the first in a series of new metrics designed to provide deeper insight into your app's resource utilization, enabling you to improve the experience for your users across the entire Android ecosystem.
1. Aligning our definition of excessive wake locks with user expectations
Apps can hold wake locks to prevent the user's device from entering sleep mode, letting the apps perform background work while the screen is off.
We consider a user session excessive if it holds more than 2 cumulative hours of non-exempt wake locks within a 24-hour period. These excessive sessions are a heavy contributor to battery drain. A wake lock is exempt if it is a system-held wake lock that offers clear user benefits that cannot be further optimized, such as audio playback or user-initiated data transfer.
The bad behavior threshold is crossed when 5% of an app's user sessions over the last 28 days are excessive. If your app exceeds this threshold, you will be alerted directly on your Android vitals overview page. You can read more about our definition on the Android Developer pages.
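As a refresher on the underlying API, here is a minimal sketch of holding a partial wake lock responsibly; the tag string is hypothetical, and it is this tag that surfaces in the Android vitals wake lock names table described below:

// Hold the wake lock only as long as the work requires, with a timeout as a safety net.
val powerManager = context.getSystemService(Context.POWER_SERVICE) as PowerManager
val wakeLock = powerManager.newWakeLock(PowerManager.PARTIAL_WAKE_LOCK, "myapp:upload")
try {
    wakeLock.acquire(10 * 60 * 1000L) // auto-release after 10 minutes guards against leaks
    doBackgroundWork()
} finally {
    if (wakeLock.isHeld) wakeLock.release()
}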
To help you understand your app's partial wake lock usage, we are enhancing the excessive partial wake locks page in Android vitals with a new wake lock names table. This table breaks down wake lock sessions by their specific tag names and P90/P99 durations, helping you identify the source of long wake locks by name so you can reproduce and debug them in your local development environment, such as Android Studio. You should investigate any wake locks with P90 or P99 durations above 60 minutes.

2. Excessive wake locks and their impact on Google Play visibility
If your title exceeds the bad behavior threshold for excessive wake locks, it may be ineligible for some discovery surfaces where users find new apps and games.
In some cases, we may also show a warning on your store listing to inform users that your app may cause their device's battery to drain faster.

Users may see a warning on your store listing if your app exceeds the bad behavior threshold. Note: The exact text and design are subject to change.
We know that making technical changes to your app's code and behavior can be time consuming, so we are making the metric available for you to diagnose and fix potential issues now, before the Store visibility changes begin on March 1, 2026.
3. What to do next
We encourage you to take the following steps to ensure your app delivers a great experience for users:
- Visit Android vitals: Review your app's performance on the new excessive partial wake locks metric. The metric is now visible to all developers whose apps have wake lock sessions.
- Discover excessive partial wake locks: Use the new wake lock names table to identify excessive partial wake locks.
- Consult the documentation: For detailed guidance on best practices and fixing common issues, please check out our technical blog post, technical video and updated developer documentation on wake locks.
Thank you for your continued partnership in building high-quality, performant experiences that users can rely on every day.
10 Nov 2025 10:00pm GMT
06 Nov 2025
Android Developers Blog
#WeArePlay: Meet the people making apps & games to improve your health
Posted by Robbie McLachlan - Developer Marketing

In our latest #WeArePlay stories, we meet the founders building apps and games that are making health and wellness fun and easy for everyone on Google Play. From getting heavy sleepers jumping into their mornings, to turning mental wellness into an immersive adventure game.
Here are a few of our favorites:
Jay, founder of Delightroom
Seoul, South Korea
With over 90 million downloads, Jay's app Alarmy helps heavy sleepers to get moving with smart, challenge-based alarms.
While studying computer science, Jay's biggest challenge wasn't debugging code, it was waking up for his morning classes. This struggle sparked an idea: what if there were an app that could help anyone get out of bed? Jay built a basic version and showcased it at a tech event, where it quickly drew attention. That prototype evolved into Alarmy, an app that uses creative missions, like solving math problems, doing squats, or snapping a photo, to get people moving so they fully wake up. Now available in over 30 languages and 170+ countries, Jay and his team are expanding beyond alarms, adding sleep tracking and wellness features to help even more people start their day right.
Ellie and Hazel, co-founders of Mind Monsters Games
Cambridge, UK
Ellie and Hazel's game, Betwixt, makes mental wellness more fun by using an interactive story to reduce anxiety.
While working in London's tech scene and later writing about psychology, Ellie noticed a pattern: many people turned to video games to ease stress but struggled to engage with traditional meditation. That's when she came up with the idea to combine the two. While curating a book on mental health, she met Hazel, a therapist, former world champion boxer, and game lover, and together they created Betwixt, an interactive fantasy adventure that guides players on a journey of self-discovery. By blending storytelling with evidence-based techniques, the game helps reduce anxiety and promote well-being. Now, with three new projects in development, Ellie and Hazel strive to turn play into a mental health tool.
Kevin and Robin, co-founders of MapMyFitness

Kevin and Robin's app, MapMyFitness, helps a global community of runners and cyclists map their routes and track their training.
Growing up across the Middle East, the Philippines, and Africa, Kevin developed a fascination with maps. In San Diego, while training for his second marathon, he built a simple MapMyRun website to map his routes. When other runners joined, former professional cyclist Robin reached out with a vision to also help cyclists discover and share maps. Together they founded MapMyFitness in 2007 and launched MapMyRide soon after, blending Kevin's technical expertise and Robin's athletic know-how. Today, the MapMy suite powers millions of walkers, runners, and riders with adaptive training plans, guided workouts, live safety tracking, and community challenges, all in support of their mission to "get everybody outside".
Discover more #WeArePlay stories from founders across the globe.
06 Nov 2025 5:00pm GMT
03 Nov 2025
Android Developers Blog
Health Connect Jetpack v1.1.0 is now available!
Posted by Brenda Shaw, Health & Home Partner Engineering Technical Writer
Health Connect is Android's on-device platform designed to simplify connectivity between health and fitness apps, allowing developers to build richer experiences with secure, centralized data. Today, we're thrilled to announce major updates that empower you to create more intelligent, connected, and nuanced applications: the stable release of the Health Connect Jetpack library (1.1.0) and expanded device type support.
Health Connect Jetpack Library 1.1.0 is Now Stable
We are excited to announce that the Health Connect Jetpack library has reached its 1.1.0 stable release. This milestone provides you with the confidence and reliability needed to build production-ready health and fitness experiences at scale.
Since its inception, Health Connect has grown into a robust platform supporting over 50 different data types across activity, sleep, nutrition, medical records, and body measurements. The journey to this stable release has been marked by significant advancements driven by developer feedback. Throughout the alpha and beta phases, we introduced critical features like background reads for continuous data monitoring, historical data sync to provide users with a comprehensive long-term view of their health, and support for critical new data types like Personal Health records, Exercise Routes, Training Plans, and Skin Temperature. This stable release encapsulates all of these enhancements, offering a powerful and dependable foundation for your applications.
Expanded Device Type Support
Accurate data representation is key to building trust and delivering precise insights. To that end, we have significantly expanded the list of supported device types in Health Connect. This will be available in 1.2.0-alpha02. When data is written to the platform, specifying the source device is crucial metadata that helps data readers understand its context and quality.
The newly supported device types include:
- Consumer Medical Device: For over-the-counter medical hardware like Continuous Glucose Monitors (CGMs) and Blood Pressure Cuffs.
- Glasses: For smart glasses and other head-mounted optical devices.
- Hearables: For earbuds, headphones, and hearing aids with sensing capabilities.
- Fitness Machine: For stationary equipment like treadmills and indoor cycles, as well as outdoor equipment like bicycles.
This expansion ensures data is represented more accurately, allowing you to build more nuanced experiences based on the specific hardware used to record it.
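For illustration, here is a minimal sketch of attaching device metadata when writing a record with today's stable API; Device.TYPE_WATCH is an existing constant, and the new types above are expected to follow the same pattern in 1.2.0-alpha02:

// Write a steps record and attribute it to the device that captured it.
val record = StepsRecord(
    startTime = start,
    startZoneOffset = null,
    endTime = end,
    endZoneOffset = null,
    count = 1200,
    metadata = Metadata.autoRecorded(device = Device(type = Device.TYPE_WATCH))
)
healthConnectClient.insertRecords(listOf(record))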
What's Next?
We encourage all developers to upgrade to the stable 1.1.0 Health Connect Jetpack library to take full advantage of these new features and improvements.
- Learn more in the official documentation and release notes.
- Provide feedback and report issues on our public issue tracker.
We are committed to the continued growth of the Health Connect platform. We can't wait to see the incredible experiences you build!
03 Nov 2025 5:00pm GMT
30 Oct 2025
Android Developers Blog
ML Kit's Prompt API: Unlock Custom On-Device Gemini Nano Experiences
Posted by Caren Chang, Developer Relations Engineer, Chengji Yan, Software Engineer, and Penny Li, Software Engineer
Today marks a major milestone for Android's on-device generative AI. We're announcing the Alpha release of the ML Kit GenAI Prompt API. This API allows you to send natural language and multimodal requests to Gemini Nano, addressing the demand for more control and flexibility when building with generative models.
Partners like Kakao are already building with Prompt API, creating unique experiences with real-world impact. You can experiment with Prompt API's powerful features today with minimal code.
Move beyond pre-built to custom on-device GenAI

Prompt API moves beyond pre-built functionality to support custom, app-specific GenAI use cases, allowing you to create unique features with complex data transformation. Prompt API uses Gemini Nano on-device to process data locally, enabling offline capability and improved user privacy.
Key use cases for Prompt API:
Prompt API allows for highly customized GenAI use cases. Here are some recommended examples:
- Image understanding: Analyzing photos for classification (e.g., creating a draft social media post or identifying tags such as "pets," "food," or "travel").
- Intelligent document scanning: Using a traditional ML model to extract text from a receipt, and then categorizing each item with Prompt API.
- Transforming data for the UI: Analyzing long-form content to create a short, engaging notification title.
- Content prompting: Suggesting topics for new journal entries based on a user's preference for themes.
- Content analysis: Classifying customer reviews into a positive, neutral, or negative category.
- Information extraction: Extracting important details about an upcoming event from an email thread.
Prompt API lets you create custom prompts and set optional generation parameters with just a few lines of code:
Generation.getClient().generateContent(
    generateContentRequest(
        ImagePart(bitmapImage),
        TextPart("Categorize this image as one of the following: car, motorcycle, bike, scooter, other. Return only the category as the response."),
    ) {
        // Optional parameters
        temperature = 0.2f
        topK = 10
        candidateCount = 1
        maxOutputTokens = 10
    },
)
For more detailed examples of implementing Prompt API, check out the official documentation and sample on GitHub.
Gemini Nano, performance, and prototyping
Prompt API currently performs best on the Pixel 10 device series, which runs the latest version of Gemini Nano (nano-v3). This version of Gemini Nano is built on the same architecture as Gemma 3n, the model we first shared with the open model community at I/O.
The shared foundation between Gemma 3n and nano-v3 enables developers to more easily prototype features. For those without a Pixel 10 device, you can start experimenting with prompts today by prototyping with Gemma 3n locally.
For the full list of devices that support GenAI APIs, refer to our device support documentation.
Learn more
Start implementing Prompt API in your Android apps today with guidance from our official documentation and the sample on GitHub.
30 Oct 2025 7:51pm GMT
Kakao Mobility uses Gemini Nano on-device to reduce costs and boost call conversion by 45%

Posted by Sa-ryong Kang and Caren Chang, Developer Relations Engineers
Kakao Mobility is South Korea's leading mobility business, offering a range of transportation and delivery services, including taxi-hailing, navigation, bike and scooter-sharing, parking, and parcel delivery, through its Kakao T app. The team at Kakao Mobility utilized Gemini Nano via ML Kit's GenAI Prompt API to offer parking assistance for its bike-sharing service and an improved address entry experience for its navigation and delivery services.
The Kakao T app serves over 30 million total users, and its bike-sharing service is one of its most popular services. But unfortunately, many users were improperly parking the bikes or scooters when not in use. This behavior led to an influx of parking violations and safety concerns, resulting in public complaints, fines, and towing. These issues began to negatively affect public perception of both Kakao Mobility and its bike-sharing services.
"By leveraging the ML Kit's GenAI Prompt API and Gemini Nano, we were able to quickly implement features that improve social value without compromising user experience. Kakao Mobility will continue to actively adopt on-device AI to provide safer and more convenient mobility services." - Wisuk Ryu, Head of Client Development Div
To address these concerns, the team initially designed an image recognition model to notify users if their bike or scooter was parked correctly according to local laws and safety standards. Running this model through the cloud would have incurred significant server costs. In addition, the users' uploaded photos contained information about their parking location, so the team wanted to avoid any privacy or security concerns. The team needed to find a more reliable and cost-effective solution.
The team also wanted to improve the entity extraction experience for the parcel delivery service within the Kakao T app. Previously, users were able to easily order parcel delivery on a chat interface, but drivers needed to enter the address into an order form manually to initiate the delivery order-a process which was cumbersome and prone to human error. The team sought to streamline this process, making order forms faster and less frustrating for delivery personnel.
Enhancing the user experience with ML Kit's GenAI Prompt API
The team tested and compared cloud-based Gemini models against Gemini Nano, accessed via ML Kit's GenAI Prompt API. "After reviewing privacy, cost, accuracy, and response speed, ML Kit's GenAI Prompt API was clearly the optimal choice," said Jinwoo Park, Android application developer at Kakao Mobility.
To address the issue of improperly parked bikes or scooters, the team used Gemini Nano's multimodal capability via the ML Kit GenAI API SDK to detect when a bike or scooter violates local regulations by parking on yellow tactile paving. With a carefully crafted prompt, they were able to evaluate more than 200 labeled images of parking photos while continually refining the inputs. This evaluation, measured through well-known metrics like accuracy, precision, recall, and the F1 score, ensured the feature met production-level quality and reliability standards.
Now users can take a photo of their parked bike or scooter, and the app will inform them if it is parked properly, or provide guidance if it is not. The entire process happens in seconds on the device, protecting the user's location and information.
To create a streamlined entity extraction feature, the team again used ML Kit's GenAI Prompt API to process users' delivery orders written in natural language. Traditional machine learning would have required a large training dataset and specialized machine learning expertise. Instead, they could simply start with a prompt like, "Extract the recipient's name, address, and phone number from the message." The team prepared around 200 high-quality evaluation examples and refined their prompt through many rounds of iteration to get the best result. The most effective method employed was a technique called few-shot prompting, and the results were carefully analyzed to ensure the output contained minimal hallucinations.
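To make the technique concrete, here is a minimal, hypothetical sketch of a few-shot extraction prompt; the example message and JSON schema are invented, and the request shape mirrors the Prompt API sample shown earlier:

// One or two worked examples steer the model toward the exact output shape.
val prompt = """
    Extract the recipient's name, address, and phone number from the message.
    Respond with JSON only, using the keys "name", "address", and "phone".

    Message: "Send this package to Minji Kim, 12 Teheran-ro, Seoul, 010-1234-5678."
    Output: {"name": "Minji Kim", "address": "12 Teheran-ro, Seoul", "phone": "010-1234-5678"}

    Message: "$userMessage"
    Output:
""".trimIndent()

val response = Generation.getClient().generateContent(
    generateContentRequest(TextPart(prompt)) {
        temperature = 0.1f // low temperature favors deterministic extraction
    }
)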
"ML Kit's Prompt API reduces developer overhead while offering strong security and reliability on-device. It enables rapid prototyping, lowers infrastructure dependency, and incurs no additional cost. There is no reason not to recommend it." - Jinwoo Park, Android application developer at Kakao Mobility
Delivering big results with ML Kit's GenAI Prompt API
As a result, the entity extraction feature correctly identifies the necessary details of each order, even when multiple names and addresses are entered. To maximize the feature's reach and provide a robust fallback, the team also implemented a cloud-based path using Gemini Flash.
Implementing ML Kit's GenAI Prompt API has yielded significant cost savings for the Kakao Mobility team by shifting to on-device AI. While the bike parking analysis feature has not yet launched, the address entry improvement has already delivered excellent results:
- Order completion time for delivery orders has been reduced by 24%.
- The conversion rate has increased by 45% for new users and 6% for existing users.
- During peak seasons, AI-powered orders increase by over 200%.
"Small business owners in particular have shared very positive feedback, saying the feature has made their work much more efficient and significantly reduced stress," Wisuk added.
After the image recognition feature for bike and scooter parking launches, the Kakao Mobility team is eager to improve it further. Urban parking environments can be challenging, and the team is exploring ways to filter out unnecessary regions from images.
"ML Kit's GenAI Prompt API offers high-quality features without additional overhead," said Jinwoo. "This reduced developer effort, shortened overall development time, and allowed us to focus on prompt tuning for higher-quality results."
Try ML Kit's GenAI Prompt API for yourself
30 Oct 2025 6:28pm GMT
redBus uses Gemini Flash via Firebase AI Logic to boost the length of customer reviews by 57%

Posted by Thomas Ezan, Developer Relations Engineer
As the world's largest online bus ticketing platform, redBus serves millions of travelers across India, Southeast Asia, and Latin America. The service is predominantly mobile-first, with over 90% of all bookings occurring through its app. However, this presents a significant challenge in gathering helpful feedback from a user base that speaks dozens of different languages. Typing reviews is inconvenient for many users, and a review written in Tamil, for instance, offers little value to a bus operator who only speaks Hindi.
To improve the quality and volume of user feedback, developers at redBus used Gemini Flash, a Google AI model providing low latency, to instantly transcribe and translate user voice recordings. To connect this powerful AI to their app without dealing with complex backend work, they used Firebase AI Logic. This new feature removed language barriers and simplified the review process, leading to a significant increase in user engagement and feedback quality.
Simplifying user feedback with a voice-first approach
The developer team wanted to create a frictionless, voice-first experience, so they designed a new flow where users could simply speak their review in their native language. To encourage adoption, the team implemented a prominent, animated mic button paired with the message: "Your voice matters, share your review in your own language." The message appears in the user's native language, consistent with their app language settings.
Using Gemini Flash, the application processes the user's voice recording. It first transcribes the speech into text, then translates it into English, and finally analyzes the sentiment to automatically generate a star rating and predict relevant tags based on the review content. It then creates a concise summary and autofills the review form fields with the generated content.
Developers chose Firebase AI Logic because it allowed them to build and ship the feature without help from the backend team, dramatically reducing development time and complexity. "The Firebase AI SDK was a key differentiator because it was the only solution that empowered our frontend team to build and ship the feature independently," Abhi explained. This approach enabled the team to go from concept to launch in just 30 days.
During implementation, the engineers used structured output, enabling the Gemini Flash model to return well-formed JSON responses, including the transcription, translation, sentiment analysis, and star rating, making it easy to populate the UI. This ensured a seamless user experience. Users are then shown both the original transcribed text in their own language and the translated, summarized version in English. Most importantly, the user is given full control to review and edit all AI-generated text and change the star rating before submitting the review. They can even speak again to add more content.
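As a rough sketch of this approach (the model name, JSON field names, and audio handling are assumptions, not redBus's actual configuration), requesting structured JSON from Gemini Flash through Firebase AI Logic might look like this:

// Ask Gemini Flash for JSON so the response can be parsed straight into the UI.
val model = Firebase.ai(backend = GenerativeBackend.googleAI()).generativeModel(
    modelName = "gemini-2.5-flash",
    generationConfig = generationConfig {
        responseMimeType = "application/json"
    }
)

val response = model.generateContent(
    content {
        // Assumed helper and argument order for attaching the recorded voice review;
        // see the Firebase AI Logic docs for the exact audio input API.
        inlineData(audioBytes, "audio/mp4")
        text(
            "Transcribe this review, translate it to English, and return JSON " +
            "with the fields: transcription, translation, summary, starRating, tags."
        )
    }
)
val reviewJson = response.text // parse and pre-fill the review form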
Driving engagement and capturing deeper user insights
The AI-powered voice review feature had a significant positive impact on user engagement. By enabling users to speak in their native language, redBus saw a 57% increase in review length and a notable increase in the overall volume of reviews.
The new feature successfully engaged a segment of the user base that was previously hesitant to type a review. Since implementation, user feedback has been overwhelmingly positive: customers appreciate the accuracy of the transcription and translation, and find the AI-generated summaries to be a concise overview of their longer, more detailed reviews.
Gemini Flash, although hosted in the cloud, delivered a highly responsive user experience. "A common observation from our partners and stakeholders has been that the level of responsiveness from our new AI feature is so fast and seamless that it feels like the AI is running directly on the device," said Abhi. "This is a testament to the low latency of the Gemini Flash model, which has been a key factor in its success."
An easier way to build with AI
Following the success of the voice review feature, the team at redBus is exploring other use cases for on-device generative AI to further enhance their app. They also plan to use Google AI Studio to test and iterate on prompts moving forward. For Abhi, the lesson is clear: "It's no longer about complex backend setups," he said. "It's about crafting the right prompt to build the next innovative feature that directly enhances the user experience."
Get started
30 Oct 2025 6:23pm GMT
New agentic experiences for Android Studio, new AI APIs, the first Android XR device and more, in our Fall episode of The Android Show

Posted by Matthew McCullough, VP of Product Management, Android Developer

We're in an important moment where AI changes everything, from how we work to the expectations that users have for your apps, and our goal on Android is to transform this AI evolution into opportunities for you and your users. Today in our Fall episode of The Android Show, we unpacked a bunch of new updates towards delivering the highest return on investment in building for the Android platform. From new agentic experiences for Gemini in Android Studio to a brand new on-device AI API to the first Android XR device, there's so much to cover - let's dive in!
Build your own custom Gen AI features with the new Prompt API
On Android, we offer AI models on-device or in the cloud. Today, we're excited to give you full flexibility to shape the output of the Gemini Nano model by passing in any prompt you can imagine with the new Prompt API, now in Alpha. For flagship Android devices, Gemini Nano lets you build efficient on-device options where users' data never leaves their device. At I/O this May, we launched our on-device GenAI APIs using the Gemini Nano model, making common tasks easier with simple APIs for tasks like summarization, proofreading, and image description. Kakao used the Prompt API to transform their parcel delivery service, replacing a slow, manual process, where users had to copy and paste details into a form, with a simple message requesting a delivery; the API automatically extracts all the necessary information. This single feature reduced order completion time by 24% and boosted new user conversion by an incredible 45%.
Tap into Nano Banana and Imagen using the Firebase SDK
When you want to add cutting-edge capabilities across the entire fleet of Android devices, our cloud-based AI solutions with Firebase AI Logic are a great fit. The excitement for models like Gemini 2.5 Flash Image (a.k.a. Nano Banana) and Imagen has been incredible; your users can now generate and edit images using Nano Banana, and for finer control, like selecting and transforming specific parts of an image, they can use the new mask-based editing feature that leverages the Imagen model. See our blog post to learn more. And beyond image generation, you can also use Gemini's multimodal capabilities to process text, audio, and image input. redBus, for example, revolutionized their user reviews using Gemini Flash via Firebase AI Logic to make giving feedback easier, more inclusive, and more reliable. The old problem? Short, low-quality text reviews. The new solution? Users can now leave reviews using voice input in their native languages, and from the audio Gemini Flash generates a structured text response, enabling longer, richer, and more reliable user reviews. It's a win for everyone: travelers, operators, and developers!
Helping you be more productive, with agentic experiences in Android Studio
Helping you be more productive is our goal with Gemini in Android Studio, and why we're infusing AI across our tooling. Developers like Pocket FM have seen an impressive development time savings of 50%. With the recent launch of Agent Mode, you can describe a complex goal in natural language and (with your permission), the agent plans and executes changes on multiple files across your project. The agent's answers are now grounded in the most modern development practices, and can even cross-reference our latest documentation in real time. We demoed new agentic experiences such as updates to Agent Mode, the ability to upgrade APIs on your behalf, the new project assistant, and we announced you'll be able to bring any LLM of your choice to power the AI functionality inside Android Studio, giving you more flexibility and choice on how you incorporate AI into your workflow. And for the newest stable features such as Back Up and Sync, make sure to download the latest stable version of Android Studio.
Elevating AI-assisted Android development, and improving LLMs with an Android benchmark
Our goal is to make it easier for Android developers to build great experiences. With more code being written by AI, developers have been asking for models that know more about Android development. We want to help developers be more productive, and that's why we're building a new task set for LLMs against a range of common Android development areas. The goal is to provide LLM makers with a benchmark, a north star of high quality Android development, so Android developers have a range of helpful models to choose for AI assistance.
To reflect the challenges of Android development, the benchmark is composed of real-world problems sourced from public GitHub Android repositories. Each evaluation attempts to have an LLM recreate a pull request, which is then verified using human-authored tests. This allows us to measure a model's ability to navigate complex codebases, understand dependencies, and solve the kind of problems you encounter every day.
We're finalizing the task set we'll be testing against LLMs, and will be sharing the results publicly in the coming months. We're looking forward to seeing how this shapes AI assisted Android development, and the additional flexibility and choice it gives you to build on Android.
The first Android XR device: Samsung Galaxy XR
Last week marked the launch of the first in a new wave of Android XR devices: the Galaxy XR, built in partnership with Samsung. Android XR devices are built entirely in the Gemini era, creating a major new platform opportunity for your app. And because Android XR is built on top of familiar Android frameworks, when building adaptively, you're already building for XR. To unlock the full potential of Android XR features, you can use the Jetpack XR SDK. The Calm team provides a perfect example of this in action: they transformed their mobile app into an immersive spatial experience, building their first functional XR menus on day one and a core XR experience in just two weeks by leveraging their existing Android codebase and the Jetpack XR SDK. You can read more about Android XR from our Spotlight Week last week.
Jetpack Navigation 3 is in Beta
The new Jetpack Navigation 3 library is now in beta! Instead of having behavior embedded into the library itself, we're providing 'how-to recipes' with good defaults (nav3 recipes on github). Out of the box, it's fully customizable, has animation support and is adaptive. Nav 3 was built from the ground up with Compose State as a fundamental building block. This means that it fully buys into the declarative programming model - you change the state you own and Nav3 reacts to that new state. On the Compose front, we've been working on making it faster and easier for you to build UI, covering the features you told us you needed from Views, while at the same time ensuring that Compose is performant.
Accelerate your business success on Google Play
With AI speeding up app development, Google Play is streamlining your workflow in Play Console so that your business growth can keep up with your code. The reimagined, goal-oriented app dashboard puts actionable metrics front and center. Plus, new capabilities are making your day-to-day operations faster, smarter, and more efficient: from pre-release testing with deep links validation to AI-powered analytics summaries and app strings localization. These updates are just the beginning. Check out the full list of announcements to get the latest from Play.
Watch the Fall episode of The Android Show
Thank you for tuning into our Fall episode of The Android Show. We're excited to continue building great things together, and this show is an important part of our conversation with you. We'd love to hear your ideas for our next episode, so please reach out on X or LinkedIn. A special thanks to my co-hosts, Rebecca Gutteridge and Adetunji Dahunsi, for helping us share the latest updates.
30 Oct 2025 5:09pm GMT