29 Jan 2026
Android Developers Blog
Accelerating your insights with faster, smarter monetization data and recommendations
Posted by Phalene Gowling, Product Manager, Google Play
To build a thriving business on Google Play, you need more than just data - you need a clear path to action. Today, we're announcing a suite of upgrades to the Google Play Console and beyond, giving you greater visibility into your financial performance and specific, data-backed steps to improve it.
From new, actionable recommendations to more granular sales reporting, here's how we're helping you maximize your ROI.
New: Monetization insights and recommendations
Launch Status: Rolling out today
The Monetize with Play overview page is designed to be your ultimate command center. Today, we're upgrading it with a new dynamic insights section that gives you a clearer view of your revenue drivers.
- Optimize conversion: Track your new Cart Conversion Rate.
- Reduce churn: Track cancelled subscriptions over time.
- Optimize pricing: Monitor your Average Revenue Per Paying User (ARPPU); a quick worked example follows this list.
- Increase buyer reach: Analyze how much of your engaged audience converts to buyers.
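As an illustration (with made-up numbers): if 2,000 users made a purchase last month and generated $50,000 in revenue, your ARPPU is $50,000 / 2,000 = $25, so a pricing change that lifts ARPPU to $27 would be worth roughly $4,000 in additional monthly revenue at the same number of buyers.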

We recently rolled out new Sales Channel data in your financial reporting. This allows you to attribute revenue to specific surfaces - including your app, the Play Store, and platforms like Google Play Games on PC.
For native-PC game developers and media & entertainment subscription businesses alike, this granularity allows you to calculate the precise ROI of your cross-platform investments and understand exactly which channels are driving your growth. Learn more.

The Orders API provides programmatic access to one-time and recurring order transaction details. If you haven't integrated it yet, this API allows you to ingest real-time data directly into your internal dashboards for faster reconciliation and improved customer support.
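If you want a feel for the integration, here is a minimal sketch of pulling a single order's details. It assumes the Play Developer API v3 orders.get REST endpoint and an OAuth 2.0 access token that already carries the androidpublisher scope; the package name and order ID are placeholders.

import java.net.URI
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse

// Illustrative sketch only: assumes the Play Developer API v3 orders.get endpoint
// and that accessToken is an OAuth 2.0 token with the androidpublisher scope.
fun fetchOrder(packageName: String, orderId: String, accessToken: String): String {
    val request = HttpRequest.newBuilder()
        .uri(URI.create(
            "https://androidpublisher.googleapis.com/androidpublisher/v3/" +
                "applications/$packageName/orders/$orderId"))
        .header("Authorization", "Bearer $accessToken")
        .GET()
        .build()

    val response = HttpClient.newHttpClient()
        .send(request, HttpResponse.BodyHandlers.ofString())

    // The JSON body contains the order's transaction details (state, pricing, tax, etc.),
    // ready to be parsed into your internal dashboards or reconciliation pipeline.
    return response.body()
}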

Level Infinite (Tencent) says the API "works so well that we want every app to use it."
Continuous improvements towards objective-led reporting
You've told us that the biggest challenge isn't just accessing data, but connecting the dots across different metrics to see the full picture. We're enhancing our reporting so that it goes beyond raw data dumps and provides straightforward, actionable insights that help you reach your business objectives faster.
Our goal is to create a more cohesive product experience centered around your objectives. By shifting from static reporting to dynamic, goal-oriented tools, we're making it easier to track and optimize for revenue, conversion rates, and churn. These updates are just the beginning of a transformation designed to help you turn data into measurable growth.
29 Jan 2026 5:00pm GMT
28 Jan 2026
Android Developers Blog
How Automated Prompt Optimization Unlocks Quality Gains for ML Kit’s GenAI Prompt API
Posted by Chetan Tekur, PM at AI Innovation and Research, Chao Zhao, SWE at AI Innovation and Research, Paul Zhou, Prompt Quality Lead at GCP Cloud AI and Industry Solutions, and Caren Chang, Developer Relations Engineer at Android
The era of on-device AI is no longer a promise; it is a production reality. With the release of Gemini Nano v3, we are placing unprecedented language understanding and multimodal capabilities directly into the palms of users. Through the Gemini Nano family of models, we have wide coverage of supported devices across the Android ecosystem. But for developers building the next generation of intelligent apps, access to a powerful model is only step one. The real challenge lies in customization: how do you tailor a foundation model to expert-level performance for your specific use case without breaking the constraints of mobile hardware?
In the server-side world, larger LLMs tend to be highly capable and require less domain adaptation. Even when adaptation is needed, more advanced options such as LoRA (Low-Rank Adaptation) fine-tuning can be feasible. However, the unique architecture of Android AICore prioritizes a shared, memory-efficient system model, which means that deploying custom LoRA adapters for every individual app comes with challenges on these shared system services.
But there is an alternate path that can be equally impactful. By leveraging Automated Prompt Optimization (APO) on Vertex AI, developers can achieve quality approaching fine-tuning, all while working seamlessly within the native Android execution environment. By focusing on superior system instructions, APO enables developers to tailor model behavior with greater robustness and scalability than traditional fine-tuning solutions.
Automated Prompt Optimization (APO)
To further help bring your ML Kit Prompt API use cases to production, we are excited to announce Automated Prompt Optimization (APO) targeting on-device models on Vertex AI. Automated Prompt Optimization is a tool that helps you automatically find the optimal prompt for your use cases.
Note: Gemini Nano v3 is a quality-optimized version of the highly acclaimed Gemma 3n model. Any prompt optimizations made on the open-source Gemma 3n model will apply to Gemini Nano v3 as well. On supported devices, ML Kit GenAI APIs leverage the nano-v3 model to maximize quality for Android developers.
APO treats the prompt not as static text, but as a programmable surface that can be optimized. It leverages server-side models (like Gemini Pro and Flash) to propose prompts, evaluate variations, and find the optimal one for your specific task. This process employs three specific technical mechanisms to maximize performance (a conceptual sketch follows the list):
- Automated Error Analysis: APO analyzes error patterns from training data to automatically identify specific weaknesses in the initial prompt.
- Semantic Instruction Distillation: It analyzes large numbers of training examples to distill the "true intent" of a task, creating instructions that more accurately reflect the real data distribution.
- Parallel Candidate Testing: Instead of testing one idea at a time, APO generates and tests numerous prompt candidates in parallel to identify the global maximum for quality.
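To make these mechanisms concrete, here is a minimal conceptual sketch of the optimization loop. It is not the Vertex AI implementation; evaluate, analyzeErrors, and proposeCandidates are hypothetical stand-ins for the server-side (Gemini Pro and Flash) calls that do the real work.

// Conceptual sketch of an APO-style loop, not the Vertex AI implementation.
data class Example(val input: String, val expectedOutput: String)
data class ScoredPrompt(val systemInstruction: String, val accuracy: Double)

// Hypothetical: score a candidate instruction against the labeled examples.
fun evaluate(systemInstruction: String, examples: List<Example>): Double = TODO()

// Hypothetical: summarize where the current instruction fails (automated error analysis).
fun analyzeErrors(systemInstruction: String, examples: List<Example>): String = TODO()

// Hypothetical: ask a server-side model for rewritten instructions that address the
// observed errors and distill the task's "true intent" from the data.
fun proposeCandidates(current: String, errorSummary: String, n: Int): List<String> = TODO()

fun optimizePrompt(
    initialInstruction: String,
    examples: List<Example>,
    rounds: Int = 5
): ScoredPrompt {
    var best = ScoredPrompt(initialInstruction, evaluate(initialInstruction, examples))
    repeat(rounds) {
        // 1. Automated error analysis: find the weaknesses of the current best prompt.
        val errorSummary = analyzeErrors(best.systemInstruction, examples)
        // 2. Semantic instruction distillation: propose data-grounded rewrites.
        val candidates = proposeCandidates(best.systemInstruction, errorSummary, n = 8)
        // 3. Parallel candidate testing: in practice these evaluations run concurrently;
        //    keep whichever instruction scores highest.
        val scored = candidates.map { ScoredPrompt(it, evaluate(it, examples)) }
        best = (scored + best).maxByOrNull { it.accuracy }!!
    }
    return best
}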
Why APO Can Approach Fine-Tuning Quality
It is a common misconception that fine-tuning always yields better quality than prompting. For modern foundation models like Gemini Nano v3, prompt engineering can be impactful by itself:
- Preserving general capabilities: Fine-tuning (PEFT/LoRA) forces a model's weights to over-index on a specific distribution of data. This often leads to "catastrophic forgetting," where the model gets better at your specific syntax but worse at general logic and safety. APO leaves the weights untouched, preserving the capabilities of the base model.
- Instruction following & strategy discovery: Gemini Nano v3 has been rigorously trained to follow complex system instructions. APO exploits this by finding the exact instruction structure that unlocks the model's latent capabilities, often discovering strategies that might be hard for human engineers to find.
To validate this approach, we evaluated APO across diverse production workloads. Our validation has shown consistent 5-8% accuracy gains across various use cases. Across multiple deployed on-device features, APO provided significant quality lifts:
| Use Case | Task Type | Task Description | Metric | APO Improvement |
| --- | --- | --- | --- | --- |
| Topic classification | Text classification | Classify a news article into topics such as finance, sports, etc. | Accuracy | +5% |
| Intent classification | Text classification | Classify a customer service query into intents | Accuracy | +8.0% |
| Webpage translation | Text translation | Translate a webpage from English to a local language | BLEU | +8.57% |
A Seamless, End-to-End Developer Workflow
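At a high level the workflow has two stages: run APO on Vertex AI against a set of labeled examples for your task to produce an optimized system instruction, then ship that instruction with your app and prepend it to each request you send to Gemini Nano through the ML Kit Prompt API. The snippet below is a minimal sketch of the on-device half; the instruction text and helper names are hypothetical, and the actual call into the ML Kit Prompt API is left as a placeholder (see the ML Kit documentation for the exact API surface).

// Minimal sketch of shipping an APO-optimized system instruction on-device.
// The instruction text and helpers below are hypothetical; the call into the
// ML Kit Prompt API is left as a placeholder.

// System instruction produced by Automated Prompt Optimization on Vertex AI.
// Because only the prompt changes, no model weights are modified on-device.
val OPTIMIZED_SYSTEM_INSTRUCTION = """
You are a news-topic classifier. Reply with exactly one label:
finance, sports, politics, technology, or entertainment.
"""

// Hypothetical placeholder for the ML Kit Prompt API call that runs Gemini Nano.
suspend fun runOnDeviceModel(prompt: String): String = TODO("Call the ML Kit Prompt API here")

suspend fun classifyTopic(article: String): String {
    // The optimized instruction is simply prepended to every request.
    val prompt = buildString {
        appendLine(OPTIMIZED_SYSTEM_INSTRUCTION.trim())
        appendLine()
        appendLine("Article:")
        append(article)
    }
    return runOnDeviceModel(prompt).trim()
}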
Conclusion
The release of Automated Prompt Optimization (APO) marks a turning point for on-device generative AI. By bridging the gap between foundation models and expert-level performance, we are giving developers the tools to build more robust mobile applications. Whether you are just starting with Zero-Shot Optimization or scaling to production with Data-Driven refinement, the path to high-quality on-device intelligence is now clearer. Launch your on-device use cases to production today with ML Kit's Prompt API and Vertex AI's Automated Prompt Optimization.
28 Jan 2026 5:00pm GMT
27 Jan 2026
Android Developers Blog
The Embedded Photo Picker
Posted by Roxanna Aliabadi Walker, Product Manager and Yacine Rezgui, Developer Relations Engineer
The Embedded Photo Picker: A more seamless way to privately request photos and videos in your app
Seamless integration, enhanced privacy
- Intuitive placement: The photo picker sits right below the camera button, giving users a clear choice between capturing a new photo or selecting an existing one.
- Dynamic preview: Immediately after a user taps a photo, they see a large preview, making it easy to confirm their selection. If they deselect the photo, the preview disappears, keeping the experience clean and uncluttered.
- Expand for more content: The initial view is simplified, offering easy access to recent photos. However, users can easily expand the photo picker to browse and choose from all photos and videos in their library, including cloud content from Google Photos.
- Respecting user choices: The embedded photo picker only grants access to the specific photos or videos the user selects, meaning apps can stop requesting the photo and video permissions altogether. This also saves apps like Messages from needing to handle situations where users only grant limited access to photos and videos.
Integrating the embedded photo picker is made easy with the Photo Picker Jetpack library.
implementation("androidx.photopicker:photopicker-compose:1.0.0-alpha01")@Composable fun EmbeddedPhotoPickerDemo() { // We keep track of the list of selected attachments var attachments by remember { mutableStateOf(emptyList<Uri>()) } val coroutineScope = rememberCoroutineScope() // We hide the bottom sheet by default but we show it when the user clicks on the button val scaffoldState = rememberBottomSheetScaffoldState( bottomSheetState = rememberStandardBottomSheetState( initialValue = SheetValue.Hidden, skipHiddenState = false ) ) // Customize the embedded photo picker val photoPickerInfo = EmbeddedPhotoPickerFeatureInfo .Builder() // Set limit the selection to 5 items .setMaxSelectionLimit(5) // Order the items selection (each item will have an index visible in the photo picker) .setOrderedSelection(true) // Set the accent color (red in this case, otherwise it follows the device's accent color) .setAccentColor(0xFF0000) .build() // The embedded photo picker state will be stored in this variable val photoPickerState = rememberEmbeddedPhotoPickerState( onSelectionComplete = { coroutineScope.launch { // Hide the bottom sheet once the user has clicked on the done button inside the picker scaffoldState.bottomSheetState.hide() } }, onUriPermissionGranted = { // We update our list of attachments with the new Uris granted attachments += it }, onUriPermissionRevoked = { // We update our list of attachments with the Uris revoked attachments -= it } ) SideEffect { val isExpanded = scaffoldState.bottomSheetState.targetValue == SheetValue.Expanded // We show/hide the embedded photo picker to match the bottom sheet state photoPickerState.setCurrentExpanded(isExpanded) } BottomSheetScaffold( topBar = { TopAppBar(title = { Text("Embedded Photo Picker demo") }) }, scaffoldState = scaffoldState, sheetPeekHeight = if (scaffoldState.bottomSheetState.isVisible) 400.dp else 0.dp, sheetContent = { Column(Modifier.fillMaxWidth()) { // We render the embedded photo picker inside the bottom sheet EmbeddedPhotoPicker( state = photoPickerState, embeddedPhotoPickerFeatureInfo = photoPickerInfo ) } } ) { innerPadding -> Column(Modifier.padding(innerPadding).fillMaxSize().padding(horizontal = 16.dp)) { Button(onClick = { coroutineScope.launch { // We expand the bottom sheet, which will trigger the embedded picker to be shown scaffoldState.bottomSheetState.partialExpand() } }) { Text("Open photo picker") } LazyVerticalGrid(columns = GridCells.Adaptive(minSize = 64.dp)) { // We render the image using the Coil library itemsIndexed(attachments) { index, uri -> AsyncImage( model = uri, contentDescription = "Image ${index + 1}", contentScale = ContentScale.Crop, modifier = Modifier.clickable { coroutineScope.launch { // When the user clicks on the media from the app's UI, we deselect it // from the embedded photo picker by calling the method deselectUri photoPickerState.deselectUri(uri) } } ) } } } } }
implementation("androidx.photopicker:photopicker:1.0.0-alpha01")<view class="androidx.photopicker.EmbeddedPhotoPickerView" android:id="@+id/photopicker" android:layout_width="match_parent" android:layout_height="match_parent" />
// We keep track of the list of selected attachments
private val _attachments = MutableStateFlow(emptyList<Uri>())
val attachments = _attachments.asStateFlow()

private lateinit var picker: EmbeddedPhotoPickerView
private var openSession: EmbeddedPhotoPickerSession? = null

val pickerListener = object : EmbeddedPhotoPickerStateChangeListener {
    override fun onSessionOpened(newSession: EmbeddedPhotoPickerSession) {
        openSession = newSession
    }

    override fun onSessionError(throwable: Throwable) {}

    override fun onUriPermissionGranted(uris: List<Uri>) {
        _attachments.value += uris
    }

    override fun onUriPermissionRevoked(uris: List<Uri>) {
        _attachments.value -= uris
    }

    override fun onSelectionComplete() {
        // Hide the embedded photo picker as the user is done with the photo/video selection
    }
}

override fun onCreate(savedInstanceState: Bundle?) {
    super.onCreate(savedInstanceState)
    setContentView(R.layout.main_view)

    // Add the embedded photo picker to a bottom sheet to allow dragging to display the full photo library
    picker = findViewById(R.id.photopicker)
    picker.addEmbeddedPhotoPickerStateChangeListener(pickerListener)
    picker.setEmbeddedPhotoPickerFeatureInfo(
        // Set a custom accent color
        EmbeddedPhotoPickerFeatureInfo.Builder().setAccentColor(0xFF0000).build()
    )
}
// Notify the embedded picker of a configuration change
openSession?.notifyConfigurationChanged(newConfig)

// Update the embedded picker to expand following a user interaction
openSession?.notifyPhotoPickerExpanded(/* expanded: */ true)

// Resize the embedded picker
openSession?.notifyResized(/* width: */ 512, /* height: */ 256)

// Show/hide the embedded picker (for example, after a form has been submitted)
openSession?.notifyVisibilityChanged(/* visible: */ false)

// Remove unselected media from the embedded picker after they have been
// unselected from the host app's UI
openSession?.requestRevokeUriPermission(removedUris)
For enhanced user privacy and security, the system renders the embedded photo picker in a way that prevents any drawing or overlaying. This intentional design choice means that your UX should consider the photo picker's display area as a distinct and dedicated element, much like you would plan for an advertising banner.
If you have any feedback or suggestions, submit tickets to our issue tracker.
27 Jan 2026 5:00pm GMT
