22 Feb 2026
TalkAndroid
Boba Story Lid Recipes – 2026
Look no further for all the latest Boba Story Lid Recipes. They are all right here!
22 Feb 2026 3:42am GMT
Dice Dreams Free Rolls – Updated Daily
Get the latest Dice Dreams free rolls links, updated daily! Complete with a guide on how to redeem the links.
22 Feb 2026 3:42am GMT
21 Feb 2026
TalkAndroid
Shock for Redmi and Poco Users: These Phones Will Never Get Android 17
Big news for Xiaomi fans-and not the good kind! Just when we were starting to think Android phone…
21 Feb 2026 4:30pm GMT
This hidden car menu could transform your driving experience in 2026
Think you've seen all Android Auto has to offer? Think again. Tucked below its familiar and reassuringly safe…
21 Feb 2026 7:30am GMT
20 Feb 2026
TalkAndroid
Google’s $800 Pixel 10 Is Free, But Only If You Do This
T-Mobile is offering the Pixel 10 series for next to nothing.
20 Feb 2026 5:28pm GMT
You won’t believe these groundbreaking sci-fi series and movies coming in 2026
Ready to embark on an interstellar journey to the future of streaming? Netflix's 2026 lineup is here to…
20 Feb 2026 4:30pm GMT
Google’s Pixel 10a Is Here, and It Looks Very Familiar
Pixel 10a is here. Or should we say Pixel 9a in disguise?
20 Feb 2026 2:50pm GMT
Google’s I/O 2026 Event Is Set For May 19
Expect to see new AI-powered features and more at the next I/O event.
20 Feb 2026 11:16am GMT
Sci-fi fans rejoice: an unexpected return is coming for this cult series
If you haven't spent the last fifteen years living under a rock (and if you have, welcome back…
20 Feb 2026 7:30am GMT
19 Feb 2026
TalkAndroid
Pixel 9a vs Pixel 10a: Google Phoned It In
Every year, the A-series Pixels follow the same pattern - modest upgrades, a slightly nicer spec sheet, and…
19 Feb 2026 6:01pm GMT
Vivo V70 goes official with ZEISS telephoto camera, massive battery and smarter OriginOS
The Vivo V70 refines the V-series formula
19 Feb 2026 5:13pm GMT
You’ll Finally See Which Apps Secretly Kill Your Phone Battery
Ever started your day with a nearly full phone battery, only to find it gasping for breath by…
19 Feb 2026 4:30pm GMT
Top Features of MileageWise’s Mileage Tracker App
The MileageWise Web Dashboard, paired with the Mileage Tracker App, offers a complete mileage logging solution. Whether you're…
19 Feb 2026 3:20pm GMT
No more timezone mix-ups: Your phone will always show the right time
Tired of landing in a new country, feeling all proud for having survived a long-haul flight, only to…
19 Feb 2026 7:30am GMT
18 Feb 2026
TalkAndroid
Stuck charging your EV? Android Auto’s surprising new feature will change everything
Ever found yourself stuck at a charging station, idly watching your EV's battery go from "barely alive" to…
18 Feb 2026 4:30pm GMT
Move Over Fallout: This Ambitious Sci-Fi Series Redefines Blockbuster TV
Hold onto your exosuits and get ready to dial again: while Fallout keeps fans grinning and Stranger Things…
18 Feb 2026 7:30am GMT
17 Feb 2026
Android Developers Blog
Get ready for Google I/O May 19-20
Posted by The Google I/O Team
Google I/O returns May 19-20
Google I/O is back! Join us online as we share our latest AI breakthroughs and updates in products across the company, from Gemini to Android, Chrome, Cloud, and more.
Tune in to learn about agentic coding and the latest Gemini model updates. The event will feature keynote addresses from Google leaders, forward-looking panel discussions, and product demos designed to showcase the next frontier of technology.
Register now and tune in live
Visit io.google and register to receive updates about Google I/O. Kicking off May 19 at 10am PT, this year we'll be livestreaming keynotes, demos, and more sessions across two days. We'll also be bringing back the popular Dialogues sessions featuring big thinkers and bold leaders discussing how AI is shaping our future.
17 Feb 2026 8:00pm GMT
Under the hood: Android 17’s lock-free MessageQueue
Posted by Shai Barack, Android Platform Performance Lead and Charles Munger, Principal Software Engineer

In Android 17, apps targeting SDK 37 or higher will receive a new implementation of MessageQueue where the implementation is lock-free. The new implementation improves performance and reduces missed frames, but may break clients that reflect on MessageQueue private fields and methods. To learn more about the behavior change and how you can mitigate impact, check out the MessageQueue behavior change documentation. This technical blog post provides an overview of the MessageQueue rearchitecture and how you can analyze lock contention issues using Perfetto.
The Looper drives the UI thread of every Android application. It pulls work from a MessageQueue, dispatches it to a Handler, and repeats. For two decades, MessageQueue used a single monitor lock (i.e. a synchronized code block) to protect its state.
Android 17 introduces a significant update to this component: a lock-free implementation named DeliQueue.
This post explains how locks affect UI performance, how to analyze these issues with Perfetto, and the specific algorithms and optimizations used to improve the Android main thread.
The problem: Lock Contention and Priority Inversion
The legacy MessageQueue functioned as a priority queue protected by a single lock. If a background thread posts a message while the main thread performs queue maintenance, the background thread blocks the main thread.
When two or more threads compete for exclusive use of the same lock, this is called lock contention. Contention can cause priority inversion, leading to UI jank and other performance problems.
Priority inversion can happen when a high-priority thread (like the UI thread) is made to wait for a low-priority thread. Consider this sequence:
- A low priority background thread acquires the MessageQueue lock to post the result of work that it did.
- A medium priority thread becomes runnable and the kernel's scheduler allocates it CPU time, preempting the low priority thread.
- The high priority UI thread finishes its current task and attempts to read from the queue, but is blocked because the low priority thread holds the lock.
The low-priority thread blocks the UI thread, and the medium-priority work delays it further.
Analyzing contention with Perfetto
You can diagnose these issues using Perfetto. In a standard trace, a thread blocked on a monitor lock enters the sleeping state, and Perfetto shows a slice indicating the lock owner.
When you query trace data, look for slices named "monitor contention with …" followed by the name of the thread that owns the lock and the code site where the lock was acquired.
Case study: Launcher jank
To illustrate, let's analyze a trace where a user experienced jank while navigating home on a Pixel phone immediately after taking a photo in the camera app. Below we see a screenshot of Perfetto showing the events leading up to the missed frame:

- Symptom: The Launcher main thread missed its frame deadline. It blocked for 18ms, which exceeds the 16ms deadline required for 60Hz rendering.
- Diagnosis: Perfetto showed the main thread blocked on the MessageQueue lock. A "BackgroundExecutor" thread owned the lock.
- Root Cause: The BackgroundExecutor runs at Process.THREAD_PRIORITY_BACKGROUND (very low priority). It performed a non-urgent task (checking app usage limits). Simultaneously, medium priority threads were using CPU time to process data from the camera. The OS scheduler preempted the BackgroundExecutor thread to run the camera threads.
This sequence caused the Launcher's UI thread (high priority) to become indirectly blocked by the camera worker thread (medium priority), which was keeping the Launcher's background thread (low priority) from releasing the lock.
Querying traces with PerfettoSQL
You can use PerfettoSQL to query trace data for specific patterns. This is useful if you have a large bank of traces from user devices or tests, and you're searching for specific traces that demonstrate a problem.
For example, this query finds MessageQueue contention coincident with dropped frames (jank):
INCLUDE PERFETTO MODULE android.monitor_contention;
INCLUDE PERFETTO MODULE android.frames.jank_type;

SELECT
  process_name,
  -- Convert duration from nanoseconds to milliseconds
  SUM(dur) / 1000000 AS sum_dur_ms,
  COUNT(*) AS count_contention
FROM android_monitor_contention
WHERE is_blocked_thread_main
  AND short_blocked_method LIKE "%MessageQueue%"
  -- Only look at app processes that had jank
  AND upid IN (
    SELECT DISTINCT(upid)
    FROM actual_frame_timeline_slice
    WHERE android_is_app_jank_type(jank_type) = TRUE
  )
GROUP BY process_name
ORDER BY SUM(dur) DESC;
In this more complex example, join trace data that spans multiple tables to identify MessageQueue contention during app startup:
INCLUDE PERFETTO MODULE android.monitor_contention;
INCLUDE PERFETTO MODULE android.startup.startups;

-- Join package and process information for startups
DROP VIEW IF EXISTS startups;
CREATE VIEW startups AS
SELECT startup_id, ts, dur, upid
FROM android_startups
JOIN android_startup_processes USING(startup_id);

-- Intersect monitor contention with startups in the same process.
DROP TABLE IF EXISTS monitor_contention_during_startup;
CREATE VIRTUAL TABLE monitor_contention_during_startup
USING SPAN_JOIN(android_monitor_contention PARTITIONED upid, startups PARTITIONED upid);

SELECT
  process_name,
  SUM(dur) / 1000000 AS sum_dur_ms,
  COUNT(*) AS count_contention
FROM monitor_contention_during_startup
WHERE is_blocked_thread_main
  AND short_blocked_method LIKE "%MessageQueue%"
GROUP BY process_name
ORDER BY SUM(dur) DESC;
You can use your favorite LLM to write PerfettoSQL queries to find other patterns.
At Google, we use BigTrace to run PerfettoSQL queries across millions of traces. In doing so, we confirmed that what we saw anecdotally was, in fact, a systemic issue. The data revealed that MessageQueue lock contention impacts users across the entire ecosystem, substantiating the need for a fundamental architectural change.
Solution: lock-free concurrency
We addressed the MessageQueue contention problem by implementing a lock-free data structure, using atomic memory operations rather than exclusive locks to synchronize access to shared state. A data structure or algorithm is lock-free if at least one thread can always make progress regardless of the scheduling behavior of the other threads. This property is generally hard to achieve, and is usually not worth pursuing for most code.
The atomic primitives
Lock-free software often relies on atomic Read-Modify-Write primitives that the hardware provides.
On older generation ARM64 CPUs, atomics used a Load-Link/Store-Conditional (LL/SC) loop. The CPU loads a value and marks the address. If another thread writes to that address, the store fails, and the loop retries. Because the threads can keep trying and succeed without waiting for another thread, this operation is lock-free.
ARM64 LL/SC loop example
retry:
ldxr x0, [x1] // Load exclusive from address x1 to x0
add x0, x0, #1 // Increment value by 1
stxr w2, x0, [x1] // Store exclusive.
// w2 gets 0 on success, 1 on failure
cbnz w2, retry // If w2 is non-zero (failed), branch to retry
Newer ARM architectures (ARMv8.1) support Large System Extensions (LSE) which include instructions in the form of Compare-And-Swap (CAS) or Load-And-Add (demonstrated below). In Android 17 we added support to the Android Runtime (ART) compiler to detect when LSE is supported and emit optimized instructions:
// ARMv8.1 LSE atomic example
ldadd x0, x1, [x2] // Atomic load-add.
                   // Faster, no loop required.
In our benchmarks, high-contention code that uses CAS achieves a ~3x speedup over the LL/SC variant.
The Java programming language offers atomic primitives via java.util.concurrent.atomic that rely on these and other specialized CPU instructions.
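For example, here is a minimal Kotlin sketch of a lock-free counter built on these primitives (illustrative only, not platform code):

import java.util.concurrent.atomic.AtomicLong

// A minimal lock-free counter: the CAS loop mirrors the LL/SC pattern shown above.
class LockFreeCounter {
    private val value = AtomicLong(0)

    fun increment(): Long {
        while (true) {
            val current = value.get()
            val next = current + 1
            // compareAndSet succeeds only if no other thread changed the value in between;
            // on failure we simply retry, and some thread always makes progress.
            if (value.compareAndSet(current, next)) return next
        }
    }

    // On ARMv8.1+ devices the runtime can compile this down to a single LSE-style atomic add.
    fun incrementFast(): Long = value.incrementAndGet()
}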
The Data Structure: DeliQueue
To remove lock contention from MessageQueue, our engineers designed a novel data structure called DeliQueue. DeliQueue separates Message insertion from Message processing:
- The list of Messages (Treiber stack): A lock-free stack. Any thread can push new Messages here without contention.
- The priority queue (min-heap): A heap of Messages to handle, exclusively owned by the Looper thread (hence no synchronization or locks are needed to access it).
Enqueue: pushing to a Treiber stack
The list of Messages is kept in a Treiber stack [1], a lock-free stack that uses a CAS loop to update the head pointer.
public class TreiberStack<E> {
    AtomicReference<Node<E>> top = new AtomicReference<Node<E>>();

    public void push(E item) {
        Node<E> newHead = new Node<E>(item);
        Node<E> oldHead;
        do {
            oldHead = top.get();
            newHead.next = oldHead;
        } while (!top.compareAndSet(oldHead, newHead));
    }

    public E pop() {
        Node<E> oldHead;
        Node<E> newHead;
        do {
            oldHead = top.get();
            if (oldHead == null) return null;
            newHead = oldHead.next;
        } while (!top.compareAndSet(oldHead, newHead));
        return oldHead.item;
    }
}
Source code based on Java Concurrency in Practice [2], available online and released to the public domain
Any producer can push new Messages to the stack at any time. This is like pulling a ticket at a deli counter - your number is determined by when you showed up, but the order you get your food in doesn't have to match. Because it's a linked stack, every Message is a sub-stack - you can see what the Message queue was like at any point in time by tracking the head and iterating forwards - you won't see any new Messages pushed on top, even if they're being added during your traversal.
Dequeue: bulk transfer to a min-heap
To find the next Message to handle, the Looper processes new Messages from the Treiber stack by walking the stack starting from the top and iterating until it finds the last Message that it previously processed. As the Looper traverses down the stack, it inserts Messages into the deadline-ordered min-heap. Since the Looper exclusively owns the heap, it orders and processes Messages without locks or atomics.
In walking down the stack, the Looper also creates links from stacked Messages back to their predecessors, thus forming a doubly-linked list. Creating the linked list is safe because links pointing down the stack are added via the Treiber stack algorithm with CAS, and links up the stack are only ever read and modified by the Looper thread. These back links are then used to remove Messages from arbitrary points in the stack in O(1) time.
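To make the drain concrete, here is a simplified single-consumer sketch in Kotlin; the Node class, field names, and the DrainSketch wrapper are illustrative assumptions, not the actual DeliQueue implementation:

import java.util.PriorityQueue
import java.util.concurrent.atomic.AtomicReference

// Hypothetical message node; the real Message bookkeeping in DeliQueue differs.
class Node(val deadline: Long) {
    var next: Node? = null   // points "down" the stack; set once during push
    var prev: Node? = null   // back link; written and read only by the consumer thread
}

class DrainSketch {
    private val top = AtomicReference<Node?>(null)                          // Treiber stack head
    private var lastDrained: Node? = null                                   // newest node already in the heap
    private val heap = PriorityQueue<Node>(compareBy<Node> { it.deadline }) // consumer-owned min-heap

    // Producer side: lock-free push, as in the TreiberStack example above.
    fun post(node: Node) {
        while (true) {
            val old = top.get()
            node.next = old
            if (top.compareAndSet(old, node)) return
        }
    }

    // Consumer side: walk from the current head down to the last node already drained,
    // inserting each new node into the min-heap and creating the back links.
    fun drain() {
        val newest = top.get() ?: return
        var cursor: Node? = newest
        while (cursor != null && cursor !== lastDrained) {
            heap.add(cursor)
            cursor.next?.prev = cursor   // back link enables O(1) unlinking later
            cursor = cursor.next
        }
        lastDrained = newest
    }

    // Only the single consumer calls this, so no synchronization is needed around the heap.
    fun nextDue(): Node? = heap.peek()
}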
This design provides O(1) insertion for producers (threads posting work to the queue) and amortized O(log N) processing for the consumer (the Looper).
Using a min-heap to order Messages also addresses a fundamental flaw in the legacy MessageQueue, where Messages were kept in a singly-linked list (rooted at the top). In the legacy implementation, removal from the head was O(1), but insertion had a worst case of O(N) - scaling poorly for overloaded queues! Conversely, insertion to and removal from the min-heap scale logarithmically, delivering competitive average performance but really excelling in tail latencies.
| | Legacy (locked) MessageQueue | DeliQueue |
| Insert | O(N) | O(1) for calling thread, O(log N) for Looper thread |
| Remove from head | O(1) | O(log N) |
In the legacy queue implementation, producers and the consumer used a lock to coordinate exclusive access to the underlying singly-linked list. In DeliQueue, the Treiber stack handles concurrent access, and the single consumer handles ordering its work queue.
Removal: consistency via tombstones
DeliQueue is a hybrid data structure, joining a lock-free Treiber stack with a single-threaded min-heap. Keeping these two structures in sync without a global lock presents a unique challenge: a message might be physically present in the stack but logically removed from the queue.
To solve this, DeliQueue uses a technique called "tombstoning." Each Message tracks its position in the stack via the backwards and forwards pointers, its index in the heap's array, and a boolean flag indicating whether it has been removed. When a Message is ready to run, the Looper thread will CAS its removed flag, then remove it from the heap and stack.
When another thread needs to remove a Message, it doesn't immediately extract it from the data structure. Instead, it performs the following steps:
- Logical removal: The thread uses a CAS to atomically set the Message's removal flag from false to true. The Message remains in the data structure as evidence of its pending removal, a so-called "tombstone". Once a Message is flagged for removal, DeliQueue treats it as if it no longer exists in the queue whenever it's found.
- Deferred cleanup: The actual removal from the data structure is the responsibility of the Looper thread, and is deferred until later. Rather than modifying the stack or heap, the remover thread adds the Message to another lock-free freelist stack.
- Structural removal: Only the Looper can interact with the heap or remove elements from the stack. When it wakes up, it clears the freelist and processes the Messages it contained. Each Message is then unlinked from the stack and removed from the heap.
This approach keeps all management of the heap single-threaded. It minimizes the number of concurrent operations and memory barriers required, making the critical path faster and simpler.
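A stripped-down Kotlin sketch of the logical-removal step might look like the following; the PendingMessage class and the freelist callback are assumptions for illustration, not the real DeliQueue API:

import java.util.concurrent.atomic.AtomicBoolean

// Hypothetical message with a tombstone flag; the real Message bookkeeping differs.
class PendingMessage {
    val removed = AtomicBoolean(false)
}

/**
 * Logical removal from any thread. Whoever wins the CAS "owns" the removal and hands
 * the tombstoned message to the Looper via a lock-free freelist for deferred, structural cleanup.
 */
fun cancel(msg: PendingMessage, pushToFreelist: (PendingMessage) -> Unit): Boolean {
    if (!msg.removed.compareAndSet(false, true)) {
        return false // already dispatched, or already cancelled by another thread
    }
    pushToFreelist(msg) // the Looper will later unlink it from the stack and heap
    return true
}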
Traversal: benign Java memory model data races
Most concurrency APIs, such as Future in the Java standard library or Kotlin's Job and Deferred, include a mechanism to cancel work before it completes. An instance of one of these classes matches 1:1 with a unit of underlying work, and calling cancel on an object cancels the specific operations associated with it.
Today's Android devices have multi-core CPUs and concurrent, generational garbage collection. But when Android was first developed, it was too expensive to allocate one object for each unit of work. Consequently, Android's Handler supports cancellation via numerous overloads of removeMessages - rather than removing a specific Message, it removes all Messages that match the specified criteria. In practice, this requires iterating through all Messages inserted before removeMessages was called and removing the ones that match.
When iterating forward, a thread only requires one ordered atomic operation, to read the current head of the stack. After that, ordinary field reads are used to find the next Message. If the Looper thread modifies the next fields while removing Messages, the Looper's write and another thread's read are unsynchronized - this is a data race. Normally, a data race is a serious bug that can cause huge problems in your app - leaks, infinite loops, crashes, freezes, and more. However, under certain narrow conditions, data races can be benign within the Java Memory Model. Suppose we start with a stack of:

We perform an atomic read of the head and see A. A's next pointer points to B. While we process B, the Looper might remove B and C by updating A to point to C and then to D.
Even though B and C are logically removed, B retains its next pointer to C, and C to D. The reading thread continues traversing through the detached removed nodes and eventually rejoins the live stack at D.
By designing DeliQueue to handle races between traversal and removal, we allow for safe, lock-free iteration.
Quitting: Native refcount
Looper is backed by a native allocation that must be manually freed once the Looper has quit. If some other thread is adding Messages while the Looper is quitting, it could use the native allocation after it's freed, a memory safety violation. We prevent this using a tagged refcount, where one bit of the atomic is used to indicate whether the Looper is quitting.
Before using the native allocation, a thread reads the refcount atomic. If the quitting bit is set, it returns that the Looper is quitting and the native allocation must not be used. If not, it attempts a CAS to increment the number of active threads using the native allocation. After doing what it needs to, it decrements the count. If the quitting bit was set after its increment but before the decrement, and the count is now zero, then it wakes up the Looper thread.
When the Looper thread is ready to quit, it uses CAS to set the quitting bit in the atomic. If the refcount was 0, it can proceed to free its native allocation. Otherwise, it parks itself, knowing that it will be woken up when the last user of the native allocation decrements the refcount. This approach does mean that the Looper thread waits for the progress of other threads, but only when it's quitting. That only happens once and is not performance sensitive, and it keeps the other code for using the native allocation fully lock-free.
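As a rough Kotlin sketch of the tagged-refcount idea (the class name, bit layout, and wake-up hook below are assumptions; the real Looper implementation differs):

import java.util.concurrent.atomic.AtomicLong

// One atomic holds both pieces of state: the top bit marks "quitting", the rest count active users.
class NativeHandleGuard {
    private val state = AtomicLong(0)
    private val QUITTING = 1L shl 63

    /** Returns false if the Looper is quitting and the native allocation must not be used. */
    fun tryAcquire(): Boolean {
        while (true) {
            val s = state.get()
            if (s and QUITTING != 0L) return false
            if (state.compareAndSet(s, s + 1)) return true
        }
    }

    fun release(wakeQuitter: () -> Unit) {
        val s = state.addAndGet(-1)
        // If quit was requested while we held a reference and we were the last user, wake the Looper.
        if (s and QUITTING != 0L && (s and QUITTING.inv()) == 0L) wakeQuitter()
    }

    /** Called by the Looper thread; returns true if it can free the native allocation immediately. */
    fun requestQuit(): Boolean {
        val prev = state.getAndUpdate { it or QUITTING }
        return (prev and QUITTING.inv()) == 0L // no active users: safe to free now; otherwise park
    }
}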
There are many other tricks and details in the implementation. You can learn more about DeliQueue by reviewing the source code.
Optimization: branchless programming
While developing and testing DeliQueue, the team ran many benchmarks and carefully profiled the new code. One issue identified using the simpleperf tool was pipeline flushes caused by the Message comparator code.
A standard comparator uses conditional jumps, with the condition for deciding which Message comes first simplified below:
static int compareMessages(@NonNull Message m1, @NonNull Message m2) {
    if (m1 == m2) {
        return 0;
    }

    // Primary queue order is by when.
    // Messages with an earlier when should come first in the queue.
    final long whenDiff = m1.when - m2.when;
    if (whenDiff > 0) return 1;
    if (whenDiff < 0) return -1;

    // Secondary queue order is by insert sequence.
    // If two messages were inserted with the same `when`, the one inserted
    // first should come first in the queue.
    final long insertSeqDiff = m1.insertSeq - m2.insertSeq;
    if (insertSeqDiff > 0) return 1;
    if (insertSeqDiff < 0) return -1;

    return 0;
}
This code compiles to conditional jumps (b.le and cbnz instructions). When the CPU encounters a conditional branch, it can't know whether the branch is taken until the condition is computed, so it doesn't know which instruction to read next, and has to guess, using a technique called branch prediction. In a case like binary search, the branch direction will be unpredictably different at each step, so it's likely that half the predictions will be wrong. Branch prediction is often ineffective in searching and sorting algorithms (such as the one used in a min-heap), because the cost of guessing wrong is larger than the improvement from guessing correctly. When the branch predictor guesses wrong, it must throw away the work it did after assuming the predicted value, and start again from the path that was actually taken - this is called a pipeline flush.
To find this issue, we profiled our benchmarks using the branch-misses performance counter, which records stack traces where the branch predictor guesses wrong. We then visualized the results with Google pprof, as shown below:
Recall that the original MessageQueue code used a singly-linked list for the ordered queue. Insertion traversed the list in sorted order as a linear search, stopping at the first element past the point of insertion and linking the new Message ahead of it. Removal from the head simply required unlinking the head. DeliQueue, by contrast, uses a min-heap, where mutations require reordering some elements (sifting up or down) with logarithmic complexity in a balanced data structure, and any comparison has an even chance of directing the traversal to a left child or a right child. The new algorithm is asymptotically faster, but exposes a new bottleneck as the search code stalls on branch misses half the time.
Realizing that branch misses were slowing down our heap code, we optimized the code using branch-free programming:
// Branchless logic
static int compareMessages(@NonNull Message m1, @NonNull Message m2) {
    final long when1 = m1.when;
    final long when2 = m2.when;
    final long insertSeq1 = m1.insertSeq;
    final long insertSeq2 = m2.insertSeq;

    // signum returns the sign (-1, 0, 1) of the argument,
    // and is implemented as pure arithmetic:
    // ((num >> 63) | (-num >>> 63))
    final int whenSign = Long.signum(when1 - when2);
    final int insertSeqSign = Long.signum(insertSeq1 - insertSeq2);

    // whenSign takes precedence over insertSeqSign,
    // so the formula below is such that insertSeqSign only matters
    // as a tie-breaker if whenSign is 0.
    return whenSign * 2 + insertSeqSign;
}
To understand the optimization, disassemble the two examples in Compiler Explorer and use LLVM-MCA, a CPU simulator that can generate an estimated timeline of CPU cycles.
The original code:

Index  01234567890123
[0,0]  DeER . . .      sub   x0, x2, x3
[0,1]  D=eER. . .      cmp   x0, #0
[0,2]  D==eER . .      cset  w0, ne
[0,3]  .D==eER . .     cneg  w0, w0, lt
[0,4]  .D===eER . .    cmp   w0, #0
[0,5]  .D====eER . .   b.le  #12
[0,6]  . DeE---R . .   mov   w1, #1
[0,7]  . DeE---R . .   b     #48
[0,8]  . D==eE-R . .   tbz   w0, #31, #12
[0,9]  . DeE--R . .    mov   w1, #-1
[0,10] . DeE--R . .    b     #36
[0,11] . D=eE-R . .    sub   x0, x4, x5
[0,12] . D=eER . .     cmp   x0, #0
[0,13] . D==eER. .     cset  w0, ne
[0,14] . D===eER .     cneg  w0, w0, lt
[0,15] . D===eER .     cmp   w0, #0
[0,16] . D====eER.     csetm w1, lt
[0,17] . D===eE-R.     cmp   w0, #0
[0,18] . .D===eER.     csinc w1, w1, wzr, le
[0,19] . .D====eER     mov   x0, x1
[0,20] . .DeE----R     ret
Note the one conditional branch, b.le, which avoids comparing the insertSeq fields if the result is already known from comparing the when fields.
The branchless code:

Index  012345678
[0,0]  DeER . .     sub  x0, x2, x3
[0,1]  DeER . .     sub  x1, x4, x5
[0,2]  D=eER. .     cmp  x0, #0
[0,3]  .D=eER .     cset w0, ne
[0,4]  .D==eER .    cneg w0, w0, lt
[0,5]  .DeE--R .    cmp  x1, #0
[0,6]  . DeE-R .    cset w1, ne
[0,7]  . D=eER .    cneg w1, w1, lt
[0,8]  . D==eeER    add  w0, w1, w0, lsl #1
[0,9]  . DeE--R     ret
Here, the branchless implementation takes fewer cycles and instructions than even the shortest path through the branchy code - it's better in all cases. The faster implementation plus the elimination of mispredicted branches resulted in a 5x improvement in some of our benchmarks!
However, this technique is not always applicable. Branchless approaches generally require doing work that will be thrown away, and if the branch is predictable most of the time, that wasted work can slow your code down. In addition, removing a branch often introduces a data dependency. Modern CPUs execute multiple operations per cycle, but they can't execute an instruction until its inputs from a previous instruction are ready. In contrast, a CPU can speculate about data in branches, and work ahead if a branch is predicted correctly.
Testing and Validation
Validating the correctness of lock-free algorithms is notoriously difficult!
In addition to standard unit tests for continuous validation during development, we also wrote rigorous stress tests to verify queue invariants and to attempt to induce data races if they existed. In our test labs we could run millions of test instances on emulated devices and on real hardware.
With Java ThreadSanitizer (JTSan) instrumentation, we could use the same tests to also detect some data races in our code. JTSan did not find any problematic data races in DeliQueue, but, surprisingly, it detected two concurrency bugs in the Robolectric framework, which we promptly fixed.
To improve our debugging capabilities, we built new analysis tools. Below is an example showing an issue in Android platform code where one thread is overloading another thread with Messages, causing a large backlog, visible in Perfetto thanks to the MessageQueue instrumentation feature that we added.
To enable MessageQueue tracing in the system_server process, include the following in your Perfetto configuration:
data_sources {
config {
name: "track_event"
target_buffer: 0 # Change this per your buffers configuration
track_event_config {
enabled_categories: "mq"
}
}
}
Impact
DeliQueue improves system and app performance by eliminating locks from MessageQueue.
- Synthetic benchmarks: multi-threaded insertion into busy queues is up to 5,000x faster than the legacy MessageQueue, thanks to improved concurrency (the Treiber stack) and faster insertions (the min-heap).
- In Perfetto traces acquired from internal beta testers, we see a 15% reduction in app main thread time spent in lock contention.
- On the same test devices, the reduced lock contention leads to significant improvements to the user experience, such as:
  - 4% fewer missed frames in apps.
  - 7.7% fewer missed frames in System UI and Launcher interactions.
  - 9.1% less time from app startup to the first frame drawn, at the 95th percentile.
Next steps
DeliQueue is rolling out to apps in Android 17. App developers should review preparing your app for the new lock-free MessageQueue on the Android Developers blog to learn how to test their apps.
References
[1] Treiber, R.K., 1986. Systems programming: Coping with parallelism. International Business Machines Incorporated, Thomas J. Watson Research Center.
[2] Goetz, B., Peierls, T., Bloch, J., Bowbeer, J., Holmes, D., & Lea, D. (2006). Java Concurrency in Practice. Addison-Wesley Professional.
17 Feb 2026 4:00pm GMT
13 Feb 2026
Android Developers Blog
Prepare your app for the resizability and orientation changes in Android 17
Posted by Miguel Montemayor, Developer Relations Engineer, Android
With the release of Android 16 in 2025, we shared our vision for a device ecosystem where apps adapt seamlessly to any screen-whether it's a phone, foldable, tablet, desktop, car display, or XR. Users expect their apps to work everywhere. Whether multitasking on a tablet, unfolding a device to read comfortably, or running apps in a desktop windowing environment, users expect the UI to fill the available display space and adapt to the device posture.
We introduced significant changes to orientation and resizability APIs to facilitate adaptive behavior, while providing a temporary opt-out to help you make the transition. We've already seen many developers successfully adapt to this transition when targeting API level 36.
Now with the release of the Android 17 Beta, we're moving to the next phase of our adaptive roadmap: Android 17 (API level 37) removes the developer opt-out for orientation and resizability restrictions on large screen devices (sw > 600 dp). When you target API level 37, your app must be capable of adapting to a variety of display sizes.
The behavior changes ensure that the Android ecosystem offers a consistent, high-quality experience on all device form factors.
What's changing in Android 17
Apps targeting Android 17 must ensure compatibility with the phase-out of manifest attributes and runtime APIs introduced in Android 16. We understand this may be a big transition for some apps, so later in this blog post we've included best practices and tools to help you avoid common issues.
No new changes have been introduced since Android 16, but the developer opt-out is no longer possible. As a reminder: when your app is running on a large screen-where large screen means that the smaller dimension of the display is greater than or equal to 600 dp-the following manifest attributes and APIs are ignored:
Note: As previously mentioned with Android 16, these changes do not apply for screens that are smaller than sw 600 dp or apps categorized as games based on the android:appCategory flag.
| Manifest attributes/API | Ignored values |
| screenOrientation | portrait, reversePortrait, sensorPortrait, userPortrait, landscape, reverseLandscape, sensorLandscape, userLandscape |
| setRequestedOrientation() | portrait, reversePortrait, sensorPortrait, userPortrait, landscape, reverseLandscape, sensorLandscape, userLandscape |
| resizeableActivity | all |
| minAspectRatio | all |
| maxAspectRatio | all |
Also, users retain control. In the aspect ratio settings, users can explicitly opt in to using the app's requested behavior.
Prepare your app
Apps need to support both landscape and portrait layouts across the full range of display sizes and aspect ratios in which users can run them, including resizable windows, because there will no longer be a way to restrict an app to a portrait or landscape orientation or to a fixed aspect ratio.
Test your app
Your first step is to test your app with these changes to make sure the app works well across display sizes.
Use Android 17 Beta 1 with the Pixel Tablet and Pixel Fold series emulators in Android Studio, and set the targetSdkPreview = "CinnamonBun". Alternatively, you can use the app compatibility framework by enabling the UNIVERSAL_RESIZABLE_BY_DEFAULT flag if your app does not target API level 36 yet.
We have additional tools to ensure your layouts adapt correctly. You can automatically audit your UI and get suggestions to make your UI more adaptive with Compose UI Check, and simulate specific display characteristics in your tests using DeviceConfigurationOverride.
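For example, here is a hedged sketch of a Compose UI test that uses DeviceConfigurationOverride to simulate a tablet-sized window; CheckoutScreen is a hypothetical composable, and exact test artifact versions may vary:

import androidx.compose.ui.test.DeviceConfigurationOverride
import androidx.compose.ui.test.ForcedSize
import androidx.compose.ui.test.junit4.createComposeRule
import androidx.compose.ui.unit.DpSize
import androidx.compose.ui.unit.dp
import org.junit.Rule
import org.junit.Test

class AdaptiveLayoutTest {
    @get:Rule
    val composeRule = createComposeRule()

    @Test
    fun checkoutScreen_rendersOnTabletSizedWindow() {
        composeRule.setContent {
            // Simulate a large-screen window without needing a tablet device or emulator.
            DeviceConfigurationOverride(
                DeviceConfigurationOverride.ForcedSize(DpSize(1280.dp, 800.dp))
            ) {
                CheckoutScreen() // hypothetical composable under test
            }
        }
        // Assertions about the adaptive layout would go here.
    }
}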
For apps that have historically restricted orientation and aspect ratio, we commonly see issues with skewed or misoriented camera previews, stretched layouts, inaccessible buttons, or loss of user state when handling configuration changes.
Let's take a look at some strategies for addressing these common issues.
Ensure camera compatibility
A common problem on landscape foldables, and in scenarios where the window differs from the full display (multi-window, desktop windowing, or connected displays), is a camera preview that appears stretched, rotated, or cropped.
Ensure your camera preview isn't stretched or rotated.
This issue often happens on large screen and foldable devices because apps assume fixed relationships between camera features (like aspect ratio and sensor orientation) and device features (like device orientation and natural orientation).
To ensure your camera preview adapts correctly to any window size or orientation, consider these four solutions:
Solution 1: Jetpack CameraX (preferred)
The simplest and most robust solution is to use the Jetpack CameraX library. Its PreviewView UI element is designed to handle all preview complexities automatically:
- PreviewView correctly adjusts for sensor orientation, device rotation, and scaling
- PreviewView maintains the aspect ratio of the camera image, typically by centering and cropping (FILL_CENTER)
- You can set the scale type to FIT_CENTER to letterbox the preview if needed
For more information, see Implement a preview in the CameraX documentation.
Solution 2: CameraViewfinder
If you are using an existing Camera2 codebase, the CameraViewfinder library (backward compatible to API level 21) is another modern solution. It simplifies displaying the camera feed by using a TextureView or SurfaceView and applying all the necessary transformations (aspect ratio, scale, and rotation) for you.
For more information, see the Introducing Camera Viewfinder blog post and Camera preview developer guide.
Solution 3: Manual Camera2 implementation
If you can't use CameraX or CameraViewfinder, you must manually calculate the orientation and aspect ratio and ensure the calculations are updated on each configuration change:
- Get the camera sensor orientation (for example, 0, 90, 180, 270 degrees) from CameraCharacteristics
- Get the device's current display rotation (for example, 0, 90, 180, 270 degrees)
- Use the camera sensor orientation and display rotation values to determine the necessary transformations for your SurfaceView or TextureView
- Ensure the aspect ratio of your output Surface matches the aspect ratio of the camera preview to prevent distortion
Important: Note the camera app might be running in a portion of the screen, either in multi-window or desktop windowing mode or on a connected display. For this reason, screen size should not be used to determine the dimensions of the camera viewfinder; use window metrics instead. Otherwise you risk a stretched camera preview.
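As a rough Kotlin sketch of the first three steps above (the function and parameter names are illustrative; the formula follows the standard Camera2 orientation calculation):

import android.hardware.camera2.CameraCharacteristics

/**
 * Rotation (in degrees) to apply to the camera image so it appears upright in the
 * current window. displayRotationDegrees should come from the window's current display
 * rotation (0, 90, 180, or 270), not from assumptions about the device's natural orientation.
 */
fun computeRelativeRotation(
    characteristics: CameraCharacteristics,
    displayRotationDegrees: Int
): Int {
    val sensorOrientation = characteristics.get(CameraCharacteristics.SENSOR_ORIENTATION) ?: 0
    val facingFront =
        characteristics.get(CameraCharacteristics.LENS_FACING) == CameraCharacteristics.LENS_FACING_FRONT
    // Front-facing sensors are mirrored, so the display rotation is applied in the opposite direction.
    val sign = if (facingFront) 1 else -1
    return (sensorOrientation - displayRotationDegrees * sign + 360) % 360
}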
For more information, see the Camera preview developer guide and Your Camera app on different form factors video.
Solution 4: Perform basic camera actions using an Intent
If you don't need many camera features, a simple and straightforward solution is to perform basic camera actions like capturing a photo or video using the device's default camera application. In this case, you can simply use an Intent instead of integrating with a camera library, for easier maintenance and adaptability.
For more information, see Camera intents.
Avoid stretched UI or inaccessible buttons
If your app assumes a specific device orientation or display aspect ratio, the app may run into issues when it's now used across various orientations or window sizes.
Ensure buttons, textfields, and other elements aren't stretched on large screens.
You may have set buttons, text fields, and cards to fillMaxWidth or match_parent. On a phone, this looks great. However, on a tablet or foldable in landscape, UI elements stretch across the entire large screen. In Jetpack Compose, you can use the widthIn modifier to set a maximum width for components to avoid stretched content:
Box(
contentAlignment = Alignment.Center,
modifier = Modifier.fillMaxSize()
) {
Column(
modifier = Modifier
.widthIn(max = 300.dp) // Prevents stretching beyond 300dp
.fillMaxWidth() // Fills width up to 300dp
.padding(16.dp)
) {
// Your content
}
}
If a user opens your app in landscape orientation on a foldable or tablet, action buttons like Save or Login at the bottom of the screen may be rendered offscreen. If the container is not scrollable, the user can be blocked from proceeding. In Jetpack Compose, you can add a verticalScroll modifier to your component:
Column(
    modifier = Modifier
        .fillMaxSize()
        .verticalScroll(rememberScrollState())
        .padding(16.dp)
) {
    // Form fields and bottom action buttons
}
By combining max-width constraints with vertical scrolling, you ensure your app remains functional and usable, regardless of how wide or short the app window size becomes.
See our guide on building adaptive layouts.
Preserve state with configuration changes
Removing orientation and aspect ratio restrictions means your app's window size will change much more frequently. Users may rotate their device, fold/unfold it, or resize your app dynamically in split-screen or desktop windowing modes.
By default, these configuration changes destroy and recreate your activity. If your app does not properly manage this lifecycle event, users will have a frustrating experience: scroll positions are reset to the top, half-filled forms are wiped clean, and navigation history is lost. To ensure a seamless adaptive experience, it's critical that your app preserves state through these configuration changes. With Jetpack Compose, you can opt out of recreation and instead allow window size changes to recompose your UI to reflect the new amount of space available.
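For example, a minimal Compose sketch (the screen and its contents are illustrative) that keeps text input and scroll position across window size changes:

import androidx.compose.foundation.layout.Column
import androidx.compose.foundation.lazy.LazyColumn
import androidx.compose.foundation.lazy.rememberLazyListState
import androidx.compose.material3.TextField
import androidx.compose.runtime.Composable
import androidx.compose.runtime.getValue
import androidx.compose.runtime.mutableStateOf
import androidx.compose.runtime.saveable.rememberSaveable
import androidx.compose.runtime.setValue

@Composable
fun SearchScreen() {
    // rememberSaveable survives configuration changes and activity recreation,
    // so the query isn't wiped when the window rotates, resizes, or unfolds.
    var query by rememberSaveable { mutableStateOf("") }

    // rememberLazyListState is saveable too, preserving the scroll position.
    val listState = rememberLazyListState()

    Column {
        TextField(value = query, onValueChange = { query = it })
        LazyColumn(state = listState) {
            // search results
        }
    }
}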
See our guide on saving UI state.
Targeting API level 37 by August 2027
If your app previously opted out of these changes when targeting API level 36, your app will only be impacted by the Android 17 opt-out removal after your app targets API level 37. To help you plan ahead and make the necessary adjustments to your app, here's the timeline when these changes will take effect:
- Android 17: Changes described above will be the baseline experience for large screen devices (smallest screen width > 600 dp) for apps that target API level 37. Developers will not have an option to opt out.
The deadlines for targeting a specific API level are app-store specific. For Google Play, new apps and updates will be required to target API level 37, making this behavior mandatory for distribution in August 2027.
Preparing for Android 17
Refer to the Android 17 changes page for all changes impacting apps in Android 17. To test your app, download Android 17 Beta 1 and update to targetSdkPreview = "CinnamonBun" or use the app compatibility framework to enable specific changes.
The future of Android is adaptive, and we're here to help you get there. As you prepare for Android 17, we encourage you to review our guides for building adaptive layouts and our large screen quality guidelines. These resources are designed to help you handle multiple form factors and window sizes with confidence.
Don't wait. Start getting ready for Android 17 today!
13 Feb 2026 7:34pm GMT
The First Beta of Android 17

Posted by Matthew McCullough, VP of Product Management, Android Developer
Today we're releasing the first beta of Android 17, continuing our work to build a platform that prioritizes privacy, security, and refined performance. This build continues our work for more adaptable Android apps, introduces significant enhancements to camera and media capabilities, new tools for optimizing connectivity, and expanded profiles for companion devices. This release also highlights a fundamental shift in the way we're bringing new releases to the developer community, from the traditional Developer Preview model to the Android Canary program.
Beyond the Developer Preview
Android has replaced the traditional "Developer Preview" with a continuous Canary channel. This new "always-on" model offers three main benefits:
- Faster Access: Features and APIs land in Canary as soon as they pass internal testing, rather than waiting for a quarterly release.
- Better Stability: Early "battle-testing" in Canary results in a more polished Beta experience with new APIs and behavior changes that are closer to being final.
- Easier Testing: Canary supports OTA updates (no more manual flashing) and, as a separate update channel, more easily integrates with CI workflows and gives you the earliest window to give immediate feedback on upcoming potential changes.
The Android 17 schedule
With the release of the Android 17 Beta, we're moving to the next phase of our adaptive roadmap: Android 17 (API level 37) removes the developer opt-out for orientation and resizability restrictions on large screen devices (sw > 600 dp).
When your app targets SDK 37, it must be ready to adapt. Users expect their apps to work everywhere-whether multitasking on a tablet, unfolding a device, or using a desktop windowing environment-and they expect the UI to fill the space and respect their device posture.
Key Changes for SDK 37
Apps targeting Android 17 must ensure compatibility with the phase-out of manifest attributes and runtime APIs introduced in Android 16. When running on a large screen (smaller dimension ≥ 600dp), the following attributes and APIs will be ignored:
| Manifest attributes/API | Ignored values |
| screenOrientation | portrait, reversePortrait, sensorPortrait, userPortrait, landscape, reverseLandscape, sensorLandscape, userLandscape |
| setRequestedOrientation() | portrait, reversePortrait, sensorPortrait, userPortrait, landscape, reverseLandscape, sensorLandscape, userLandscape |
| resizeableActivity | all |
| minAspectRatio | all |
| maxAspectRatio | all |
These changes are specific to large screens; they do not apply to screens smaller than sw600dp (including traditional slate form factor phones). Additionally, apps categorized as games (based on the android:appCategory flag) are exempt from these restrictions.
It is also important to note that users remain in control. They can explicitly opt in to or out of an app's default behavior via the system's aspect ratio settings.
Updates to configuration changes
Performance
Lock-free MessageQueue
In Android 17, apps targeting SDK 37 or higher will receive a new implementation of android.os.MessageQueue where the implementation is lock-free. The new implementation improves performance and reduces missed frames, but may break clients that reflect on MessageQueue private fields and methods.
Generational garbage collection
Android 17 introduces generational garbage collection to ART's Concurrent Mark-Compact collector. This optimization introduces more frequent, less resource-intensive young-generation collections alongside full-heap collections, aiming to reduce overall garbage collection CPU cost and duration. ART improvements are also available to over a billion devices running Android 12 (API level 31) and higher through Google Play System updates.
Static final fields now truly final
Starting in Android 17, apps targeting Android 17 or later can no longer modify "static final" fields, allowing the runtime to apply performance optimizations more aggressively. An attempt to do so via reflection (including deep reflection) will always throw IllegalAccessException. Modifying them via JNI's SetStatic<Type>Field family of methods will immediately crash the application.
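A hedged Kotlin sketch of what this change means in practice; Config and DEBUG_OVERLAY are hypothetical, and on earlier target SDK levels the reflective write may still succeed:

// Hypothetical holder: @JvmField + val on an object compiles to a static final field.
object Config {
    @JvmField val DEBUG_OVERLAY: Boolean = false
}

fun tryToPatchFinalField() {
    try {
        val field = Config::class.java.getDeclaredField("DEBUG_OVERLAY")
        field.isAccessible = true     // "deep reflection"
        field.setBoolean(null, true)  // apps targeting SDK 37+: throws IllegalAccessException
    } catch (e: IllegalAccessException) {
        // Use a supported configuration mechanism instead of rewriting static finals.
    }
}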
Custom Notification View Restrictions
To reduce memory usage we are restricting the size of custom notification views. This update closes a loophole that allows apps to bypass existing limits using URIs. This behavior is gated by the target SDK version and takes effect for apps targeting API 37 and higher.
New performance debugging ProfilingManager triggers
We've introduced several new system triggers to ProfilingManager to help you collect in-depth data to debug performance issues. These triggers are TRIGGER_TYPE_COLD_START, TRIGGER_TYPE_OOM, and TRIGGER_TYPE_KILL_EXCESSIVE_CPU_USAGE.
To understand how to set up the new system triggers, check out the trigger-based profiling and retrieve and analyze profiling data documentation.
fun updateCameraSession(session: CameraCaptureSession, newOutputConfigs: List<OutputConfiguration>) {
    // Dynamically update the session without closing and reopening
    try {
        // Update the output configurations
        session.updateOutputConfigurations(newOutputConfigs)
    } catch (e: CameraAccessException) {
        // Handle error
    }
}
Logical multi-camera device metadata
When working with logical cameras that combine multiple physical camera sensors, you can now request additional metadata from all active physical cameras involved in a capture, not just the primary one. Previously, you had to implement workarounds, sometimes allocating unnecessary physical streams, to obtain metadata from secondary active cameras (e.g., during a lens switch for zoom where a follower camera is active). This feature introduces a new key, LOGICAL_MULTI_CAMERA_ADDITIONAL_RESULTS, in CaptureRequest and CaptureResult. By setting this key to ON in your CaptureRequest, the TotalCaptureResult will include metadata from these additional active physical cameras. You can access this comprehensive metadata using TotalCaptureResult.getPhysicalCameraTotalResults() to get more detailed information that may enable you to optimize resource usage in your camera applications.
Versatile Video Coding (VVC) Support
Android 17 adds support for the Versatile Video Coding (VVC) standard. This includes defining the video/vvc MIME type in MediaFormat, adding new VVC profiles in MediaCodecInfo, and integrating support into MediaExtractor. This feature will be coming to devices with hardware decode support and capable drivers.
Constant Quality for Video Recording
We have added setVideoEncodingQuality() to MediaRecorder. This allows you to configure a constant quality (CQ) mode for video encoders, giving you finer control over video quality beyond simple bitrate settings.
Background Audio Hardening
Starting in Android 17, the audio framework will enforce restrictions on background audio interactions including audio playback, audio focus requests, and volume change APIs to ensure that these changes are started intentionally by the user.
If the app tries to call audio APIs while it is not in a valid lifecycle state, the audio playback and volume change APIs will fail silently, without an exception thrown or a failure message provided. The audio focus API will fail with the result code AUDIOFOCUS_REQUEST_FAILED.
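As an illustration of handling the failed focus request (playIfAllowed and startPlayback are placeholder names):

import android.content.Context
import android.media.AudioAttributes
import android.media.AudioFocusRequest
import android.media.AudioManager

fun playIfAllowed(context: Context, startPlayback: () -> Unit) {
    val audioManager = context.getSystemService(AudioManager::class.java)
    val focusRequest = AudioFocusRequest.Builder(AudioManager.AUDIOFOCUS_GAIN)
        .setAudioAttributes(
            AudioAttributes.Builder()
                .setUsage(AudioAttributes.USAGE_MEDIA)
                .setContentType(AudioAttributes.CONTENT_TYPE_MUSIC)
                .build()
        )
        .build()

    when (audioManager.requestAudioFocus(focusRequest)) {
        AudioManager.AUDIOFOCUS_REQUEST_GRANTED -> startPlayback()
        // On Android 17, focus requests made from an invalid (background) lifecycle state fail with
        // AUDIOFOCUS_REQUEST_FAILED; defer playback until the user brings the app to the foreground.
        else -> Unit
    }
}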
Privacy and Security
Deprecation of Cleartext Traffic Attribute
The android:usesCleartextTraffic attribute is now deprecated. If your app targets Android 17 (API level 37) or higher and relies on usesCleartextTraffic="true" without a corresponding Network Security Configuration, it will default to disallowing cleartext traffic. You are encouraged to migrate to Network Security Configuration files for granular control.
We are introducing a public Service Provider Interface (SPI) for an implementation of HPKE hybrid cryptography, enabling secure communication using a combination of public key and symmetric encryption (AEAD).
Connectivity and Telecom
Enhanced VoIP Call History
We are introducing user preference management for app VoIP call history integration. This includes support for caller and participant avatar URIs in the system dialer, enabling granular user control over call log privacy and enriching the visual display of integrated VoIP call logs.
Wi-Fi Ranging and Proximity
Wi-Fi Ranging has been enhanced with new Proximity Detection capabilities, supporting continuous ranging and secure peer-to-peer discovery. Updates to Wi-Fi Aware ranging include new APIs for peer handles and PMKID caching for 11az secure ranging.
Developer Productivity and Tools
Updates for companion device apps
We have introduced two new profiles to the CompanionDeviceManager to improve device distinction and permission handling:
- Medical Devices: This profile allows medical device mobile applications to request all necessary permissions with a single tap, simplifying the setup process.
- Fitness Trackers: The DEVICE_PROFILE_FITNESS_TRACKER profile allows companion apps to explicitly indicate they are managing a fitness tracker. This ensures accurate user experiences with distinct icons while reusing existing watch role permissions.
Also, the CompanionDeviceManager now offers a unified dialog for device association and Nearby permission requests. You can leverage the new setExtraPermissions method in AssociationRequest.Builder to bundle nearby permission prompts within the existing association flow, reducing the number of dialogs presented to the user.
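A hedged Kotlin sketch of how this might be wired together; setExtraPermissions is described above, but the exact parameter shape shown here is an assumption:

import android.Manifest
import android.companion.AssociationRequest

// Assumed usage: bundle the nearby-devices permission prompt into the association flow.
val request = AssociationRequest.Builder()
    .setDeviceProfile(AssociationRequest.DEVICE_PROFILE_FITNESS_TRACKER) // new profile described above
    .setExtraPermissions(listOf(Manifest.permission.BLUETOOTH_CONNECT))  // assumed parameter shape
    .build()
// Then pass the request to CompanionDeviceManager.associate() as usual.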
Get started with Android 17
You can enroll any supported Pixel device to get this and future Android Beta updates over-the-air. If you don't have a Pixel device, you can use the 64-bit system images with the Android Emulator in Android Studio.
If you are currently in the Android Beta program, you will be offered an over-the-air update to Beta 1.
If you have Android 26Q1 Beta and would like to take the final stable release of 26Q1 and exit Beta, you need to ignore the over-the-air update to 26Q2 Beta 1 and wait for the release of 26Q1.
We're looking for your feedback so please report issues and submit feature requests on the feedback page. The earlier we get your feedback, the more we can include in our work on the final release.
For the best development experience with Android 17, we recommend that you use the latest preview of Android Studio (Panda). Once you're set up, here are some of the things you should do:
- Compile against the new SDK, test in CI environments, and report any issues in our tracker on the feedback page.
- Test your current app for compatibility, learn whether your app is affected by changes in Android 17, and install your app onto a device or emulator running Android 17 and extensively test it.
We'll update the preview/beta system images and SDK regularly throughout the Android 17 release cycle. Once you've installed a beta build, you'll automatically get future updates over-the-air for all later previews and Betas.
For complete information, visit the Android 17 developer site.
Join the conversation
As we move toward Platform Stability and the final stable release of Android 17 later this year, your feedback remains our most valuable asset. Whether you're an early adopter on the Canary channel or an app developer testing on Beta 1, consider joining our communities and filing feedback. We're listening.
13 Feb 2026 7:23pm GMT
29 Jan 2026
Android Developers Blog
Accelerating your insights with faster, smarter monetization data and recommendations
Posted by Phalene Gowling, Product Manager, Google Play
To build a thriving business on Google Play, you need more than just data - you need a clear path to action. Today, we're announcing a suite of upgrades to the Google Play Console and beyond, giving you greater visibility into your financial performance and specific, data-backed steps to improve it.
From new, actionable recommendations to more granular sales reporting, here's how we're helping you maximize your ROI.
New: Monetization insights and recommendations
Launch Status: Rolling out today
The Monetize with Play overview page is designed to be your ultimate command center. Today, we are upgrading it with a new dynamic insights section designed to give you a clearer view of your revenue drivers.
- Optimize conversion: Track your new Cart Conversion Rate.
- Reduce churn: Track cancelled subscriptions over time.
- Optimize pricing: Monitor your Average Revenue Per Paying User (ARPPU).
- Increase buyer reach: Analyze how much of your engaged audience converts to buyers.

We recently rolled out new Sales Channel data in your financial reporting. This allows you to attribute revenue to specific surfaces - including your app, the Play Store, and platforms like Google Play Games on PC.
For native-PC game developers and media & entertainment subscription businesses alike, this granularity allows you to calculate the precise ROI of your cross-platform investments and understand exactly which channels are driving your growth. Learn more.

The Orders API provides programmatic access to one-time and recurring order transaction details. If you haven't integrated it yet, this API allows you to ingest real-time data directly into your internal dashboards for faster reconciliation and improved customer support.

Level Infinite (Tencent) says the API "works so well that we want every app to use it."
Continuous improvements towards objective-led reporting
You've told us that the biggest challenge isn't just accessing data, but connecting the dots across different metrics to see the full picture. We're enhancing reporting that goes beyond data dumps to provide straightforward, actionable insights that help you reach business objectives faster.
Our goal is to create a more cohesive product experience centered around your objectives. By shifting from static reporting to dynamic, goal-oriented tools, we're making it easier to track and optimize for revenue, conversion rates, and churn. These updates are just the beginning of a transformation designed to help you turn data into measurable growth.
29 Jan 2026 5:00pm GMT
28 Jan 2026
Android Developers Blog
How Automated Prompt Optimization Unlocks Quality Gains for ML Kit’s GenAI Prompt API
Posted by Chetan Tekur, PM at AI Innovation and Research, Chao Zhao, SWE at AI Innovation and Research, Paul Zhou, Prompt Quality Lead at GCP Cloud AI and Industry Solutions, and Caren Chang, Developer Relations Engineer at Android
Automated Prompt Optimization (APO)
To further help bring your ML Kit Prompt API use cases to production, we are excited to announce Automated Prompt Optimization (APO) targeting On-Device models on Vertex AI. Automated Prompt Optimization is a tool that helps you automatically find the optimal prompt for your use cases.
The era of On-Device AI is no longer a promise-it is a production reality. With the release of Gemini Nano v3, we are placing unprecedented language understanding and multimodal capabilities directly into the palms of users. Through the Gemini Nano family of models, we have wide coverage of supported devices across the Android Ecosystem. But for developers building the next generation of intelligent apps, access to a powerful model is only step one. The real challenge lies in customization: How do you tailor a foundation model to expert-level performance for your specific use case without breaking the constraints of mobile hardware?
In the server-side world, the larger LLMs tend to be highly capable and require less domain adaptation. Even when needed, more advanced options such as LoRA (Low-Rank Adaptation) fine-tuning can be feasible options. However, the unique architecture of Android AICore prioritizes a shared, memory-efficient system model. This means that deploying custom LoRA adapters for every individual app comes with challenges on these shared system services.
But there is an alternate path that can be equally impactful. By leveraging Automated Prompt Optimization (APO) on Vertex AI, developers can achieve quality approaching fine-tuning, all while working seamlessly within the native Android execution environment. By focusing on superior system instruction, APO enables developers to tailor model behavior with greater robustness and scalability than traditional fine-tuning solutions.
Note: Gemini Nano v3 is a quality-optimized version of the highly acclaimed Gemma 3N model. Any prompt optimizations made on the open source Gemma 3N model will apply to Gemini Nano v3 as well. On supported devices, ML Kit GenAI APIs leverage the nano-v3 model to maximize quality for Android developers.
APO treats the prompt not as static text, but as a programmable surface that can be optimized. It leverages server-side models (like Gemini Pro and Flash) to propose prompts, evaluate variations, and find the optimal one for your specific task. This process employs three specific technical mechanisms to maximize performance:
- Automated Error Analysis: APO analyzes error patterns from training data to automatically identify specific weaknesses in the initial prompt.
- Semantic Instruction Distillation: It analyzes massive training examples to distill the "true intent" of a task, creating instructions that more accurately reflect the real data distribution.
- Parallel Candidate Testing: Instead of testing one idea at a time, APO generates and tests numerous prompt candidates in parallel to identify the global maximum for quality.
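To make the parallel candidate testing idea concrete, here is a purely conceptual Kotlin sketch of the kind of loop APO automates; generateCandidates(), evaluatePrompt(), and LabeledExample are hypothetical stand-ins, not Vertex AI or ML Kit APIs:

data class Candidate(val systemInstruction: String, val score: Double)

suspend fun optimizePrompt(
    initialInstruction: String,
    examples: List<LabeledExample>
): Candidate = coroutineScope {
    // Propose many prompt variants, informed by error analysis of the initial instruction
    val candidates = generateCandidates(initialInstruction, examples)
    candidates
        // Score every candidate against the labeled examples in parallel
        .map { instruction ->
            async { Candidate(instruction, evaluatePrompt(instruction, examples)) }
        }
        .awaitAll()
        // Keep the best-scoring system instruction (candidates is assumed non-empty)
        .maxByOrNull { it.score }!!
}

In practice you never write this loop yourself: APO runs the proposal, evaluation, and selection on Vertex AI and hands you back the winning system instruction.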
Why APO Can Approach Fine Tuning Quality
It is a common misconception that fine-tuning always yields better quality than prompting. For modern foundation models like Gemini Nano v3, prompt engineering can be impactful by itself:
- Preserving general capabilities: Fine-tuning (PEFT/LoRA) forces a model's weights to over-index on a specific distribution of data. This often leads to "catastrophic forgetting," where the model gets better at your specific syntax but worse at general logic and safety. APO leaves the weights untouched, preserving the capabilities of the base model.
- Instruction following & strategy discovery: Gemini Nano v3 has been rigorously trained to follow complex system instructions. APO exploits this by finding the exact instruction structure that unlocks the model's latent capabilities, often discovering strategies that might be hard for human engineers to find.
To validate this approach, we evaluated APO across diverse production workloads. Our validation has shown consistent 5-8% accuracy gains across various use cases. Across multiple deployed on-device features, APO provided significant quality lifts.
| Use Case | Task Type | Task Description | Metric | APO Improvement |
| --- | --- | --- | --- | --- |
| Topic classification | Text classification | Classify a news article into topics such as finance, sports, etc. | Accuracy | +5% |
| Intent classification | Text classification | Classify a customer service query into intents | Accuracy | +8.0% |
| Webpage translation | Text translation | Translate a webpage from English to a local language | BLEU | +8.57% |
A Seamless, End-to-End Developer Workflow
Conclusion
The release of Automated Prompt Optimization (APO) marks a turning point for on-device generative AI. By bridging the gap between foundation models and expert-level performance, we are giving developers the tools to build more robust mobile applications. Whether you are just starting with Zero-Shot Optimization or scaling to production with Data-Driven refinement, the path to high-quality on-device intelligence is now clearer. Launch your on-device use cases to production today with ML Kit's Prompt API and Vertex AI's Automated Prompt Optimization.
28 Jan 2026 5:00pm GMT
27 Jan 2026
Android Developers Blog
The Embedded Photo Picker
Posted by Roxanna Aliabadi Walker, Product Manager and Yacine Rezgui, Developer Relations Engineer
The Embedded Photo Picker: A more seamless way to privately request photos and videos in your app
Seamless integration, enhanced privacy
- Intuitive placement: The photo picker sits right below the camera button, giving users a clear choice between capturing a new photo or selecting an existing one.
- Dynamic preview: Immediately after a user taps a photo, they see a large preview, making it easy to confirm their selection. If they deselect the photo, the preview disappears, keeping the experience clean and uncluttered.
- Expand for more content: The initial view is simplified, offering easy access to recent photos. However, users can easily expand the photo picker to browse and choose from all photos and videos in their library, including cloud content from Google Photos.
- Respecting user choices: The embedded photo picker only grants access to the specific photos or videos the user selects, meaning apps can stop requesting the photo and video permissions altogether. This also saves the Messages app from needing to handle situations where users only grant limited access to photos and videos.
Integrating the embedded photo picker is made easy with the Photo Picker Jetpack library.
implementation("androidx.photopicker:photopicker-compose:1.0.0-alpha01")

@Composable
fun EmbeddedPhotoPickerDemo() {
    // We keep track of the list of selected attachments
    var attachments by remember { mutableStateOf(emptyList<Uri>()) }
    val coroutineScope = rememberCoroutineScope()

    // We hide the bottom sheet by default but we show it when the user clicks on the button
    val scaffoldState = rememberBottomSheetScaffoldState(
        bottomSheetState = rememberStandardBottomSheetState(
            initialValue = SheetValue.Hidden,
            skipHiddenState = false
        )
    )

    // Customize the embedded photo picker
    val photoPickerInfo = EmbeddedPhotoPickerFeatureInfo.Builder()
        // Limit the selection to 5 items
        .setMaxSelectionLimit(5)
        // Order the items selection (each item will have an index visible in the photo picker)
        .setOrderedSelection(true)
        // Set the accent color (red in this case, otherwise it follows the device's accent color)
        .setAccentColor(0xFF0000)
        .build()

    // The embedded photo picker state will be stored in this variable
    val photoPickerState = rememberEmbeddedPhotoPickerState(
        onSelectionComplete = {
            coroutineScope.launch {
                // Hide the bottom sheet once the user has clicked on the done button inside the picker
                scaffoldState.bottomSheetState.hide()
            }
        },
        onUriPermissionGranted = {
            // We update our list of attachments with the new Uris granted
            attachments += it
        },
        onUriPermissionRevoked = {
            // We update our list of attachments with the Uris revoked
            attachments -= it
        }
    )

    SideEffect {
        val isExpanded = scaffoldState.bottomSheetState.targetValue == SheetValue.Expanded
        // We show/hide the embedded photo picker to match the bottom sheet state
        photoPickerState.setCurrentExpanded(isExpanded)
    }

    BottomSheetScaffold(
        topBar = { TopAppBar(title = { Text("Embedded Photo Picker demo") }) },
        scaffoldState = scaffoldState,
        sheetPeekHeight = if (scaffoldState.bottomSheetState.isVisible) 400.dp else 0.dp,
        sheetContent = {
            Column(Modifier.fillMaxWidth()) {
                // We render the embedded photo picker inside the bottom sheet
                EmbeddedPhotoPicker(
                    state = photoPickerState,
                    embeddedPhotoPickerFeatureInfo = photoPickerInfo
                )
            }
        }
    ) { innerPadding ->
        Column(Modifier.padding(innerPadding).fillMaxSize().padding(horizontal = 16.dp)) {
            Button(onClick = {
                coroutineScope.launch {
                    // We expand the bottom sheet, which will trigger the embedded picker to be shown
                    scaffoldState.bottomSheetState.partialExpand()
                }
            }) {
                Text("Open photo picker")
            }
            LazyVerticalGrid(columns = GridCells.Adaptive(minSize = 64.dp)) {
                // We render the image using the Coil library
                itemsIndexed(attachments) { index, uri ->
                    AsyncImage(
                        model = uri,
                        contentDescription = "Image ${index + 1}",
                        contentScale = ContentScale.Crop,
                        modifier = Modifier.clickable {
                            coroutineScope.launch {
                                // When the user clicks on the media from the app's UI, we deselect it
                                // from the embedded photo picker by calling the method deselectUri
                                photoPickerState.deselectUri(uri)
                            }
                        }
                    )
                }
            }
        }
    }
}
implementation("androidx.photopicker:photopicker:1.0.0-alpha01")

<view class="androidx.photopicker.EmbeddedPhotoPickerView"
    android:id="@+id/photopicker"
    android:layout_width="match_parent"
    android:layout_height="match_parent" />
// We keep track of the list of selected attachments
private val _attachments = MutableStateFlow(emptyList<Uri>())
val attachments = _attachments.asStateFlow()

private lateinit var picker: EmbeddedPhotoPickerView
private var openSession: EmbeddedPhotoPickerSession? = null

val pickerListener = object : EmbeddedPhotoPickerStateChangeListener {
    override fun onSessionOpened(newSession: EmbeddedPhotoPickerSession) {
        openSession = newSession
    }

    override fun onSessionError(throwable: Throwable) {}

    override fun onUriPermissionGranted(uris: List<Uri>) {
        _attachments.value += uris
    }

    override fun onUriPermissionRevoked(uris: List<Uri>) {
        _attachments.value -= uris
    }

    override fun onSelectionComplete() {
        // Hide the embedded photo picker as the user is done with the photo/video selection
    }
}

override fun onCreate(savedInstanceState: Bundle?) {
    super.onCreate(savedInstanceState)
    setContentView(R.layout.main_view)

    // Add the embedded photo picker to a bottom sheet to allow dragging to display the full photo library
    picker = findViewById(R.id.photopicker)
    picker.addEmbeddedPhotoPickerStateChangeListener(pickerListener)
    picker.setEmbeddedPhotoPickerFeatureInfo(
        // Set a custom accent color
        EmbeddedPhotoPickerFeatureInfo.Builder().setAccentColor(0xFF0000).build()
    )
}
// Notify the embedded picker of a configuration change
openSession.notifyConfigurationChanged(newConfig)

// Update the embedded picker to expand following a user interaction
openSession.notifyPhotoPickerExpanded(/* expanded: */ true)

// Resize the embedded picker
openSession.notifyResized(/* width: */ 512, /* height: */ 256)

// Show/hide the embedded picker (after a form has been submitted)
openSession.notifyVisibilityChanged(/* visible: */ false)

// Remove unselected media from the embedded picker after they have been
// unselected from the host app's UI
openSession.requestRevokeUriPermission(removedUris)
For enhanced user privacy and security, the system renders the embedded photo picker in a way that prevents any drawing or overlaying. This intentional design choice means that your UX should consider the photo picker's display area as a distinct and dedicated element, much like you would plan for an advertising banner.
If you have any feedback or suggestions, submit tickets to our issue tracker.
27 Jan 2026 5:00pm GMT
Beyond the smartphone: How JioHotstar optimized its UX for foldables and tablets
Posted by Prateek Batra, Developer Relations Engineer, Android Adaptive Apps
To help ensure a premium experience for its vast audience, JioHotstar elevated the viewing experience by optimizing their app for foldables and tablets. They accomplished this by following Google's adaptive app guidance and utilizing resources like samples, codelabs, cookbooks, and documentation to help create a consistently seamless and engaging experience across all display sizes.
JioHotstar's large screen challenge
JioHotstar offered an excellent user experience on standard phones and the team wanted to take advantage of new form factors. To start, the team evaluated their app against the large screen app quality guidelines to understand the optimizations required to extend their user experience to foldables and tablets. To achieve Tier 1 large screen app status, the team implemented two strategic updates to adapt the app across various form factors and differentiate on foldables. By addressing the unique challenges posed by foldable and tablet devices, JioHotstar aims to deliver a high-quality and immersive experience across all display sizes and aspect ratios.
What they needed to do
JioHotstar's user interface, designed primarily for standard phone displays, encountered challenges in adapting hero image aspect ratios, menus, and show screens to the diverse screen sizes and resolutions of other form factors. This often led to image cropping, letterboxing, low resolution, and unutilized space, particularly in landscape mode. To help fully leverage the capabilities of tablets and foldables and deliver an optimized user experience across these device types, JioHotstar focused on refining the UI to ensure optimal layout flexibility, image rendering, and navigation across a wider range of devices.
What they did
For a better viewing experience on large screens, JioHotstar took the initiative to enhance its app by incorporating WindowSizeClass and creating optimized layouts for compact, medium and extended widths. This allowed the app to adapt its user interface to various screen dimensions and aspect ratios, ensuring a consistent and visually appealing UI across different devices.
JioHotstar followed this pattern using the Material 3 Adaptive library to determine how much space the app has available: first invoking the currentWindowAdaptiveInfo() function, then choosing a layout for each of the three window size classes:
val sizeClass = currentWindowAdaptiveInfo().windowSizeClass
if (sizeClass.isWidthAtLeastBreakpoint(WIDTH_DP_EXPANDED_LOWER_BOUND)) {
    showExpandedLayout()
} else if (sizeClass.isWidthAtLeastBreakpoint(WIDTH_DP_MEDIUM_LOWER_BOUND)) {
    showMediumLayout()
} else {
    showCompactLayout()
}
The breakpoints are checked in order, from largest to smallest, because the API performs a greater-than-or-equal comparison: any width that is at least the EXPANDED bound is, by definition, also at least the MEDIUM bound.
JioHotstar is also able to provide a premium experience unique to foldable devices: Tabletop Mode. When a foldable device is partially folded, this feature relocates the video player to the top half of the screen and the video controls to the bottom half, enabling a hands-free viewing experience.
val isTabletopMode = currentWindowAdaptiveInfo().windowPosture.isTabletop
if (isTabletopMode) {
Column {
Player(Modifier.weight(1f))
Controls(Modifier.weight(1f))
}
} else {
usualPlayerLayout()
}
JioHotstar is now meeting the Large Screen app quality guidelines for Tier 1. The team leveraged adaptive app guidance, utilizing samples, codelabs, cookbooks, and documentation to incorporate these recommendations.
To further improve the user experience, JioHotstar increased touch target sizes, to the recommended 48dp, on video discovery pages, ensuring accessibility across large screen devices. Their video details page is now adaptive, adjusting to screen sizes and orientations. They moved beyond simple image scaling, instead leveraging window size classes to detect window size and density in real time and load the most appropriate hero image for each form factor, helping to enhance visual fidelity. Navigation was also improved, with layouts adapting to suit different screen sizes.
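As an illustration of that approach (not JioHotstar's actual code; the drawable names and helper composables are hypothetical), a hero image can be swapped per window size class using the same breakpoint checks shown earlier:

val sizeClass = currentWindowAdaptiveInfo().windowSizeClass
val heroImageRes = when {
    // Widest windows get the highest-resolution, widest-crop hero asset
    sizeClass.isWidthAtLeastBreakpoint(WIDTH_DP_EXPANDED_LOWER_BOUND) -> R.drawable.hero_expanded
    sizeClass.isWidthAtLeastBreakpoint(WIDTH_DP_MEDIUM_LOWER_BOUND) -> R.drawable.hero_medium
    else -> R.drawable.hero_compact
}
Image(
    painter = painterResource(heroImageRes),
    contentDescription = null,
    contentScale = ContentScale.Crop
)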
Now users can view their favorite content from JioHotstar on large screen devices with an improved and highly optimized viewing experience.
Achieving Tier 1 large screen app status with Google is a milestone that reflects the strength of our shared vision. At JioHotstar, we have always believed that optimizing for large screen devices goes beyond adaptability, it's about elevating the viewing experience for audiences who are rapidly embracing foldables, tablets, and connected TVs.
Leveraging Google's Jetpack libraries and guides allowed us to combine our insights on content consumption with their expertise in platform innovation. This collaboration allowed both teams to push boundaries, address gaps, and co-create a seamless, immersive experience across every screen size.
Together, we're proud to bring this enhanced experience to millions of users and to set new benchmarks in how India and the world experience streaming.
Sonu Sanjeev
Senior Software Development Engineer
27 Jan 2026 3:30am GMT
26 Jan 2026
Android Developers Blog
Trade-in mode on Android 16+
Supporting Longevity through Faster Diagnostics
Posted by Rachel S, Android Product Manager
Trade-in mode, a new feature on Android 16 and above, enables faster assessment of a factory-reset phone or tablet by bypassing the setup wizard.
Supporting device longevity
Android is committed to making devices last longer. With device longevity comes device circularity: phones and tablets traded in and resold. GSMA reported that secondhand phones have around 80-90% lower carbon emissions than new phones. The secondhand device market has grown substantially in both volume and value, a trend projected to continue.
Android 16 and above offers an easy way to access device information on any factory reset phone or tablet via the new tradeinmode parameter, accessed via adb commands. This means you can view quality indicators of a phone or tablet, skipping each setup wizard step. Simply connect a phone or tablet with adb, and use tradeinmode commands to get information about the device.
Trade-in mode: What took minutes, now takes seconds
Faster trade-in processing - By bypassing the setup wizard, trade-in mode speeds up device trade-ins. The mode enables immediate access to the 'health' of a device, helping everyone along the secondhand value chain check the quality of devices that have been wiped. We've already seen significant increases in the speed of processing secondhand Android devices!
Secure evaluation - To ensure the device information is only accessed in secure situations, the device must 1) be factory reset, 2) not have cellular service, 3) not have connectivity or a connected account, and 4) be running a non-debuggable build.
Get device health information with one command - You can view all of the device information below with a single adb command from your workstation (adb shell tradeinmode getstatus), skipping the setup wizard:
- Device information
  - Device IMEI(s)
  - Device serial number
  - Brand
  - Model
  - Manufacturer
  - Device model, e.g., Pixel 9
  - Device brand, e.g., Google
  - Device manufacturer, e.g., Google
  - Device name, e.g., tokay
  - API level to ensure correct OS version, e.g., launch_level : 34
- Battery health
  - Cycle count
  - Health
  - State, e.g., unknown, good, overheat, dead, over_voltage, unspecified_failure, cold, fair, not_available, inconsistent
  - Battery manufacturing date
  - Date first used
  - Serial number (to help provide an indication of genuine parts, if OEM supported)
  - Part status, e.g., replaced, original, unsupported
- Storage
  - Useful lifetime remaining
  - Total capacity
- Screen part status, e.g., replaced, original, unsupported
- Foldables (number of times the device has been folded and total fold lifespan)
- Moisture intrusion
- UICC information, i.e., an indication of whether there is an eSIM or removable SIM, and the microchip ID for the SIM slot
- Camera count and location, e.g., 3 cameras on front and 2 on back
- Lock detection for select device locks
- And the list keeps growing! Stay up to date here.
Run your own tests - Trade-in mode enables you to run your own diagnostic commands or applications by entering the evaluation flow using tradeinmode evaluate. The device will automatically factory reset on reboot after evaluation mode to ensure nothing remains on the device.
Ensure the device is running an approved build - Further, when connected to the internet, with a single command tradeinmode getstatus --challenge CHALLENGE you can test the device's operating system (OS) authenticity, to be sure the device is running a trusted build. If the build passes the test, you can be sure the diagnostics results are coming from a trusted OS.
There's more - You can use commands to factory reset, power off, reboot, reboot directly into trade-in mode, check if trade-in mode is active, revert to the previous mode, and pause tests until system services are ready.
Want to try it? Learn more about the developer steps and commands.
26 Jan 2026 5:00pm GMT
21 Jan 2026
Android Developers Blog
Ready to review some changes but not others? Try using Play Console’s new Save for later feature

Posted by Georgia Doyle, Senior UX Writer and Content Designer, and Kanu Tibrewal, Software Engineer
We've launched a new Save for later feature on Google Play Console's Publishing overview to give you more control over when you send changes for review.
In the past, changes to your app were bundled together before being sent for review. This presented challenges if you needed to reprioritize changes, or if the changes were no longer relevant - for example, when updates to your test tracks were grouped with marketing changes that needed to be rescheduled. This lack of flexibility meant that if some changes were ready for review but not others, you could end up delaying urgent fixes, or publishing changes that you weren't quite ready to make.
Now, you have the ability to hold back the changes you're not ready to have reviewed.
How it works
In the 'Changes not yet sent for review' section of the Publishing overview page, select 'Save for later' on the groups of changes that you don't want to include in your next review. You can view and edit the list of saved changes, and return them to the Publishing overview if you change your mind. Once the review has started, your saved changes will be added back to 'Changes not yet sent for review'.
- If issues are isolated to an individual track, we'll show you an error beside that change, so you know what to save for later in order to proceed to review with your other changes.
- If you have issues that affect your whole app, for example, App content issues, Save for later will be unavailable and you will need to fix them before you can send any changes for review.
Greater flexibility in your workflows
Our goal for Save for later is to give you greater flexibility over your release schedule. With this feature you can manage what changes you send for review, and address issues affecting individual tracks without holding up ready-to-release changes, so you can iterate faster and minimize the impact of rejections on your release timeline.
So, what's next?
We're excited to see how Save for later helps you to streamline your release process and bring your app innovations to users even faster.
21 Jan 2026 5:00pm GMT
15 Jan 2026
Android Developers Blog
LLM flexibility, Agent Mode improvements, and new agentic experiences in Android Studio Otter 3 Feature Drop
Posted by Sandhya Mohan, Senior Product Manager and Trevor Johns, Developer Relations Engineer
- Bring Your Own Model: You can now use any LLM to power the AI functionality in Android Studio.
- Agent Mode Enhancements: You can now more easily have Agent Mode interact with your app on devices, review and accept suggested changes, and have multiple conversation threads.
- Run user journey tests using natural language: with Journeys in Android Studio.
- Enable Agent Mode to connect to more tools: including the ability to connect to remote servers via MCP.
- Build, iterate and test your UI: with UI agentic experiences in Android Studio.
- Build deep links using natural language: with the new app links assistant.
- Debug R8 optimized code: with Automatic Logcat retracing.
- Simplify Android library modules: with the Fused library plugin.
Here's a deep dive into what's new:
Bring Your Own Model (BYOM)
Every developer has a unique workflow when using AI, and different companies have different policies on AI model usage. With this release, Android Studio now brings you more flexibility by allowing you to choose the LLM that powers the AI functionality in Android Studio, giving you more control over performance, privacy, and cost.
Use a remote model
You can now integrate remote models-such as OpenAI's GPT, Anthropic's Claude, or a similar model-directly into Android Studio. This allows you to leverage your preferred model provider without changing your IDE. To get started, configure a remote model provider in Settings by adding your API endpoint and key. Once configured, you can select your custom model directly from the picker in the AI chat window.
Use a local model
Use your Gemini API key
While Android Studio includes access to a default Gemini model with generous quotas at no cost, some developers need more. By adding your Gemini API key, Android Studio can directly access all the latest Gemini models available from Google.
For example, this allows you to use the most recent Gemini 3 Pro and Gemini 3 Flash models (among others) with expanded context windows and quota. This is especially useful for developers who are using Agent Mode for extended coding sessions, where this additional processing power can provide higher fidelity responses.
Agent Mode enhancements
Run your app and interact with it on devices
Agent Mode can now deploy an application to the connected device, inspect what is currently shown on the screen, take screenshots, check Logcat for errors, and interact with the running application. This lets the agent help you with changes or fixes that involve re-running the application, checking for errors, and verifying that a particular update was made successfully (for example, by taking and reviewing screenshots).
Find and review changes using the changes drawer
Manage multiple conversation threads
Journeys for Android Studio
Support for remote MCP servers
Supercharge your UI development with Agent Mode
Create new UI from a design mock
Match your UI with a target image
Iterate on your UI with natural language
Find and fix UI quality issues
Beyond iterating on your UI, Gemini also helps streamline your development environment.
To accelerate your setup, you can:
- Generate Compose Previews: This feature is now enhanced by Agent Mode to provide more accurate results. When working in a file that has Composable functions but no @Preview annotations, you can right-click on the Composable and select Gemini > Generate [Composable name] Preview. The agent will now better analyze your Composable to generate the necessary boilerplate with correct parameters, to help verify that a successfully rendered preview is added.
- Fix Preview rendering errors: When a Compose Preview fails to render, Gemini can now analyze the error message and your code to find the root cause and apply a fix.
App Links Assistant
The App Links Assistant now integrates with Agent Mode to automate the creation of deep link logic, simplifying one of the most time-consuming steps of implementation. Instead of manually writing code to parse incoming intents and navigate users to the correct screen, you can now let Gemini generate the necessary code and tests. Gemini presents a diff view of the suggested code changes for your review and approval, streamlining the process of handling deep links and ensuring users are seamlessly directed to the right content in your app.
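For reference, the manual version of that logic is typically a few lines of intent parsing in the target activity; the sketch below is illustrative only, and the URI structure and navigateToProduct() helper are hypothetical:

override fun onCreate(savedInstanceState: Bundle?) {
    super.onCreate(savedInstanceState)
    // Handle an incoming deep link such as https://example.com/products/42
    intent?.data?.let { uri ->
        val productId = uri.lastPathSegment
        if (productId != null) {
            navigateToProduct(productId)
        }
    }
}

With the App Links Assistant and Agent Mode, Gemini can generate this kind of parsing and navigation code for you and propose it as a reviewable diff.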
To get started, open the App Links Assistant through the tools menu, then choose Create Applink. In the second step, Add logic to handle the intent, select Generate code with AI assistance. If a sample URL is available, enter it, and then click Insert Code.
Automatic Logcat Retracing
Debugging R8-optimized code just became seamless. Previously, when R8 was enabled (minifyEnabled = true in your build.gradle.kts file), it would obfuscate stack traces, changing class names, methods, and line numbers. To find the source of a crash, developers had to manually use the R8 retrace command line tool.
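For context, R8 is typically switched on per build type in the Kotlin DSL roughly like this (a minimal sketch; note that the Kotlin DSL property is isMinifyEnabled, and the module and ProGuard file names are the usual defaults rather than anything required by this feature):

// build.gradle.kts (app module)
android {
    buildTypes {
        release {
            // Enables R8 code shrinking and obfuscation for release builds
            isMinifyEnabled = true
            proguardFiles(
                getDefaultProguardFile("proguard-android-optimize.txt"),
                "proguard-rules.pro"
            )
        }
    }
}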
Starting with Android Studio Otter 3 Feature Drop with AGP versions 8.12 and above, this extra step is no longer necessary. Logcat now automatically detects and retraces R8-processed stack traces, so you can see the original, human-readable stack trace directly in the IDE. This provides a much-improved debugging experience with no extra work required.
Fused Library Plugin: Publish multiple Android libraries as one
Get started
Ready to dive in and accelerate your development? Download Android Studio Otter 3 Feature Drop and start exploring these powerful new features today!
As always, your feedback is crucial to us. Check known issues, report bugs, and be part of our vibrant community on LinkedIn, Medium, YouTube, or X. Let's build the future of Android apps together!
15 Jan 2026 5:18pm GMT
08 Jan 2026
Android Developers Blog
Ultrahuman launches features 15% faster with Gemini in Android Studio
Posted by Amrit Sanjeev, Developer Relations Engineer and Trevor Johns, Developer Relations Engineer
Ultrahuman is a consumer health-tech startup that provides daily well-being insights to users based on biometric data from the company's wearables, like the RING Air and the M1 Live Continuous Glucose Monitor (CGM). The Ultrahuman team leaned on Gemini in Android Studio's contextually aware tools to streamline and accelerate their development process.
Ultrahuman's app is maintained by a lean team of just eight developers. They prioritize building features that their users love, but they also carry a backlog of bugs and performance improvements that take a lot of time. The team needed to scale up their output of feature improvements and tackle that performance work without increasing headcount. One of their biggest opportunities was reducing the time and effort spent on the backlog: every hour saved on maintenance could be reinvested into building features for their users.
Solving technical hurdles and boosting performance with Gemini
The team integrated Gemini in Android Studio to see if its AI-enhanced tools could improve their workflow by handling many everyday Android tasks. First, the team turned to the Gemini chat inside Android Studio, with the goal of prototyping a GATT Server implementation for their application's Bluetooth Low Energy (BLE) connectivity.
As Ultrahuman's Android Development Lead, Arka, noted, "Gemini helped us reach a working prototype in under an hour-something that would have otherwise taken us several hours." The BLE implementation provided by Gemini worked perfectly for syncing large amounts of health sensor data while the app ran in the background, improving the data syncing process and saving battery life on both the user's Android phone and Ultrahuman's paired wearable device.
Beyond this core challenge, Gemini also proved invaluable for finding algorithmic optimizations in a custom open-source library, pointing to helpful documentation, assisting with code commenting, and analyzing crash logs. The Ultrahuman team also used code completion to help them breeze through writing otherwise repetitive code, Jetpack Compose Preview Generation to enable rapid iteration during UI design, and Agent Mode for managing complex, project-wide changes, such as rendering a new stacked bar graph that mapped to backend data models and UI models.
Transforming productivity and accelerating feature delivery
These improvements have saved the team dozens of hours each week. This reclaimed time is being used to deliver new features to Ultrahuman's beta users 10-15% faster. For example, the team built a new in-app AI assistant for users, powered by Gemini 2.5 Flash. The UI design, architecture, and parts of the user experience for this new feature were initially suggested by Gemini in Android Studio-showcasing a full-circle AI-assisted development process.
Accelerate your Android development with Gemini
Gemini's expert Android advice, closely integrated throughout Android Studio, helps Android developers spend less time digging through documentation and writing boilerplate code-freeing up more time to innovate.
Learn how Gemini in Android Studio can help your team resolve complex issues, streamline workflows, and ship new features faster.
08 Jan 2026 10:00pm GMT
19 Dec 2025
Android Developers Blog
Media3 1.9.0 - What’s new
Posted by Kristina Simakova, Engineering Manager
Media3 1.9.0 - What's new?
- media3-inspector - Extract metadata and frames outside of playback
- media3-ui-compose-material3 - Build a basic Material3 Compose Media UI in just a few steps
- media3-cast - Automatically handle transitions between Cast and local playbacks
- media3-decoder-av1 - Consistent AV1 playback with the rewritten extension decoder based on the dav1d library
We also added caching and memory management improvements to PreloadManager, and provided several new ExoPlayer, Transformer and MediaSession simplifications.
This release also gives you the first experimental access to CompositionPlayer to preview media edits.
Read on to find out more, and as always please check out the full release notes for a comprehensive overview of changes in this release.
Extract metadata and frames outside of playback
There are many cases where you want to inspect media without starting playback. For example, you might want to detect which formats it contains, check its duration, or retrieve thumbnails. The new media3-inspector module combines all utilities to inspect media without playback in one place:
- MetadataRetriever to read duration, format and static metadata from a MediaItem.
- FrameExtractor to get frames or thumbnails from an item.
- MediaExtractorCompat as a direct replacement for the Android platform MediaExtractor class, to get detailed information about samples in the file.
suspend fun extractThumbnail(mediaItem: MediaItem) {
    FrameExtractor.Builder(context, mediaItem).build().use { frameExtractor ->
        val thumbnail = frameExtractor.getThumbnail().await()
    }
}
Build a basic Material3 Compose Media UI in just a few steps
In previous releases we started providing connector code between Compose UI elements and your Player instance. With Media3 1.9.0, we added a new module media3-ui-compose-material3 with fully-styled Material3 buttons and content elements. They allow you to build a media UI in just a few steps, while providing all the flexibility to customize style. If you prefer to build your own UI style, you can use the building blocks that take care of all the update and connection logic, so you only need to concentrate on designing the UI element. Please check out our extended guide pages for the Compose UI modules. We are also still working on even more Compose components, like a prebuilt seek bar, a complete out-of-the-box replacement for PlayerView, as well as subtitle and ad integration.
@Composable
fun SimplePlayerUI(player: Player, modifier: Modifier = Modifier) {
    Column(modifier) {
        ContentFrame(player) // Video surface and shutter logic
        Row(Modifier.align(Alignment.CenterHorizontally)) {
            SeekBackButton(player) // Simple controls
            PlayPauseButton(player)
            SeekForwardButton(player)
        }
    }
}
Simple Compose player UI with out-of-the-box elements
Automatically handle transitions between Cast and local playbacks
When you set up your MediaSession, simply build a CastPlayer around your ExoPlayer and add a MediaRouteButton to your UI and you're done!
// MediaSession setup with CastPlayer
val exoPlayer = ExoPlayer.Builder(context).build()
val castPlayer = CastPlayer.Builder(context).setLocalPlayer(exoPlayer).build()
val session = MediaSession.Builder(context, castPlayer).build()

// MediaRouteButton in UI
@Composable
fun UIWithMediaRouteButton() {
    MediaRouteButton()
}
New CastPlayer integration in Media3 session demo app
Consistent AV1 playback with the rewritten extension based on dav1d
The 1.9.0 release contains a completely rewritten AV1 extension module based on the popular dav1d library. As with all extension decoder modules, please note that it requires building from source to bundle the relevant native code correctly. Bundling a decoder provides consistency and format support across all devices, but because it runs the decoding in your process, it's best suited for content you can trust.
Integrate caching and memory management into PreloadManager
- Caching support - When defining how far to preload, you can now choose PreloadStatus.specifiedRangeCached(0, 5000) as a target state for preloaded items. This will add the specified range to your cache on disk instead of loading the data to memory. With this, you can provide a much larger range of items for preloading as the ones further away from the current item no longer need to occupy memory. Note that this requires setting a Cache in DefaultPreloadManager.Builder.
- Automatic memory management - We also updated our LoadControl interface to better handle the preload case so you are now able to set an explicit upper memory limit for all preloaded items in memory. It's 144 MB by default, and you can configure the limit in DefaultLoadControl.Builder. The DefaultPreloadManager will automatically stop preloading once the limit is reached, and automatically releases memory of lower priority items if required.
Rely on new simplified default behaviors in ExoPlayer
As always, we added lots of incremental improvements to ExoPlayer as well. To name just a few:
- Mute and unmute - We already had a setVolume method, but have now added the convenience mute and unmute methods to easily restore the previous volume without keeping track of it yourself.
- Stuck player detection - In some rare cases the player can get stuck in a buffering or playing state without making any progress, for example, due to codec issues or misconfigurations. Your users will be annoyed, but you never see these issues in your analytics! To make this more obvious, the player now reports a StuckPlayerException when it detects a stuck state.
- Wakelock by default - Wake lock management was previously opt-in, resulting in hard-to-find edge cases where playback progress could be delayed significantly when running in the background. Now this feature is opt-out, so you don't have to worry about it and can also remove all manual wake lock handling around playback.
- Simplified setting for CC button logic - Changing TrackSelectionParameters to say "turn subtitles on/off" was surprisingly hard to get right, so we added a simple boolean selectTextByDefault option for this use case.
Simplify your media button preferences in MediaSession
Until now, defining your preferences for which buttons should show up in the media notification drawer on Android Auto or Wear OS required defining custom commands and buttons, even if you simply wanted to trigger a standard player method. Media3 1.9.0 has new functionality to make this a lot simpler - you can now define your media button preferences with a standard player command, requiring no custom command handling at all.
session.setMediaButtonPreferences(listOf(
CommandButton.Builder(CommandButton.ICON_FAST_FORWARD) // choose an icon
.setDisplayName(R.string.skip_forward)
.setPlayerCommand(Player.COMMAND_SEEK_FORWARD) // choose an action
.build()
))
Media button preferences with fast forward button
CompositionPlayer for real-time preview
The 1.9.0 release introduces CompositionPlayer under a new @ExperimentalApi annotation. The annotation indicates that it is available for experimentation, but is still under development. CompositionPlayer is a new component in the Media3 editing APIs designed for real-time preview of media edits. Built upon the familiar Media3 Player interface, CompositionPlayer allows users to see their changes in action before committing to the export process. It uses the same Composition object that you would pass to Transformer for exporting, streamlining the editing workflow by unifying the data model for preview and export.
We encourage you to start using CompositionPlayer and share your feedback, and keep an eye out for forthcoming posts and updates to the documentation for more details.
InAppMuxer as a default muxer in Transformer
New speed adjustment APIs
// The playback speed to apply (for example, a constant 2x speed)
val speed = 2f

val speedProvider = object : SpeedProvider {
    override fun getSpeed(presentationTimeUs: Long): Float {
        return speed
    }

    // C.TIME_UNSET indicates that the speed never changes again
    override fun getNextSpeedChangeTimeUs(timeUs: Long): Long {
        return C.TIME_UNSET
    }
}

val speedEffectItem = EditedMediaItem.Builder(mediaItem)
    .setSpeed(speedProvider)
    .build()
This new approach replaces the previous method of using Effects#createExperimentalSpeedChangingEffects(), which we've deprecated and will remove in a future release.
Introducing track types for EditedMediaItemSequence
Declaring which track types a sequence contains is done via a new EditedMediaItemSequence.Builder constructor that accepts a set of track types (e.g., C.TRACK_TYPE_AUDIO, C.TRACK_TYPE_VIDEO).
To simplify creation, we've added new static convenience methods:
- EditedMediaItemSequence.withAudioFrom(List<EditedMediaItem>)
- EditedMediaItemSequence.withVideoFrom(List<EditedMediaItem>)
- EditedMediaItemSequence.withAudioAndVideoFrom(List<EditedMediaItem>)
We encourage you to migrate to the new constructor or the convenience methods for clearer and more reliable sequence definitions.
Example of creating a video-only sequence:
val videoOnlySequence =
    EditedMediaItemSequence.Builder(setOf(C.TRACK_TYPE_VIDEO))
        .addItem(editedMediaItem)
        .build()
---
Please get in touch via the Media3 issue Tracker if you run into any bugs, or if you have questions or feature requests. We look forward to hearing from you!
19 Dec 2025 10:00pm GMT
Goodbye Mobile Only, Hello Adaptive: Three essential updates from 2025 for building adaptive apps
Posted by Fahd Imtiaz - Product Manager, Android Developer
Goodbye Mobile Only, Hello Adaptive: Three essential updates from 2025 for building adaptive apps
In 2025 the Android ecosystem has grown far beyond the phone. Today, developers have the opportunity to reach over 500 million active devices, including foldables, tablets, XR, Chromebooks, and compatible cars.
These aren't just additional screens; they represent a higher-value audience. We've seen that users who own both a phone and a tablet spend 9x more on apps and in-app purchases than those with just a phone. For foldable users, that average spend jumps to roughly 14x more*.
This engagement signals a necessary shift in development: goodbye mobile apps, hello adaptive apps.
To help you build for that future, we spent this year releasing tools that make adaptive the default way to build. Here are three key updates from 2025 designed to help you build these experiences.
Standardizing adaptive behavior with Android 16
To support this shift, Android 16 introduced significant changes to how apps can restrict orientation and resizability. On displays of at least 600dp, manifest and runtime restrictions are ignored, meaning apps can no longer lock themselves to a specific orientation or size. Instead, they fill the entire display window, ensuring your UI scales seamlessly across portrait and landscape modes.
Because this means your app's window configuration will change more frequently, it's important to verify that you are preserving UI state during configuration changes. While Android 16 offers a temporary opt-out to help you manage this transition, Android 17 (SDK 37) will make this behavior mandatory. To ensure your app behaves as expected under these new conditions, use the resizable emulator in Android Studio to test your adaptive layouts today.
Supporting screens beyond the tablet with Jetpack WindowManager 1.5.0
As devices evolve, our existing definitions of "large" need to evolve with them. In October, we released Jetpack WindowManager 1.5.0 to better support the growing number of very large screens and desktop environments.
On these surfaces, the standard "Expanded" layout, which usually fits two panes comfortably, often isn't enough. On a 27-inch monitor, two panes can look stretched and sparse, leaving valuable screen real estate unused. To solve this, WindowManager 1.5.0 introduced two new width window size classes: Large (1200dp to 1600dp) and Extra-large (1600dp+).
These new breakpoints signal when to switch to high-density interfaces. Instead of stretching a typical list-detail view, you can take advantage of the width to show three or even four panes simultaneously. Imagine an email client that comfortably displays your folders, the inbox list, the open message, and a calendar sidebar, all in a single view. Support for these window size classes was added to Compose Material 3 adaptive in the 1.2 release.
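In code, checking the new breakpoints follows the same pattern as the existing ones. The sketch below assumes the new bounds are exposed alongside the existing WIDTH_DP_* constants on WindowSizeClass, and the show*Layout() functions are hypothetical placeholders for your own layouts:

val sizeClass = currentWindowAdaptiveInfo().windowSizeClass
when {
    // 1600dp and up: e.g., four panes on a desktop-sized window
    sizeClass.isWidthAtLeastBreakpoint(WIDTH_DP_EXTRA_LARGE_LOWER_BOUND) -> showFourPaneLayout()
    // 1200dp to 1600dp: e.g., three panes
    sizeClass.isWidthAtLeastBreakpoint(WIDTH_DP_LARGE_LOWER_BOUND) -> showThreePaneLayout()
    sizeClass.isWidthAtLeastBreakpoint(WIDTH_DP_EXPANDED_LOWER_BOUND) -> showTwoPaneLayout()
    else -> showSinglePaneLayout()
}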
Rethinking user journeys with Jetpack Navigation 3
Building a UI that morphs from a single phone screen to a multi-pane tablet layout used to require complex state management. This often meant forcing a navigation graph designed for single destinations to handle simultaneous views. First announced at I/O 2025, Jetpack Navigation 3 is now stable, introducing a new approach to handling user journeys in adaptive apps.
Built for Compose, Nav3 moves away from the monolithic graph structure. Instead, it provides decoupled building blocks that give you full control over your back stack and state. This solves the single source of truth challenge common in split-pane layouts. Because Nav3 uses the Scenes API, you can display multiple panes simultaneously without managing conflicting back stacks, simplifying the transition between compact and expanded views.
A foundation for an adaptive future
This year delivered the tools you need, from optimizing for expansive layouts to the granular controls of WindowManager and Navigation 3. And, Android 16 began the shift toward truly flexible UI, with updates coming next year to deliver excellent adaptive experiences across all form factors. To learn more about adaptive development principles and get started, head over to d.android.com/adaptive-apps.
The tools are ready, and the users are waiting. We can't wait to see what you build!
*Source: internal Google data
19 Dec 2025 5:00pm GMT
18 Dec 2025
Android Developers Blog
Bringing Androidify to Wear OS with Watch Face Push

Posted by Garan Jenkin - Developer Relations Engineer
A few months ago we relaunched Androidify as an app for generating personalized Android bots. Androidify transforms your selfie photo into a playful Android bot using Gemini and Imagen.
However, given that Android spans multiple form factors, including our most recent addition, XR, we thought, how could we bring the fun of Androidify to Wear OS?
An Androidify watch face
As Androidify bots are highly-personalized, the natural place to showcase them is the watch face. Not only is it the most frequently visible surface but also the most personal surface, allowing you to represent who you are.

Personalized Androidify watch face, generated from selfie image
Androidify now has the ability to generate a watch face dynamically within the phone app and then send it to your watch, where it will automatically be set as your watch face. All of this happens within seconds!
High-level design
End-to-end flow for watch face creation and installation
In order to achieve the end-to-end experience, a number of technologies need to be combined together, as shown in this high-level design diagram.
First of all, the user's avatar is combined with a pre-existing Watch Face Format template, which is then packaged into an APK. This is validated - for reasons which will be explained! - and sent to the watch.
On being received by the watch, the new Watch Face Push API - part of Wear OS 6 - is used to install and activate the watch face.
Let's explore the details:
Creating the watch face templates
The watch face is created from a template, itself designed in Watch Face Designer. This is our new Figma plugin that allows you to create Watch Face Format watch faces directly within Figma.
An Androidify watch face template in Watch Face Designer
The plugin allows the watch face to be exported in a range of different ways, including as Watch Face Format (WFF) resources. These can then be easily incorporated as assets within the Androidify app, for dynamically building the finalized watch face.
Packaging and validation
Once the template and avatar have been combined, the Portable Asset Compiler Kit (Pack) is used to assemble an APK.
In Androidify, Pack is used as a native library on the phone. For more details on how Androidify interfaces with the Pack library, see the GitHub repository.
As a final step before transmission, the APK is checked by the Watch Face Push validator.
This validator checks that the APK is suitable for installation. This includes checking the contents of the APK to ensure it is a valid watch face, as well as some performance checks. If it is valid, then the validator produces a token.
This token is required by the watch for installation.
Sending the watch face
The Androidify app on Wear OS uses WearableListenerService to listen for events on the Wearable Data Layer.
The phone app transfers the watch face by using a combination of MessageClient to set up the process, then ChannelClient to stream the APK.
Installing the watch face on the watch
Once the watch face is received on the Wear OS device, the Androidify app uses the new Watch Face Push API to install the watch face:
val wfpManager =
    WatchFacePushManagerFactory.createWatchFacePushManager(context)
val response = wfpManager.listWatchFaces()

try {
    if (response.remainingSlotCount > 0) {
        wfpManager.addWatchFace(apkFd, token)
    } else {
        val slotId = response.installedWatchFaceDetails.first().slotId
        wfpManager.updateWatchFace(slotId, apkFd, token)
    }
} catch (a: WatchFacePushManager.AddWatchFaceException) {
    return WatchFaceInstallError.WATCH_FACE_INSTALL_ERROR
} catch (u: WatchFacePushManager.UpdateWatchFaceException) {
    return WatchFaceInstallError.WATCH_FACE_INSTALL_ERROR
}
Androidify uses either the addWatchFace or updateWatchFace method, depending on the scenario: Watch Face Push defines a concept of "slots" - how many watch faces a given app can have installed at any time. For Wear OS 6, this value is in fact 1.
Androidify's approach is to install the watch face if there is a free slot, and if not, any existing watch face is swapped out for the new one.
Setting the active watch face
Installing the watch face programmatically is a great step, but Androidify seeks to ensure the watch face is also the active watch face.
Watch Face Push introduces a new runtime permission which must be granted in order for apps to be able to achieve this:
com.google.wear.permission.SET_PUSHED_WATCH_FACE_AS_ACTIVE
Once this permission has been acquired, the wfpManager.setWatchFaceAsActive() method can be called, to set an installed watch face to being the active watch face.
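For example, the permission can be requested with the standard Activity Result API before attempting to activate the watch face; this is a minimal sketch (the launcher wiring is ordinary Android code, not part of the Watch Face Push API, and the follow-up call is the one described above):

// Register a launcher for the runtime permission (in an Activity or Fragment)
private val setActivePermissionLauncher =
    registerForActivityResult(ActivityResultContracts.RequestPermission()) { granted ->
        if (granted) {
            // Safe to call wfpManager.setWatchFaceAsActive(...) as described above
        }
    }

private fun requestSetActivePermission() {
    setActivePermissionLauncher.launch(
        "com.google.wear.permission.SET_PUSHED_WATCH_FACE_AS_ACTIVE"
    )
}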
However, there are a number of considerations that Androidify has to navigate:
- setWatchFaceAsActive can only be used once.
- SET_PUSHED_WATCH_FACE_AS_ACTIVE cannot be re-requested after being denied by the user.
- Androidify might already be in control of the active watch face.
For more details see how Androidify implements the set active logic.
Get started with Watch Face Push for Wear OS
Watch Face Push is a versatile API, equally suited to enhancing Androidify as it is to building fully-featured watch face marketplaces.
Perhaps you have an existing phone app and are looking for opportunities to further engage and delight your users?
Or perhaps you're an existing watch face developer looking to create your own community and gallery through releasing a marketplace app?
Take a look at these resources:
And also check out the accompanying video for a greater-depth look at how we brought Androidify to Wear OS!
We're looking forward to what you'll create with Watch Face Push!
18 Dec 2025 5:00pm GMT
17 Dec 2025
Android Developers Blog
Brighten Your Real-Time Camera Feeds with Low Light Boost
Posted by Donovan McMurray, Developer Relations Engineer
Today, we're diving into Low Light Boost (LLB), a powerful feature designed to brighten real-time camera streams. Unlike Night Mode, which requires a hold-still capture duration, Low Light Boost works instantaneously on your live preview and video recordings. LLB automatically adjusts how much brightening is needed based on available light, so it's optimized for every environment.
With a recent update, LLB allows Instagram users to line up the perfect shot, and then their existing Night Mode implementation results in the same high quality low-light photos their users have been enjoying for over a year.
Why Real-time Brightness Matters
While Night Mode aims to improve final image quality, Low Light Boost is intended for usability and interactivity in dark environments. Another important factor to consider is that - even though they work together very well - you can use LLB and Night Mode independently; as you'll see in some of these use cases, LLB has value on its own even when Night Mode photos aren't needed. Here is how LLB improves the user experience:
- Better Framing & Capture: In dimly lit scenes, a standard camera preview can be pitch black. LLB brightens the viewfinder, allowing users to actually see what they are framing before they hit the shutter button. For this experience, you can use Night Mode for the best quality low-light photo result, or you can let LLB give the user a "what you see is what you get" photo result.
- Reliable Scanning: QR codes are ubiquitous, but scanning them in a dark restaurant or parking garage is often frustrating. With a significantly brighter camera feed, scanning algorithms can reliably detect and decode QR codes even in very dim environments.
- Enhanced Interactions: For apps involving live video interactions (like AI assistants or video calls), LLB increases the amount of perceivable information, ensuring computer vision models have enough data to work with.
The Difference in Instagram
It's easy to imagine the difference this makes in the user experience. If users aren't able to see what they're capturing, then there's a higher chance they'll abandon the capture.
Choosing Your Implementation
There are two ways to implement Low Light Boost to provide the best experience across the widest range of devices:
- Low Light Boost AE Mode: This is a hardware-layer auto-exposure mode. It offers the highest quality and performance because it fine-tunes the Image Signal Processor (ISP) pipeline directly. Always check for this first.
- Google Low Light Boost: If the device doesn't support the AE mode, you can fall back to this software-based solution provided by Google Play services. It applies post-processing to the camera stream to brighten it. As an all-software solution, it is available on more devices, so this implementation helps you reach more devices with LLB.
Low Light Boost AE Mode (Hardware)
Mechanism:
This mode is supported on devices running Android 15 and newer and requires the OEM to have implemented the support in HAL (currently available on Pixel 10 devices). It integrates directly with the camera's Image Signal Processor (ISP). If you set CaptureRequest.CONTROL_AE_MODE to CameraMetadata.CONTROL_AE_MODE_ON_LOW_LIGHT_BOOST_BRIGHTNESS_PRIORITY, the camera system takes control.
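For Camera2-based apps, setting that AE mode looks roughly like the sketch below; the cameraDevice, previewSurface, captureSession, and cameraHandler objects are assumed to already exist from your normal camera setup:

// Build a preview request with the Low Light Boost AE mode (Android 15+, supported devices)
val requestBuilder = cameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW)
requestBuilder.addTarget(previewSurface)
requestBuilder.set(
    CaptureRequest.CONTROL_AE_MODE,
    CameraMetadata.CONTROL_AE_MODE_ON_LOW_LIGHT_BOOST_BRIGHTNESS_PRIORITY
)
captureSession.setRepeatingRequest(requestBuilder.build(), null, cameraHandler)

The CameraX equivalent is shown in Step 1 below.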
Behavior:
The HAL/ISP analyzes the scene and adjusts sensor and processing parameters, often including increasing exposure time, to brighten the image. This can yield frames with a significantly improved signal-to-noise ratio (SNR) because the extended exposure time, rather than an increase in digital sensor gain (ISO), allows the sensor to capture more light information.
Advantage:
Potentially better image quality and power efficiency as it leverages dedicated hardware pathways.
Trade off:
May result in a lower frame rate in very dark conditions as the sensor needs more time to capture light. The frame rate can drop to as low as 10 FPS in very low light conditions.
Google Low Light Boost (Software via Google Play Services)
Mechanism:
This solution, distributed as an optional module via Google Play services, applies post-processing to the camera stream. It uses a sophisticated realtime image enhancement technology called HDRNet.
Google HDRNet:
This deep learning model analyzes the image at a lower resolution to predict a compact set of parameters (a bilateral grid). This grid then guides the efficient, spatially-varying enhancement of the full-resolution image on the GPU. The model is trained to brighten and improve image quality in low-light conditions, with a focus on face visibility.
Process Orchestration:
The HDRNet model and its accompanying logic are orchestrated by the Low Light Boost processor. This includes:
- Scene Analysis: A custom calculator that estimates the true scene brightness using camera metadata (sensor sensitivity, exposure time, etc.) and image content. This analysis determines the boost level.
- HDRNet Processing: Applies the HDRNet model to brighten the frame. The model used is tuned for low light scenes and optimized for realtime performance.
- Blending: The original and HDRNet-processed frames are blended. The amount of blending applied is dynamically controlled by the scene brightness calculator, ensuring a smooth transition between boosted and unboosted states.
Advantage:
Works on a broader range of devices (currently supports Samsung S22 Ultra, S23 Ultra, S24 Ultra, S25 Ultra, and Pixel 6 through Pixel 9) without requiring specific HAL support. Maintains the camera's frame rate as it's a post-processing effect.
Trade-off:
As a post-processing method, the quality is limited by the information present in the frames delivered by the sensor. It cannot recover details lost due to extreme darkness at the sensor level.
By offering both hardware and software pathways, Low Light Boost provides a scalable solution to enhance low-light camera performance across the Android ecosystem. Developers should prioritize the AE mode where available and use the Google Low Light Boost as a robust fallback.
Implementing Low Light Boost in Your App
Now let's look at how to implement both LLB offerings. You can implement the following whether you use CameraX or Camera2 in your app. For the best results, we recommend implementing both Step 1 and Step 2.
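At a high level the two steps combine as a simple fallback: try the hardware AE mode first, and only set up the Google Play services path when it isn't available. A condensed sketch of that decision flow, using the CameraX calls from the steps below; enableBestLowLightBoost() and setUpGoogleLowLightBoost() are hypothetical helper names, not part of any API:
// Condensed decision flow: prefer the hardware AE mode, fall back to Google LLB.
// setUpGoogleLowLightBoost() is a placeholder for the Step 2 session setup below.
suspend fun enableBestLowLightBoost(camera: Camera) {
    if (camera.cameraInfo.isLowLightBoostSupported) {
        // Step 1: hardware AE mode, handled by the ISP.
        camera.cameraControl.enableLowLightBoostAsync(true).await()
    } else {
        // Step 2: software boost via the Google Play services module.
        setUpGoogleLowLightBoost()
    }
}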
Step 1: Low Light Boost AE Mode
Available on select devices running Android 15 and higher, LLB AE Mode functions as a specific Auto-Exposure (AE) mode.
1. Check for Availability
First, check if the camera device supports LLB AE Mode.
val cameraInfo = cameraProvider.getCameraInfo(cameraSelector)
val isLlbSupported = cameraInfo.isLowLightBoostSupported
2. Enable the Mode
If supported, you can enable LLB AE Mode using CameraX's CameraControl object.
// After setting up your camera, use the CameraControl object to enable LLB AE Mode.
camera = cameraProvider.bindToLifecycle(...)
if (isLlbSupported) {
    try {
        // The .await() extension suspends the coroutine until the
        // ListenableFuture completes. If the operation fails, it throws
        // an exception which we catch below.
        camera?.cameraControl?.enableLowLightBoostAsync(true)?.await()
    } catch (e: IllegalStateException) {
        Log.e(TAG, "Failed to enable low light boost: not available on this device " +
            "or with the current camera configuration", e)
    } catch (e: CameraControl.OperationCanceledException) {
        Log.e(TAG, "Failed to enable low light boost: camera is closed or value has changed", e)
    }
}
3. Monitor the State
Just because you requested the mode doesn't mean it's currently "boosting." The system only activates the boost when the scene is actually dark. You can set up an Observer to update your UI (like showing a moon icon) or convert to a Flow using the extension function asFlow().
if (isLlbSupported) {
    camera?.cameraInfo?.lowLightBoostState?.asFlow()?.collectLatest { state ->
        // Update UI accordingly
        updateMoonIcon(state == LowLightBoostState.ACTIVE)
    }
}
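If you use Camera2 directly rather than CameraX, the equivalent flow is to check the available AE modes, request the low light boost AE mode, and read its state from capture results. A minimal sketch, assuming a cameraManager, cameraId, and previewRequestBuilder from your existing session setup:
// Minimal Camera2 sketch (API 35+); previewRequestBuilder and friends are assumed
// to come from your existing capture session setup.
val characteristics = cameraManager.getCameraCharacteristics(cameraId)
val supportsLlbAeMode = characteristics
    .get(CameraCharacteristics.CONTROL_AE_AVAILABLE_MODES)
    ?.contains(CameraMetadata.CONTROL_AE_MODE_ON_LOW_LIGHT_BOOST_BRIGHTNESS_PRIORITY) == true

if (supportsLlbAeMode) {
    // Hand exposure control to the ISP's low light boost pipeline.
    previewRequestBuilder.set(
        CaptureRequest.CONTROL_AE_MODE,
        CameraMetadata.CONTROL_AE_MODE_ON_LOW_LIGHT_BOOST_BRIGHTNESS_PRIORITY
    )
}

// In your CaptureCallback, CaptureResult.CONTROL_LOW_LIGHT_BOOST_STATE reports
// whether the boost is currently active for the scene.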
You can read the full guide on Low Light Boost AE Mode here.
Step 2: Google Low Light Boost
For devices that don't support the hardware AE mode, Google Low Light Boost acts as a powerful fallback. It uses a LowLightBoostSession to intercept and brighten the stream.
1. Add Dependencies
This feature is delivered via Google Play services.
implementation("com.google.android.gms:play-services-camera-low-light-boost:16.0.1-beta06") // Add coroutines-play-services to simplify Task APIs implementation("org.jetbrains.kotlinx:kotlinx-coroutines-play-services:1.10.2")
2. Initialize the Client
Before starting your camera, use the LowLightBoostClient to ensure the module is installed and the device is supported.
val llbClient = LowLightBoost.getClient(context)

// Check support and install if necessary
val isSupported = llbClient.isCameraSupported(cameraId).await()
val isInstalled = llbClient.isModuleInstalled().await()

if (isSupported && !isInstalled) {
    // Trigger installation
    llbClient.installModule(installCallback).await()
}
3. Create an LLB Session
Google LLB processes each frame, so you hand your display Surface to the LowLightBoostSession, and it gives you back a camera-facing Surface; the session brightens the frames it receives on that Surface and writes the result to your display Surface. For Camera2 apps, you can add the returned Surface with CaptureRequest.Builder.addTarget(). For CameraX, this processing pipeline aligns best with the CameraEffect class, where you can apply the effect with a SurfaceProcessor and provide it back to your Preview with a SurfaceProvider, as seen in this code.
// With a SurfaceOutput from SurfaceProcessor.onOutputSurface() and a
// SurfaceRequest from Preview.SurfaceProvider.onSurfaceRequested(),
// create an LLB session.
suspend fun createLlbSession(surfaceRequest: SurfaceRequest, outputSurfaceForLlb: Surface) {
    // 1. Create the LLB Session configuration
    val options = LowLightBoostOptions(
        outputSurfaceForLlb,
        cameraId,
        surfaceRequest.resolution.width,
        surfaceRequest.resolution.height,
        true // Start enabled
    )

    // 2. Create the session.
    val llbSession = llbClient.createSession(options, callback).await()

    // 3. Get the surface to use.
    val llbInputSurface = llbSession.getCameraSurface()

    // 4. Provide the surface to the CameraX Preview UseCase.
    surfaceRequest.provideSurface(llbInputSurface, executor, resultListener)

    // 5. Set the scene detector callback to monitor how much boost is being applied.
    val onSceneBrightnessChanged = object : SceneDetectorCallback {
        override fun onSceneBrightnessChanged(
            session: LowLightBoostSession,
            boostStrength: Float
        ) {
            // Monitor the boostStrength from 0 (no boosting) to 1 (maximum boosting)
        }
    }
    llbSession.setSceneDetectorCallback(onSceneBrightnessChanged, null)
}
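One way to hook this into CameraX is to wrap the processing in a CameraEffect whose SurfaceProcessor routes frames through the LLB session. The following is only a sketch, not the wiring from the official guide: LowLightBoostEffect and llbProcessor are illustrative names, the executor, coroutineScope, preview, cameraSelector, lifecycleOwner, and cameraProvider are assumed from your setup, and callback ordering and cleanup are simplified.
// Sketch: attach the LLB session to the Preview pipeline via a CameraEffect.
class LowLightBoostEffect(
    executor: Executor,
    processor: SurfaceProcessor
) : CameraEffect(CameraEffect.PREVIEW, executor, processor, { t ->
    Log.e(TAG, "LLB effect error", t)
})

val llbProcessor = object : SurfaceProcessor {
    private var pendingRequest: SurfaceRequest? = null
    private var outputSurface: Surface? = null

    override fun onInputSurface(request: SurfaceRequest) {
        // The camera's frames will flow into whatever surface answers this request.
        pendingRequest = request
        maybeCreateSession()
    }

    override fun onOutputSurface(surfaceOutput: SurfaceOutput) {
        // The surface that ultimately feeds the Preview; LLB writes brightened frames here.
        outputSurface = surfaceOutput.getSurface(executor) { surfaceOutput.close() }
        maybeCreateSession()
    }

    private fun maybeCreateSession() {
        val request = pendingRequest ?: return
        val output = outputSurface ?: return
        // createLlbSession() is the suspend function from the snippet above.
        coroutineScope.launch { createLlbSession(request, output) }
    }
}

// Attach the effect when binding the Preview use case.
val useCaseGroup = UseCaseGroup.Builder()
    .addUseCase(preview)
    .addEffect(LowLightBoostEffect(executor, llbProcessor))
    .build()
cameraProvider.bindToLifecycle(lifecycleOwner, cameraSelector, useCaseGroup)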
4. Pass in the Metadata
For the algorithm to work, it needs to analyze the camera's auto-exposure state. You must pass capture results back to the LLB session. In CameraX, this can be done by extending your Preview.Builder with Camera2Interop.Extender.setSessionCaptureCallback().
Camera2Interop.Extender(previewBuilder).setSessionCaptureCallback(
    object : CameraCaptureSession.CaptureCallback() {
        override fun onCaptureCompleted(
            session: CameraCaptureSession,
            request: CaptureRequest,
            result: TotalCaptureResult
        ) {
            super.onCaptureCompleted(session, request, result)
            llbSession?.processCaptureResult(result)
        }
    }
)
Detailed implementation steps for the client and session can be found in the Google Low Light Boost guide.
Next Steps
By implementing these two options, you ensure that your users can see clearly, scan reliably, and interact effectively, regardless of the lighting conditions.
To see these features in action within a complete, production-ready codebase, check out the Jetpack Camera App on GitHub. It implements both LLB AE Mode and Google LLB, giving you a reference for your own integration.
17 Dec 2025 5:00pm GMT