30 Mar 2026

feedTalkAndroid

Top Promo Code for Plinko

Plinko has quickly become a standout in the crypto gaming world thanks to its simplicity and fast-paced results. Originally introduced on The Price is Right, the game is purely chance-based: players drop a ball from the top of a pegged board, watching it bounce unpredictably into slots with different multipliers.

30 Mar 2026 8:00am GMT

Spider-Man crowned Sam Raimi’s ultimate masterpiece by fans—here’s why this “magical and unforgettable” film still reigns supreme

More than two decades after its release, Sam Raimi's first Spider-Man film continues to captivate audiences and critics…

30 Mar 2026 6:30am GMT

This Netflix series just dropped an explosive new season—and critics can’t stop raving

If you thought Netflix was done surprising viewers, think again: a new season of a critically acclaimed series…

30 Mar 2026 6:00am GMT

29 Mar 2026

feedTalkAndroid

Hypershell Spring Sale: Save Up to £250 on Exoskeletons

Hypershell Is Running Its Biggest Spring Sale Yet

29 Mar 2026 3:58pm GMT

Forgotten Google Photos Tool Instantly Transforms Shaky Videos—Here’s Why Android Users Have the Edge

Think Google Photos is just your cloud attic for digital memories? Think again. Hidden within its features is…

29 Mar 2026 3:30pm GMT

The new AI photo editing features on iPhone everyone’s talking about—will they change how you edit forever?

Photo editing on your iPhone got an AI-powered upgrade, at least if you use Google Photos in the US.…

29 Mar 2026 3:00pm GMT

Android 17 Is Racing Ahead With Major Changes Before Android 16 Ships

We'll be seeing a public Android 17 release before we know it.

29 Mar 2026 2:31pm GMT

OnePlus Expands Repair Network to Reach More Cities and Cut Wait Times

Repairs should be easy, and that's what OnePlus and Samsung want to make sure of.

29 Mar 2026 11:45am GMT

This haunting K-drama rated 9.3/10 is leaving Netflix soon—why everyone’s still obsessed

Looking for a new K-drama to binge before it disappears from Netflix? You're in luck. With a stellar…

29 Mar 2026 6:30am GMT

How to Stop Google Photos From Syncing to Your Phone—The Essential Step for More Storage Freedom

Let's face it - our phones are loaded with photos and videos, and cloud storage fills up fast.…

29 Mar 2026 6:00am GMT

Boba Story Lid Recipes – 2026

Look no further for all the latest Boba Story Lid Recipes. They are all right here!

29 Mar 2026 4:10am GMT

Dice Dreams Free Rolls – Updated Daily

Get the latest Dice Dreams free rolls links, updated daily! Complete with a guide on how to redeem the links.

29 Mar 2026 4:09am GMT

28 Mar 2026

feedTalkAndroid

T-Mobile Bumps Return Fees To $75 And Ends Apple Fee Loophole

You may want to rethink if buying that new phone is really worth it.

28 Mar 2026 6:21pm GMT

Unlock the hidden speed camera alerts on Google Maps: here’s how it’s done

Are you a loyal Google Maps user on Android, but fed up with not getting speed camera alerts…

28 Mar 2026 4:30pm GMT

The terrifying new scam that can hijack your WhatsApp without your password—here’s how to spot it before it’s too late

Think your WhatsApp account is safe as long as you never share your password? Think again. A new…

28 Mar 2026 4:30pm GMT

6G speeds break all records: Forget fiber, your next download could take just seconds

Blink and you'll miss it: just as you're getting used to your zippy new 5G smartphone, researchers are…

28 Mar 2026 4:00pm GMT

26 Mar 2026

feedAndroid Developers Blog

Redefining Location Privacy: New Tools and Improvements for Android 17

Posted by Robert Clifford, Developer Relations Engineer, and Manjeet Rulhania, Software Engineer

A pillar of the Android ecosystem is our shared commitment to user trust. As the mobile landscape has evolved, so has our approach to protecting sensitive information. In Android 17, we're introducing a suite of new location privacy features designed to give users more control and to provide developers with elegant solutions for data minimization and product safety. Our strategy focuses on introducing new tools that balance high-quality experiences with robust privacy protections, and on improving transparency to help users manage their data.

Introducing the location button: simplified access for one-time use

For many common tasks, like finding a nearby shop or tagging a social post, your app doesn't need permanent or background access to a user's precise location. With Android 17, we are introducing the location button, a new UI element designed to provide a well-lit path for responsible one-time precise location access. Industry partners have requested this feature as a way to bring a simpler, more private location flow to their users.



Users get better privacy protection

Moving the decision-making for location sharing to the point where a user takes action helps the user make a clearer choice about how much information they want to share and for how long. This empowers users to limit data sharing to only what apps need in that session. Once consent is provided, this session-based access eliminates repeated prompts for location-dependent features. This benefits developers by creating a smoother experience for their users and providing high confidence in user intent, since access is explicitly requested at the moment of action.

Full UI customization to match your app's aesthetic

The location button provides extensive customization options to ensure integration with your app's aesthetic while maintaining system-wide recognizability. You can modify several aspects of the button's visual style.

Additionally, you can select the appropriate text label from a predefined list of options. To ensure security and trust, the location icon itself remains mandatory and non-customizable, while the font size is system-managed to respect user accessibility settings.




Simplified Integration with Jetpack and automatic backwards compatibility

The location button will be provided as a Jetpack library, ensuring easy integration into your existing app layouts similar to any other Jetpack view implementation, and simplifying how you request permission to access precise location. Additionally, when you implement location button with the Jetpack library it will automatically handle backwards compatibility by defaulting to the existing location prompt when a user taps it on a device running Android 16 or below.
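
As a rough sketch of what Jetpack integration could eventually look like, the Compose snippet below is purely illustrative: `LocationButton` and `onLocationGranted` are hypothetical names, since the library's API surface has not been published.

```kotlin
// Hypothetical sketch: the Jetpack location button API is not yet published,
// so every name below (LocationButton, onLocationGranted) is an invented placeholder.
@Composable
fun NearbyShopsScreen(onNearbyShops: (Double, Double) -> Unit) {
    Column {
        Text("Find shops near you")
        LocationButton( // hypothetical composable from the upcoming Jetpack library
            onLocationGranted = { location ->
                // Precise location granted for this session only; no lasting permission.
                onNearbyShops(location.latitude, location.longitude)
            }
        )
    }
}
```

Because the library is described as falling back automatically to the existing runtime permission prompt on Android 16 and below, app code like this would not need a version check.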

The Android location button is available for testing as of Android 17 Beta 3.

Location access transparency

Users often struggle to understand the tools they can use to monitor and control access to their location data. In Android 17, we are aligning location permission transparency with the high standards already set for the Microphone and Camera.





Strengthening user privacy with density-based Coarse Location

Android 17 is also improving the algorithm for approximate (coarse) locations to make it aware of population density. Previously, coarse locations used a static 2 km-wide grid, which in low-population areas may not be sufficiently private, since a 2 km square could often contain only a handful of users. The new approach replaces this fixed grid with a dynamically sized area based on local population density. By enlarging the grid in areas with lower population density, Android ensures a more consistent privacy guarantee across environments, from dense urban centers to remote regions.
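
The intuition can be sketched in a few lines of Kotlin. This toy model is our illustration, not Android's actual algorithm: the density thresholds and cell sizes are invented, but they show how snapping to a larger grid cell yields a bigger anonymity area in sparse regions.

```kotlin
// Toy illustration of density-aware coarsening; thresholds and cell sizes are
// invented for demonstration and do not reflect Android's real parameters.
fun cellSizeMeters(densityPerKm2: Double): Double = when {
    densityPerKm2 >= 1_000 -> 2_000.0   // dense urban: the classic 2 km grid suffices
    densityPerKm2 >= 100   -> 5_000.0   // suburban: widen the cell
    else                   -> 10_000.0  // rural/remote: much larger anonymity area
}

// Snap a 1-D coordinate (in meters) to the centre of its grid cell.
fun coarsen(meters: Double, cellMeters: Double): Double {
    val cellIndex = kotlin.math.floor(meters / cellMeters)
    return (cellIndex + 0.5) * cellMeters
}
```

With these toy numbers, the same true coordinate is reported at 2 km granularity downtown but at 10 km granularity in a remote area, giving a comparable number of people per cell in both cases.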

Improved runtime permission dialog

The runtime permission dialog for location is one of the more complex flows for users to navigate: users must decide on the granularity and duration of access they are willing to grant to each app. To help users make the most informed privacy decisions with less friction, we've redesigned the dialog to make the "Precise" and "Approximate" choices more visually distinct, encouraging users to select the level of access that best suits their needs.




Start building for Android 17

The new location privacy tools are available now in Beta 3. We're looking for your feedback to help refine these features before the general release.


Build a smoother, more private experience today.

26 Mar 2026 11:00pm GMT

The Third Beta of Android 17

Posted by Matthew McCullough, VP of Product Management, Android Developer


Android 17 has officially reached platform stability today with Beta 3. That means that the API surface is locked; you can perform final compatibility testing and push your Android 17-targeted apps to the Play Store. In addition, Beta 3 brings a host of new capabilities to help you build better, more secure, and highly integrated applications.

Get your apps, libraries, tools, and game engines ready!

If you develop an SDK, library, tool, or game engine, it's even more important to prepare any necessary updates now to prevent your downstream app and game developers from being blocked by compatibility issues and allow them to target the latest SDK features. Please let your downstream developers know if updates are needed to fully support Android 17.

Testing involves installing your production app, or a test app that uses your library or engine, onto a device or emulator running Android 17 Beta 3 via Google Play or other means. Work through all your app's flows and look for functional or UI issues. Review the behavior changes to focus your testing. Each release of Android contains platform changes that improve privacy, security, and overall user experience, and these changes can affect your apps. Here are some changes to focus on:

Media and camera enhancements

Photo Picker customization options

Android now allows you to tailor the visual presentation of the photo picker to better complement your app's user interface. By leveraging the new PhotoPickerUiCustomizationParams API, you can modify the grid view aspect ratio from the standard 1:1 square to a 9:16 portrait display. This flexibility extends to both the ACTION_PICK_IMAGES intent and the embedded photo picker, enabling you to maintain a cohesive aesthetic when users interact with media.

This is all part of our effort to help make the privacy-preserving Android photo picker fit seamlessly with your app experience. Learn more about how you can embed the photo picker directly into your app for the most native experience.

val params = PhotoPickerUiCustomizationParams.Builder()
    .setAspectRatio(PhotoPickerUiCustomizationParams.ASPECT_RATIO_PORTRAIT_9_16)
    .build()

val intent = Intent(MediaStore.ACTION_PICK_IMAGES).apply {
    putExtra(MediaStore.EXTRA_PICK_IMAGES_UI_CUSTOMIZATION_PARAMS, params)
}

startActivityForResult(intent, REQUEST_CODE)

Support for the RAW14 image format: Android 17 introduces support for the RAW14 image format, a de facto industry standard in high-end digital photography, via the new ImageFormat.RAW14 constant. RAW14 is a single-channel, 14-bit-per-pixel format that uses a densely packed layout in which every four consecutive pixels are packed into seven bytes.
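
To make the "four pixels in seven bytes" arithmetic concrete (4 × 14 bits = 56 bits = 7 bytes), here is a round-trip sketch. The little-endian bit order is an assumption for illustration; the authoritative layout is whatever the ImageFormat.RAW14 documentation specifies.

```kotlin
// Illustrative only: packs/unpacks four 14-bit samples into 7 bytes assuming a
// little-endian bit layout; the real RAW14 byte order is platform-defined.
fun packRaw14(pixels: IntArray): ByteArray {
    require(pixels.size == 4 && pixels.all { it in 0..0x3FFF })
    var acc = 0L
    for (i in 0 until 4) acc = acc or (pixels[i].toLong() shl (14 * i))
    return ByteArray(7) { ((acc shr (8 * it)) and 0xFF).toByte() } // 56 bits = 7 bytes
}

fun unpackRaw14(bytes: ByteArray): IntArray {
    require(bytes.size == 7)
    var acc = 0L
    for (i in 0 until 7) acc = acc or ((bytes[i].toLong() and 0xFF) shl (8 * i))
    return IntArray(4) { ((acc shr (14 * it)) and 0x3FFF).toInt() }
}
```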

Vendor-defined camera extensions: Android 17 adds vendor-defined extensions that enable hardware partners to define and implement custom camera extension modes, giving you access to the best and latest camera features, such as 'Super Resolution' or cutting-edge AI-driven enhancements. You can query for these modes using the isExtensionSupported(int) API.

Camera device type APIs: New Android 17 APIs allow you to query the underlying device type to identify if a camera is built-in hardware, an external USB webcam, or a virtual camera.
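
A sketch of such a query might look like the following; note that INFO_DEVICE_TYPE and the type constants are invented placeholders, as the post does not name the final Android 17 API.

```kotlin
// Hypothetical sketch: the characteristics key and constants below are placeholders.
val cameraManager = context.getSystemService(CameraManager::class.java)
for (cameraId in cameraManager.cameraIdList) {
    val characteristics = cameraManager.getCameraCharacteristics(cameraId)
    when (characteristics.get(CameraCharacteristics.INFO_DEVICE_TYPE)) { // hypothetical key
        CameraMetadata.INFO_DEVICE_TYPE_BUILTIN -> { /* prefer for default capture */ }
        CameraMetadata.INFO_DEVICE_TYPE_USB_EXTERNAL -> { /* offer as a webcam source */ }
        CameraMetadata.INFO_DEVICE_TYPE_VIRTUAL -> { /* skip for physical-capture flows */ }
    }
}
```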

Bluetooth LE Audio hearing aid support

Android now includes a specific device category for Bluetooth Low Energy (BLE) Audio hearing aids. With the addition of the AudioDeviceInfo.TYPE_BLE_HEARING_AID constant, your app can now distinguish hearing aids from regular headsets.

val audioManager = getSystemService(Context.AUDIO_SERVICE) as AudioManager
val devices = audioManager.getDevices(AudioManager.GET_DEVICES_OUTPUTS)
val isHearingAidConnected = devices.any { it.type == AudioDeviceInfo.TYPE_BLE_HEARING_AID }

Granular audio routing for hearing aids

Android 17 allows users to independently manage where specific system sounds are played. They can choose to route notifications, ringtones, and alarms to connected hearing aids or the device's built-in speaker.

Extended HE-AAC software encoder

Android 17 introduces a system-provided Extended HE-AAC software encoder. This encoder supports both low and high bitrates using unified speech and audio coding. You can access this encoder via the MediaCodec API using the name c2.android.xheaac.encoder or by querying for the audio/mp4a-latm MIME type.

val encoder = MediaCodec.createByCodecName("c2.android.xheaac.encoder")
val format = MediaFormat.createAudioFormat(MediaFormat.MIMETYPE_AUDIO_AAC, 48000, 1)
format.setInteger(MediaFormat.KEY_BIT_RATE, 24000)
format.setInteger(MediaFormat.KEY_AAC_PROFILE, MediaCodecInfo.CodecProfileLevel.AACObjectXHE)
encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE)

Performance and Battery Enhancements

Reduce wakelocks with listener support for allow-while-idle alarms

Android 17 introduces a new variant of AlarmManager.setExactAndAllowWhileIdle that accepts an OnAlarmListener instead of a PendingIntent. This new callback-based mechanism is ideal for apps that currently rely on continuous wakelocks to perform periodic tasks, such as messaging apps maintaining socket connections.

val alarmManager = getSystemService(AlarmManager::class.java)
val listener = AlarmManager.OnAlarmListener {
    // Do work here
}

alarmManager.setExactAndAllowWhileIdle(
    AlarmManager.ELAPSED_REALTIME_WAKEUP,
    SystemClock.elapsedRealtime() + 60000,
    listener,
    null
)

Privacy updates

System-provided Location Button

Android is introducing a system-rendered location button that you will be able to embed directly into your app's layout using an Android Jetpack library. When a user taps this system button, your app is granted precise location access for the current session only. To implement this, you need to declare the USE_LOCATION_BUTTON permission.
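
Declaring it is a one-line manifest entry. The fully qualified name below is an assumption; the post only gives the short name USE_LOCATION_BUTTON.

```xml
<!-- Assumed fully qualified name; the post only specifies USE_LOCATION_BUTTON. -->
<uses-permission android:name="android.permission.USE_LOCATION_BUTTON" />
```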

Discrete password visibility settings for touch and physical keyboards

This feature splits the existing "Show passwords" system setting into two distinct user preferences: one for touch-based inputs and another for physical (hardware) keyboard inputs. Characters entered via physical keyboards are now hidden immediately by default.

val isPhysical = event.source and InputDevice.SOURCE_KEYBOARD == InputDevice.SOURCE_KEYBOARD
val shouldShow = android.text.ShowSecretsSetting.shouldShowPassword(context, isPhysical)

Security

Enforced read-only dynamic code loading

To improve security against code injection attacks, Android now enforces that dynamically loaded native libraries must be read-only. If your app targets Android 17 or higher, all native files loaded using System.load() must be marked as read-only beforehand.

val libraryFile = File(context.filesDir, "my_native_lib.so")
// Mark the file as read-only before loading to comply with Android 17+ security requirements
libraryFile.setReadOnly()

System.load(libraryFile.absolutePath)

Post-Quantum Cryptography (PQC) Hybrid APK Signing

To prepare for future advancements in quantum computing, Android is introducing support for Post-Quantum Cryptography (PQC) through the new v3.2 APK Signature Scheme. This scheme utilizes a hybrid approach, combining a classical signature with an ML-DSA signature.

User experience and system UI

Better support for widgets on external displays

This feature improves the visual consistency of app widgets when they are shown on connected or external displays with different pixel densities by honoring DP and SP units.

val options = appWidgetManager.getAppWidgetOptions(appWidgetId)
val displayId = options.getInt(AppWidgetManager.OPTION_APPWIDGET_DISPLAY_ID)

val remoteViews = RemoteViews(context.packageName, R.layout.widget_layout)
remoteViews.setViewPadding(
    R.id.container,
    16f, 8f, 16f, 8f,
    TypedValue.COMPLEX_UNIT_DIP
)

Hidden app labels on the home screen

Android now provides a user setting to hide app names (labels) on the home screen workspace. Ensure your app icon is distinct and recognizable.

Desktop Interactive Picture-in-Picture

Unlike traditional Picture-in-Picture windows, these pinned windows remain interactive while staying always on top of other application windows in desktop mode.

val appTask: ActivityManager.AppTask = activity.getSystemService(ActivityManager::class.java).appTasks[0]
appTask.requestWindowingLayer(
    ActivityManager.AppTask.WINDOWING_LAYER_PINNED,
    context.mainExecutor,
    object : OutcomeReceiver<Int, Exception> {
        override fun onResult(result: Int) {
            if (result == ActivityManager.AppTask.WINDOWING_LAYER_REQUEST_GRANTED) {
                // Task successfully moved to pinned layer
            }
        }
        override fun onError(error: Exception) {}
    }
)

Redesigned screen recording toolbar

Core functionality

VPN app exclusion settings

By using the new ACTION_VPN_APP_EXCLUSION_SETTINGS Intent, your app can launch a system-managed Settings screen where users can select applications to bypass the VPN tunnel.

val intent = Intent(Settings.ACTION_VPN_APP_EXCLUSION_SETTINGS)
if (intent.resolveActivity(packageManager) != null) {
    startActivity(intent)
}

OpenJDK 25 and 21 API updates

This update brings extensive features and refinements from OpenJDK 21 and OpenJDK 25, including the latest Unicode support and enhanced SSL support for named groups in TLS.

Get started with Android 17

You can enroll any supported Pixel device or use the 64-bit system images with the Android Emulator.

For complete information, visit the Android 17 developer site.

26 Mar 2026 8:00pm GMT

25 Mar 2026

feedAndroid Developers Blog

Meet the class of 2026 for the Google Play Apps Accelerator

Posted by Robbie McLachlan, Developer Marketing


The wait is over! We are incredibly excited to share the Google Play Apps Accelerator class of 2026. We've handpicked a group of high-potential studios from across the globe to embark on a 12-week journey designed to supercharge their success.

Here's what's in store for the program's first-ever class.


Without further ado, join us in congratulating them!


Google Play Apps Accelerator | Class of 2026

Americas

Anytune

AstroVeda

BetterYou

Changed

Focus Forge

Human Program

Know Your Lemons

kweliTV

Language Innovation

Matraquinha

MR ROCCO

MUU nutrition

NKENNE

Skarvo

Starcrossed

Wishfinity

Asia Pacific

Human Health

Kitakuji

Lazy Surfers

Mellers Tech

Reehee Company

Europe, Middle East & Africa

cabuu

Class54 Education

Digital Garden

EverPixel

Geolives

HelloMind

ifal

Idea Accelerator

Maposcope

Ochy

Picastro

Pixelbite

Record Scanner

Talkao

unorderly

Xeropan International



Congratulations again to all the founders selected; we can't wait to see your apps grow on our platform.

The Google Play Apps Accelerator is part of our mission to help businesses of all sizes grow on Google Play and reach their full potential. Discover more about Google Play's programs, resources and tools.

25 Mar 2026 5:00pm GMT

24 Mar 2026

feedAndroid Developers Blog

Contact Picker: Privacy-First Contact Sharing

Posted by Roxanna Aliabadi Walker, Senior Product Manager

Privacy and user control remain at the heart of the Android experience. Just as the photo picker made media sharing secure and easy to implement, we are now bringing that same level of privacy, simplicity, and great user experience to contact selection.

A New Standard for Contact Privacy

Historically, applications requiring access to a specific user's contacts relied on the broad READ_CONTACTS permission. While functional, this approach often granted apps more data than necessary. The new Android Contact Picker, introduced in Android 17, changes this dynamic by providing a standardized, secure, and searchable interface for contact selection.

This feature allows users to grant apps access only to the specific contacts they choose, aligning with Android's commitment to data transparency and minimized permission footprints.




How It Works

Developers can integrate the Contact Picker using the Intent.ACTION_PICK_CONTACTS intent. This updated API offers several powerful capabilities:

  • Optimized Performance: The Contact Picker returns a single Uri that allows for collective result querying, eliminating the need to query each contact's Uri separately, as required by ACTION_PICK. This efficiency further reduces system overhead by using a single Binder transaction.

Backward Compatibility and Implementation

For devices running Android 17 or higher, the system automatically upgrades legacy ACTION_PICK intents that specify contact data types to the new, more secure interface. However, to take full advantage of advanced features like multi-selection, developers are encouraged to update their implementation code and utilize the ContentResolver to query the returned Session URI.

Integrate the contact picker

To integrate the Contact Picker, developers use the ACTION_PICK_CONTACTS intent. Below is a code example demonstrating how to launch the picker and request specific data fields, such as email addresses and phone numbers.

// State to hold the list of selected contacts
var contacts by remember { mutableStateOf<List<Contact>>(emptyList()) }
val context = LocalContext.current
val coroutineScope = rememberCoroutineScope()

// Launcher for the Contact Picker intent
val pickContact = rememberLauncherForActivityResult(StartActivityForResult()) {
    if (it.resultCode == Activity.RESULT_OK) {
        val resultUri = it.data?.data ?: return@rememberLauncherForActivityResult

        // Process the result URI on a background dispatcher
        coroutineScope.launch {
            contacts = processContactPickerResultUri(resultUri, context)
        }
    }
}

// Define the specific contact data fields you need
val requestedFields = arrayListOf(
    Email.CONTENT_ITEM_TYPE,
    Phone.CONTENT_ITEM_TYPE,
)

// Set up the intent for the Contact Picker
val pickContactIntent = Intent(ACTION_PICK_CONTACTS).apply {
    putExtra(EXTRA_PICK_CONTACTS_SELECTION_LIMIT, 5)
    putStringArrayListExtra(
        EXTRA_PICK_CONTACTS_REQUESTED_DATA_FIELDS,
        requestedFields
    )
    putExtra(EXTRA_PICK_CONTACTS_MATCH_ALL_DATA_FIELDS, false)
}

// Launch the picker
pickContact.launch(pickContactIntent)

After the user makes a selection, the app processes the result by querying the returned Session URI to extract the requested contact information.


// Data class representing a parsed Contact with selected details
data class Contact(val id: String, val name: String, val email: String?, val phone: String?)

// Helper function to query the content resolver with the URI returned by the Contact Picker.
// Parses the cursor to extract contact details such as name, email, and phone number
private suspend fun processContactPickerResultUri(
    sessionUri: Uri,
    context: Context
): List<Contact> = withContext(Dispatchers.IO) {
    // Define the columns we want to retrieve from the ContactPicker ContentProvider
    val projection = arrayOf(
        ContactsContract.Contacts._ID,
        ContactsContract.Contacts.DISPLAY_NAME_PRIMARY,
        ContactsContract.Data.MIMETYPE, // Type of data (e.g., email or phone)
        ContactsContract.Data.DATA1, // The actual data (Phone number / Email string)
    )

    val results = mutableListOf<Contact>()

    // Note: The Contact Picker Session Uri doesn't support custom selection & selectionArgs.
    context.contentResolver.query(sessionUri, projection, null, null, null)?.use { cursor ->
        // Get the column indices for our requested projection
        val contactIdIdx = cursor.getColumnIndex(ContactsContract.Contacts._ID)
        val mimeTypeIdx = cursor.getColumnIndex(ContactsContract.Data.MIMETYPE)
        val nameIdx = cursor.getColumnIndex(ContactsContract.Contacts.DISPLAY_NAME_PRIMARY)
        val data1Idx = cursor.getColumnIndex(ContactsContract.Data.DATA1)

        while (cursor.moveToNext()) {
            val contactId = cursor.getString(contactIdIdx)
            val mimeType = cursor.getString(mimeTypeIdx)
            val name = cursor.getString(nameIdx) ?: ""
            val data1 = cursor.getString(data1Idx) ?: ""

            // Determine if the current row represents an email or a phone number
            val email = if (mimeType == Email.CONTENT_ITEM_TYPE) data1 else null
            val phone = if (mimeType == Phone.CONTENT_ITEM_TYPE) data1 else null

            // Add the parsed contact to our results list
            results.add(Contact(contactId, name, email, phone))
        }
    }

    return@withContext results
}


Check out the full Contact Picker documentation.

Best Practices for Developers

To provide the best user experience and maintain high security standards, we recommend the following:
  • Data Minimization: Only request the specific data fields (e.g., email) your app needs.
  • Immediate Persistence: Persist selected data immediately, as the Session URI access is temporary.

24 Mar 2026 8:00pm GMT

Beyond Infotainment: Extending Android Automotive OS for Software-defined Vehicles

Posted by Eser Erdem, Senior Engineering Manager, Android Automotive

At Google, we're deeply committed to the automotive industry--not just as a technology provider, but as a partner in the industry's transformation. We believe that car makers and users should have choice and flexibility, and that open platforms are the best enablers. For over a decade, we have provided Android Automotive OS (AAOS) as an open platform for infotainment, enabling rich innovation and differentiation in the in-vehicle digital experience. However, as vehicles modernize, car makers face new hurdles: fragmented software across compute components, poor portability between architectures, and a lack of granular update capabilities. To address these problems, we are expanding AAOS beyond infotainment with Android Automotive OS for Software Defined Vehicles (AAOS SDV)--an open platform featuring a modular structure, a topology-agnostic communication layer, and support for granular updates.

The transition toward SDVs is an incredible industry transformation, and we are eager to contribute to the broader ecosystem making it happen. Later this year, AAOS SDV will be available in the Android Open Source Project (AOSP) for uses beyond infotainment. By bringing our SDV platform into the open-source domain, we empower the industry to develop or enhance features that lower costs, accelerate time to market, and provide significant advantages across the automotive landscape.



A Foundation for the Software-Defined Vehicle

AAOS SDV is engineered to address the core challenges of modern vehicle development. This new AAOS expansion provides a compact, performant, and scalable software foundation based on a headless Android native stack, extending much deeper into the vehicle architecture to power software components throughout the vehicle, such as seat actuators, the instrument cluster, climate control, lighting, cameras, mirrors, telemetry, and more.

AAOS SDV's core is a lightweight Android-based operating system incorporating low-level automotive specific frameworks for communications, diagnostics, software updates, and more. This enables AAOS SDV to power many different vehicle controllers, tackling Core Compute, Body Controls, and Cluster domains.

In addition, the AAOS SDV platform includes a new framework, Display Safety, for implementing instrument cluster applications including audible chimes, regulatory camera, and sophisticated graphics that blend seamlessly with AAOS IVI content. Display Safety includes a safety design toolchain and a reference safety monitor, allowing OEMs to meet functional safety requirements leveraging the diverse platform safety mechanisms of Automotive SoCs.

Flexible Deployment for AAOS SDV

Engineered for flexibility, the AAOS SDV framework can utilize hypervisor-backed virtualization with virtio support to separate software domains, or it can be deployed on bare metal for optimal low-latency performance.

Transforming the Developer Experience

AAOS SDV is designed to power modern vehicles, but it was also designed to change how modern vehicle software is developed, tested, and delivered, with the goal of reducing development time and cost while increasing innovation and agility. With its optimized development workflows, our open-source SDV platform provides a wide range of benefits across the automotive industry:
  • Accelerated Time-to-Market: AAOS SDV accelerates development with production-ready software for a range of vehicle components, which can be further modified as needed.
  • Standard Signal Catalog: A new standard signal catalog brings OEMs and automotive suppliers onto the same page, eliminating redundant engineering efforts and significantly reducing platform development costs.
  • Optimized for virtual cloud development: AAOS SDV was designed from the ground up to support virtual cloud development, enabling partners to design, test, and validate in-vehicle components well ahead of hardware availability. AAOS SDV already runs on the Android Virtual Device (Cuttlefish) and works well with existing Google Cloud integrations such as Google Cloud Horizon, enabling a digital twin solution at scale.
  • A Service-Oriented Architecture: Vehicle functions are developed as topology-agnostic services which are reusable across different architectures. The platform treats the vehicle as a dynamic, connected system. This allows for granular, service-level updates with built-in dependency handling, enabling you to deploy new features over-the-air and create continuous improvement loops.
  • Future-Ready for new services: The platform is designed to simplify the development of telemetry and AI training feedback loops, accelerating the deployment of advanced features for both enterprise fleets and consumer vehicles.

Production Ready: Partnering with Renault

We are proud to highlight our deep partnership with Renault to underscore the production readiness of the AAOS SDV platform. Renault is currently leveraging the Android Automotive OS SDV platform for its upcoming Renault Trafic e-Tech, "[...] production set to begin in late 2026". The Renault Trafic e-Tech validates the platform's ability to accelerate development and enable a new generation of software-defined commercial vehicles.

Scaling Ready: Partnering with Qualcomm

Qualcomm is scaling the Android Automotive OS SDV platform through our strategic partnership. At CES 2026, Qualcomm introduced Snapdragon vSoC on Google Cloud and announced a scaling collaboration to deliver a turnkey, pre-integrated AAOS SDV stack on Snapdragon Digital Chassis platforms.

Building an Open AAOS Ecosystem

The power of AAOS comes from its vibrant ecosystem. To prepare for the open source release later this year, we are proactively working with leading industry carmakers, suppliers, silicon platforms, and software vendors to ensure that the AAOS SDV platform is well supported and robustly integrated within the automotive ecosystem. We look forward to sharing more updates with our partners in the months ahead.

24 Mar 2026 4:00pm GMT

19 Mar 2026

feedAndroid Developers Blog

Android developer verification: Balancing openness and choice with safety

Posted by Matthew Forsythe, Director Product Management, Android App Safety

Android proves you don't have to choose between an open ecosystem and a secure one. Since announcing updated verification requirements, we've worked with the community to ensure these protections are robust yet respectful of platform freedom. We've heard from power users that they want to take educated risks to install software from unverified developers. Today, we're sharing details on a new advanced flow that provides this option.


Graphic titled 'Sideloading is here to stay.' It shows a prompt with buttons for 'Don’t install' and 'Install anyway' next to text explaining that once risks are confirmed, users can install apps from unverified developers.


Advanced flow safeguards against coercion

Android is built on choice. That is why we've developed the advanced flow: an approach that preserves power users' ability to sideload apps from unverified developers.




This flow is a one-time process for power users - but it was designed carefully to prevent those in the midst of a scam attempt from being coerced by high pressure tactics to install malicious software. In these scenarios, scammers exploit fear - using threats of financial ruin, legal trouble, or harm to a loved one - to create a sense of extreme urgency. They stay on the phone with victims, coaching them to bypass security warnings and disable security settings before the victim has a chance to think or seek help. According to a 2025 report from the Global Anti-Scam Alliance (GASA), 57% of surveyed adults experienced a scam in the past year, resulting in a global consumer loss of $442 billion. Because the consequences of these scams that use sophisticated social engineering tactics are so severe, we have carefully engineered the advanced flow to provide the critical time and space needed to break the cycle of coercion.

How the advanced flow works for users

  • Enable developer mode in system settings: Activating this is simple for power users, but it prevents accidental triggers or "one-tap" bypasses often used in high-pressure scams.
  • Confirm you aren't being coached: There is a quick check to make sure that no one is talking you into turning off your security. While power users know how to vet apps, scammers often pressure victims into disabling protections.
  • Restart your phone and reauthenticate: This cuts off any remote access or active phone calls a scammer might be using to watch what you're doing.
  • Come back after the protective waiting period and verify: There is a one-time, one-day wait, after which you confirm it's really you making this change using biometric authentication (fingerprint or face unlock) or your device PIN. Scammers rely on manufactured urgency, so this breaks their spell and gives you time to think.
  • Install apps: Once you confirm you understand the risks, you're all set to install apps from unverified developers, with the option of enabling for 7 days or indefinitely. For safety, you'll still see a warning that the app is from an unverified developer, but you can just tap "Install Anyway."
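The gating described above can be pictured as a small state machine: each step must be completed in order, and the final verification cannot happen until the cooling-off period has elapsed. The sketch below is purely illustrative; none of these names correspond to real Android APIs.

```kotlin
import java.time.Duration
import java.time.Instant

// Hypothetical sketch of the advanced-flow gating; not real Android APIs.
// Steps must be completed in order, and the final verification is rejected
// until the one-day wait after restart/reauthentication has elapsed.
enum class Step { DEVELOPER_MODE, COACHING_CHECK, RESTART_REAUTH, WAIT_AND_VERIFY }

class AdvancedFlow(private val waitPeriod: Duration = Duration.ofDays(1)) {
    private val completed = mutableSetOf<Step>()
    private var waitStartedAt: Instant? = null

    fun completeStep(step: Step, now: Instant = Instant.now()) {
        val remaining = Step.values().filter { it !in completed }
        require(remaining.isNotEmpty()) { "Flow already complete" }
        require(step == remaining.first()) { "Complete ${remaining.first()} first" }
        if (step == Step.WAIT_AND_VERIFY) {
            val started = checkNotNull(waitStartedAt)
            require(Duration.between(started, now) >= waitPeriod) {
                "Protective waiting period not over yet"
            }
        }
        completed += step
        if (step == Step.RESTART_REAUTH) waitStartedAt = now
    }

    // Installs from unverified developers unlock only once every step is done.
    fun canInstallUnverified(): Boolean = completed.size == Step.values().size
}
```

The key design point the sketch captures is that no single tap, and no amount of coaching over a phone call, can collapse the sequence: the wait is enforced by time, not by user input.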

A secure Android for every developer

We know a "one size fits all" approach doesn't work for our diverse ecosystem. We want to ensure that identity verification isn't a barrier to entry, so we're providing different paths to fit your specific needs.

In addition to the advanced flow, we're building free, limited-distribution accounts for students and hobbyists. This allows you to share apps with a small group (up to 20 devices) without needing to provide a government-issued ID or pay a registration fee. This ensures Android remains an open platform for learning and experimentation while maintaining robust protections for the broader community.

Limited distribution accounts and advanced flow for users will be available in August before the new developer verification requirements take effect.

Visit our website for more details. We look forward to sharing more in the coming days and weeks.

19 Mar 2026 2:00pm GMT

16 Mar 2026

feedAndroid Developers Blog

Get inspired and take your apps to desktop

Posted by Ivy Knight, Senior Design Advocate, Android











We're thrilled to announce major updates to our design resources, giving you the comprehensive guidance you need to create polished, adaptive Android apps across all form factors! We now have Desktop Experience guidance and a refreshed Android Design Gallery.

New Desktop Experience Design Guidance

Your users are engaging with Android apps on more diverse devices than ever before, from phones and foldables to laptops and external monitors. A "desktop experience" occurs anytime your app is in a desktop-like mode, typically involving a non-touch input device like a keyboard or mouse, or another display such as a monitor (read more in the connected display announcement). This means designing for larger screens and accommodating additional input states. These new design experiences are meant to maximize productivity for your users through higher information density and multitasking capabilities.


Dive into desktop experience guidance to help optimize your app with desktop design principles, input interaction guidance, and system UI considerations.

The new guidance includes foundational guides where you can learn design principles that make desktop experiences unique, such as how multitasking is at the core of desktop experiences.

When your app is in a desktop experience, keep crucial interaction details in mind, such as how best to design for unique inputs, like choosing from the system-provided cursors.

For specialized actions not covered by system icons, consider creating a custom cursor icon, while ensuring it remains easy for users to find on the page.


A desktop experience brings more multitasking features, like windowing, so expect your app to take on a variety of dimensions with a header bar.

Desktops have much larger screens than mobile devices, and users typically interact using a mouse, which has finer precision than a finger on a touchscreen. This means you can present a UI with higher information density so your users can be more productive!


Want to get started quickly? Check out the walkthrough to go from mobile to desktop and design along with the updated Adaptive Design lab.


For more on criteria that makes a differentiated quality app, read the newly updated adaptive app quality guidelines and adaptive developer guidance.

Introducing the Android Design Gallery

Looking for inspiration? We've launched the Android Design Gallery! This new resource is a living catalog of inspirational examples across multiple verticals, form factors, and UX patterns. We'll be continually adding new inspirational examples, so check back often to see the latest and greatest in Android design.







16 Mar 2026 5:00pm GMT

13 Mar 2026

feedAndroid Developers Blog

Room 3.0 - Modernizing the Room

Posted by Daniel Santiago Rivera, Software Engineer





The first alpha of Room 3.0 has been released! Room 3.0 is a major breaking version of the library that focuses on Kotlin Multiplatform (KMP) and adds support for JavaScript and WebAssembly (WASM) on top of the existing Android, iOS and JVM desktop support.

In this blog we outline the breaking changes, the reasoning behind Room 3.0, and the steps you can take to migrate from Room 2.x.


Breaking changes

Room 3.0 includes the following breaking API changes:

  • Dropping SupportSQLite APIs: Room 3.0 is fully backed by the androidx.sqlite driver APIs. The SQLiteDriver APIs are KMP-compatible, and removing Room's dependency on Android's SQLite APIs simplifies the API surface, since it avoids having two possible backends.

  • No more Java code generation: Room 3.0 exclusively generates Kotlin code. This aligns with the evolving Kotlin-first paradigm but also simplifies the codebase and development process, enabling faster iterations.

  • Focus on KSP: We are also dropping support for Java Annotation Processing (AP) and KAPT. Room 3.0 is solely a KSP (Kotlin Symbol Processing) processor, allowing for better processing of Kotlin codebases without being limited by the Java language.

  • Coroutines first: Room 3.0 embraces Kotlin coroutines, making its APIs coroutine-first. Coroutines are the KMP-compatible asynchronous framework, and making Room asynchronous by nature is a critical requirement for supporting web platforms.

A new package

To prevent compatibility issues with existing Room 2.x implementations and for libraries with transitive dependencies on Room (for example, WorkManager), Room 3.0 resides in a new package, which means it also has a new Maven group and artifact IDs. For example, androidx.room:room-runtime has become androidx.room3:room3-runtime, and classes such as androidx.room.RoomDatabase are now located at androidx.room3.RoomDatabase.
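On the Gradle side, the move is mostly a coordinate swap. The version string below is a placeholder, and room3-compiler assumes the renaming pattern described above carries over to the compiler artifact:

```kotlin
// build.gradle.kts — sketch of the dependency change (versions are placeholders)
dependencies {
    // Before: Room 2.x
    // implementation("androidx.room:room-runtime:2.8.0")
    // ksp("androidx.room:room-compiler:2.8.0")

    // After: Room 3.0 — new Maven group and artifact IDs
    implementation("androidx.room3:room3-runtime:3.0.0-alpha01")
    ksp("androidx.room3:room3-compiler:3.0.0-alpha01")
}
```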

Kotlin and Coroutines First

With no more Java code generation, Room 3.0 also requires KSP and the Kotlin compiler even if the codebase interacting with Room is in Java. It is recommended to have a multi-module project where Room usage is concentrated and the Kotlin Gradle Plugin and KSP can be applied without affecting the rest of the codebase.

Room 3.0 also requires coroutines: specifically, DAO functions must be suspending unless they return a reactive type, such as a Flow. Room 3.0 disallows blocking DAO functions. See the Coroutines on Android documentation to get started integrating coroutines into your application.
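To illustrate the suspending shape Room 3.0 requires, here is a stdlib-only sketch. In real code Room generates the implementation from @Dao/@Query annotations; the in-memory class and the minimal runner below are illustrative stand-ins, not Room APIs.

```kotlin
import kotlin.coroutines.Continuation
import kotlin.coroutines.EmptyCoroutineContext
import kotlin.coroutines.startCoroutine

data class User(val id: Long, val name: String)

// In Room 3.0 a DAO interface exposes suspending functions (or reactive
// types such as Flow); blocking DAO functions are disallowed.
interface UserDao {
    suspend fun insert(user: User)
    suspend fun getById(id: Long): User?
}

// Hand-written in-memory stand-in; Room would generate this for you.
class InMemoryUserDao : UserDao {
    private val users = mutableMapOf<Long, User>()
    override suspend fun insert(user: User) { users[user.id] = user }
    override suspend fun getById(id: Long): User? = users[id]
}

// Tiny stdlib-only runner so this sketch runs without kotlinx-coroutines;
// a real app would call DAO functions from a coroutine scope instead.
fun <T> runSuspending(block: suspend () -> T): T {
    var result: Result<T>? = null
    block.startCoroutine(object : Continuation<T> {
        override val context = EmptyCoroutineContext
        override fun resumeWith(r: Result<T>) { result = r }
    })
    return result!!.getOrThrow() // safe here: nothing actually suspends
}
```

The practical consequence is that every DAO call site must already be inside a coroutine, which is what makes the same DAO definitions viable on web targets where storage access is inherently asynchronous.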

Migration to SQLiteDriver APIs

With the shift away from SupportSQLite, apps will need to migrate to the SQLiteDriver APIs. This migration is essential to leveraging the full benefits of Room 3.0, including allowing the use of the bundled SQLite library via the BundledSQLiteDriver. You can start migrating to the driver APIs today with Room 2.7.0+. We strongly encourage you to avoid any further usage of SupportSQLite. If you migrate your Room integrations to SQLiteDriver APIs, then the transition to Room 3.0 is easier since the package change mostly involves updating symbol references (imports) and might require minimal changes to call-sites.

For a brief overview of the SQLiteDriver APIs, check out the SQLiteDriver APIs documentation.

For more details on how to migrate Room to use SQLiteDriver APIs, check out the official documentation to migrate from SupportSQLite.

Room SupportSQLite wrapper

We understand that completely removing SupportSQLite might not be immediately feasible for all projects. To ease this transition, Room 2.8.0, the latest version of the Room 2.x series, introduced a new artifact called androidx.room:room-sqlite-wrapper. This artifact offers a compatibility API that allows you to convert a RoomDatabase into a SupportSQLiteDatabase, even if the SupportSQLite APIs in the database have been disabled due to a SQLiteDriver being installed. This provides a temporary bridge for developers who need more time to fully migrate their codebase. The artifact continues to exist in Room 3.0 as androidx.room3:room3-sqlite-wrapper to enable the migration to Room 3.0 while still supporting critical SupportSQLite usage.

For example, invocations of roomDatabase.openHelper.writableDatabase can be replaced by roomDatabase.getSupportWrapper(), and a wrapper will be provided even if setDriver() is called on Room's builder.

For more details check out the room-sqlite-wrapper documentation.

Room and SQLite Web Support

Support for the Kotlin Multiplatform targets JS and WasmJS brings some of the most significant API changes. Specifically, many APIs in Room 3.0 are suspend functions, since proper support for web storage is asynchronous. The SQLiteDriver APIs have also been updated to support the web, and a new asynchronous web driver is available in androidx.sqlite:sqlite-web. It is a Web Worker-based driver that persists the database in the Origin Private File System (OPFS).

For more details on how to set up Room for the Web check out the Room 3.0 release notes.

Custom DAO Return Types

Room 3.0 introduces the ability to add custom integrations to Room, similar to RxJava and Paging. Through a new annotation API called @DaoReturnTypeConverter, you can create your own integration such that Room's generated code becomes accessible at runtime. This enables @Dao functions to have custom return types without waiting for the Room team to add support. Existing integrations have been migrated to this functionality, so those who rely on them will now need to add the converters to their @Database or @Dao definitions.

For example, the Paging converter is located in the androidx.room3:room3-paging artifact and is called PagingSourceDaoReturnTypeConverter, while the LiveData converter is in androidx.room3:room3-livedata and is called LiveDataReturnTypeConverter.

For more details check out the DAO Return Type Converters section in the Room 3.0 release notes.

Maintenance mode of Room 2.x

Since the development of Room will be focused on Room 3, the current Room 2.x version enters maintenance mode. This means that no major features will be developed but patch releases (2.8.1, 2.8.2, etc.) will still occur with bug fixes and dependency updates. The team is committed to this work until Room 3 becomes stable.

Final thoughts

We are incredibly excited about the potential of Room 3.0 and the opportunities it unlocks for the Kotlin ecosystem. Stay tuned for more updates as we continue this journey!

13 Mar 2026 5:00pm GMT

TikTok reduces code size by 58% and improves app performance for new features with Jetpack Compose

Posted by Ajesh R Pai, Developer Relations Engineer & Ben Trengrove, Developer Relations Engineer

TikTok is a global short-video platform known for its massive user base and innovative features. The team is constantly releasing updates, experiments, and new features for their users. Faced with the challenge of maintaining velocity while managing technical debt, the TikTok Android team turned to Jetpack Compose.

The team wanted to enable faster, higher-quality iteration of product requirements. By leveraging Compose, the team sought to improve engineering efficiency by writing less code and reducing cognitive load, while also achieving better performance and stability.

Streamlining complex UI to accelerate developer productivity

TikTok pages are often more complex than they appear, containing numerous layered conditional requirements. This complexity often resulted in difficult-to-maintain, sub-optimally structured View hierarchies and excessive View nesting, which caused performance degradation due to an increased number of measure passes.

Compose offered a direct solution to this structural problem.

Furthermore, Compose's measurement strategy helps reduce double taxation, making measure performance easier to optimize.

To improve developer productivity, TikTok's central Design System team provides a component library for teams working on different app features. The team observed that development in Compose is simple: leveraging small composables is highly effective, while incorporating large UI blocks with conditional logic is straightforward and has minimal overhead.


Building a path forward through strategic migration

By strategically adopting Jetpack Compose, TikTok was able to stay on top of technical debt while continuing to focus on creating great experiences for their users. Compose's clean handling of conditional logic and streamlined composition allowed the team to achieve up to a 78% reduction in page loading time on new or fully rewritten pages: the improvement was 20-30% in smaller cases and 70-80% for full rewrites and new features. They were also able to reduce their code size by 58% compared to the same feature built in Views.

The TikTok team's overall strategy was to incrementally migrate specific user journeys. This gave them the opportunity to migrate, confirm measurable benefits, and then scale to more screens. They started by using Compose to simplify the overall structure of the QR code feature and saw improvements. The team later expanded the migration to the Login and Sign-up experiences.

The team shared some additional learnings:

While checking performance during migration, the TikTok team found that using many small ComposeViews to replace elements inside a single ViewHolder caused composition overhead. They achieved better results by expanding the migration to use a single ComposeView for the entire ViewHolder.

When migrating a Fragment inside a ViewPager, with custom height logic and conditional logic to hide and show UI based on experiments, performance wasn't impacted. In this case, migrating the entire ViewPager to a composable performed better than migrating only the Fragment.

Jun Shen really likes that Compose "reduces the amount of code required for feature development, improves testability, and accelerates delivery". The team plans to steadily increase Compose adoption, making it their preferred framework in the long term. Jetpack Compose proved to be a powerful solution for improving both their developer experience and production metrics at scale.

Get Started with Jetpack Compose

Learn more about how Jetpack Compose can help your team.

13 Mar 2026 1:00pm GMT

11 Mar 2026

feedAndroid Developers Blog

Level Up: Test Sidekick and prepare for upcoming program milestones

Posted by Maru Ahues Bouza, PM Director, Games on Google Play






Last September, we shared our vision for the future of Google Play Games grounded in a core belief: the best way to drive your game's success is to deliver a world-class player experience. We launched the Google Play Games Level Up program to recognize and reward great gaming experiences, while providing you with a powerful toolkit and new promotional opportunities to grow your games.

The momentum since our announcement has been incredibly positive, with more than 600 million gamers now using Play Games Services every month. Developers are also finding success, with one-third of all game installs on the Play Store now coming from editorially-driven organic discovery. In fact, in 2025, Level Up features have driven over 2.5 billion incremental acquisitions for featured games, in addition to an average uplift of 25% in installs during the featuring windows.

Today, we're inviting you to start testing Play Games Sidekick to keep your players in the action, sharing new Play Console updates to optimize your reach, and helping you prepare for our upcoming program milestones.


Boost retention and immersion with Play Games Sidekick

Play Games Sidekick is a helpful in-game overlay that gives players instant access to relevant gaming information, like rewards, offers, achievements, and quest progress, keeping them immersed while driving higher engagement for developers. It serves as a seamless bridge to the highly visible "You" tab, connecting your game to the 160 million monthly active users already engaging there, and doubles as an active gaming companion that enhances the player experience with helpful, AI-generated Game Tips.
Deep Rock Galactic: Survivor keeps players in the action with Play Games Sidekick

Today, Sidekick officially debuts in over 90 games, with the experience expanding to all Level Up titles later this year. But you don't need to wait for the broader rollout to get your game ready. You can now enable Sidekick through Play Console to preview and test how your players will interact with features like Achievements, Streaks, Play Points Coupons, and Game Tips. Once you've completed your testing, be sure to push Sidekick to production to ensure your game meets the Level Up user experience guidelines.

Enable Play Games Sidekick in Play Console to begin testing


Optimize reach and operations with new Play Console updates

We are also rolling out two new Play Console updates to help you optimize your reach and streamline operations:

  • Pre-reg device breakdowns: To aid launch decisions, you can now analyze the device distribution of your pre-registered audience by key device attributes including Android version, RAM and SoC. This enables you to optimize game performance, minimum specs, and marketing spend for the players already waiting for your game.

Identify launch-day risks and optimize performance for your players
with new pre-registration device breakdowns

  • Real-time feedback: With Level Up+, our tier for high-performing games, qualifying titles can unlock promotional content featuring and tools like deep-links and audience targeting. While submissions must meet Play's quality guidelines, you no longer have to wait 24 hours to learn about issues. You can now get immediate feedback on quality whenever possible.


Your 2026 checklist: Securing your Level Up benefits

Today, all games on Google Play qualify for Google Play Games Level Up. However, in order to maintain access to Level Up benefits like Play Points offers, expanded APK size limits, Play Store collections and campaigns consideration, or access to high-visibility surfaces like You tab and Sidekick, you'll need to ensure your game meets user experience guidelines by their upcoming milestones:

By July 2026:
  • Integrate Play Games Sidekick to offer a quick and easy entry point to access rewards, offers, and achievements through an in-game overlay.
  • Implement achievements with Play Games Services, to support authentication with the modern Gamer Profile, and to keep players engaged across the lifespan of your game.
By November 2026:
  • Implement cloud save to enable progress sync across devices.

Last week, we announced that we're working on an expanded Level Up program that builds on our successful foundation to further improve gaming experiences. The update will introduce new requirements that will unlock additional benefits like lower service fees. Engaging with the program now ensures your work is strategically aligned with these future updates. We'll share more details in the coming months.

In the meantime, the path to your first program milestone begins today. By prioritizing these user experience guidelines now, you're investing in the long-term value of your game and ensuring it's built to thrive for every player. Head over to Play Console to start testing Sidekick and take the next step in your Level Up journey.

11 Mar 2026 8:02pm GMT

Expanding our stage for PC and paid titles

Posted by Aurash Mahbod, VP and GM, Games on Google Play






Google Play is proud to be the home of over 200,000 games, many of which defined the mobile-first era. But as cross-platform play becomes the standard, we are evolving our ecosystem to match the scale of your ambitions. In recent years, we focused on elevating Android gaming quality while significantly deepening our support for native PC titles.

We know that maximizing your game's reach across different platforms is complex. The Level Up program serves as your strategic roadmap, helping you prioritize optimizations that drive great experiences on Android. Building on this foundation, we're doubling down on our investment to make Play the most accessible home for every category of play. We're adding new tools for paid games and making the journey from PC game discovery to purchase seamless. Keep reading to learn more about how we're creating a bigger stage for your games.


Scale your discovery across mobile and PC platforms

Building a bigger stage starts with making your games easier to find-and easier to buy-no matter which device your players prefer. We're expanding your reach by bringing cross-platform discovery directly to the mobile storefront.

PC in the Games tab and PC badging expands your game's reach


'Buy once play anywhere' pricing to sell your games across devices

Lower the purchase barrier with Game Trials

To help you convert high-intent buyers with less friction, we're introducing Game Trials, a feature that enables players to experience your game for a limited time before making a purchase on mobile. Accessible directly from your game's store listing, Game Trials provide a fast track for players to start exploring your world with a single tap. Game Trials are now in testing with select titles, and we'll roll them out to more titles soon.

  • To ensure this is low maintenance for you, Game Trials is added directly into your Android App Bundle. This enables you to offer a high quality trial without the burden of a separate codebase or a demo version of your app.

  • Play ensures trials are secure and seamless. Game Trials are limited to once per user, and Play protects your game while the trial is active. When it ends, players can purchase your game and keep their progress.

  • We're also working on tools that will give you more control-such as specifying a custom time limit or an in-game event to conclude the trial.



Game Trial for DREDGE to help convert high-intent buyers

Diversify your revenue with a dedicated player community on Play Pass

Play Pass is another way to diversify revenue and grow your player audience. It has been a strong launchpad for indie hits such as Isle of Arrows, Slay the Spire, and Dead Cells. With Play Pass, you can reach highly dedicated players seeking a more curated gaming experience, free of ads and in-app purchases. To help you deepen engagement, paid titles on Play Pass can now opt in to Google Play Games on PC - making it easy for players to find and play your games on a larger screen. Later this year, you can nominate your game through a streamlined opt-in process directly in Play Console.



Drive long term sales with Wishlists and Discounts

Wishlists and discounts are among the most effective ways to capture player intent and drive long-term sales. To support players at every stage of their purchase journey, we're integrating them directly into Play. Players can save titles to their wishlist and manage them from library settings. To keep your game top of mind, players will receive automated notifications for your latest discounts, starting with mobile and expanding soon to PC games.

Wishlist and discount notifications drive long-term sales, rolling out today

How leading studios are finding a new path to success on Play

We're thrilled to welcome Sledding Game, 9 Kings, Potion Craft, Moonlight Peaks, and Low Budget Repairs to Play [1]. It marks an exciting expansion of our catalog and a step forward in our mission to build a bigger gaming ecosystem for all developers. This growth is fueled by our developer community, whose feedback continues to shape our roadmap and help us better support your success.

Sledding Game, 9 Kings, Potion Craft, Moonlight Peaks, and Low Budget Repairs are coming to Play.

That mission brings us to GDC and the Independent Games Festival (IGF) Awards [2], where the next generation of games awaits! This year, we're inviting you to come along for the ride as we go backstage to chat with the finalists and winners, sharing the moments of triumph and the creative stories behind their development. Not joining us at GDC? You can take the next step in your journey to launch your game on Google Play today.


1. Sledding Game, 9 Kings, Potion Craft, and Moonlight Peaks are coming to Google Play in 2026. Low Budget Repairs is scheduled for release in 2027. [Back]

2. Independent Games Festival (IGF) Awards is hosted by Game Developers Conference (GDC) and requires a valid GDC pass for entry. [Back]

11 Mar 2026 8:02pm GMT

10 Mar 2026

feedAndroid Developers Blog

Boosting Android Performance: Introducing AutoFDO for the Kernel

Posted by Yabin Cui, Software Engineer









We are the Android LLVM toolchain team. One of our top priorities is to improve Android performance through optimization techniques in the LLVM ecosystem. We are constantly searching for ways to make Android faster, smoother, and more efficient. While much of our optimization work happens in userspace, the kernel remains the heart of the system. Today, we're excited to share how we are bringing Automatic Feedback-Directed Optimization (AutoFDO) to the Android kernel to deliver significant performance wins for users.


What is AutoFDO?


During a standard software build, the compiler makes thousands of small decisions, such as whether to inline a function and which branch of a conditional is likely to be taken, based on static code hints. While these heuristics are useful, they don't always accurately predict how code executes during real-world phone usage.

AutoFDO changes this by using real-world execution patterns to guide the compiler. These patterns represent the most common instruction execution paths the code takes during actual use, captured by recording the CPU's branching history. While this data can be collected from fleet devices, for the kernel we synthesize it in a lab environment using representative workloads, such as running the top 100 most popular apps. We use a sampling profiler to capture this data, identifying which parts of the code are 'hot' (frequently used) and which are 'cold'. When we rebuild the kernel with these profiles, the compiler can make much smarter optimization decisions tailored to actual Android workloads.
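As a toy illustration of "hot vs. cold" feedback, consider inlining: the profile maps call sites to how often they executed in recorded runs, and hot call sites get aggressive optimization while cold ones keep conservative static heuristics. The names and threshold below are illustrative only, not LLVM's actual implementation.

```kotlin
// Toy model of feedback-directed inlining. Sampled execution counts stand
// in for the branching history the profiler records; the threshold that
// separates "hot" from "cold" is made up for illustration.
data class CallSite(val caller: String, val callee: String)

fun inlineDecisions(
    sampledCounts: Map<CallSite, Long>,
    hotThreshold: Long = 1_000,
): Map<CallSite, Boolean> =
    sampledCounts.mapValues { (_, count) -> count >= hotThreshold }
```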


Real-World Performance Wins

We have seen impressive improvements across key Android metrics by leveraging profiles from controlled lab environments. These profiles were collected using app crawling and launching, and measured on Pixel devices across the 6.1, 6.6, and 6.12 kernels.

The most noticeable improvements are listed below. Details on the AutoFDO profiles for these kernel versions can be found in the respective Android kernel repositories for android16-6.12 and android15-6.6 kernels.





These aren't just theoretical numbers. They translate to a snappier interface, faster app switching, extended battery life, and an overall more responsive device for the end user.

How It Works: The Pipeline

Our deployment strategy involves a sophisticated pipeline to ensure profiles stay relevant and performance remains stable.


Step 1: Profile Collection

While we rely on our internal test fleet to profile userspace binaries, we shifted to a controlled lab environment for the Generic Kernel Image (GKI). Decoupling profiling from the device release cycle allows for flexible, immediate updates independent of deployed kernel versions. Crucially, tests confirm that this lab-based data delivers performance gains comparable to those from real-world fleets.
  • Tools & Environment: We flash test devices with the latest kernel image and use simpleperf to capture instruction execution streams. This process relies on hardware capabilities to record branching history, specifically utilizing ARM Embedded Trace Extension (ETE) and ARM Trace Buffer Extension (TRBE) on Pixel devices.
  • Workloads: We construct a representative workload using the top 100 most popular apps from the Android App Compatibility Test Suite (C-Suite). To capture the most accurate data, we focus on:
    • App Launching: Optimizing for the most visible user delays
    • AI-Driven App Crawling: Simulating continuous, evolving user interactions
    • System-Wide Monitoring: Capturing not only foreground app activities, but also critical background workloads and inter-process communications
  • Validation: This synthesized workload shows an 85% similarity to execution patterns collected from our internal fleet.
  • Targeted Data: By repeating these tests sufficiently, we capture high-fidelity execution patterns that accurately represent real-world user interaction with the most popular applications. Furthermore, this extensible framework allows us to seamlessly integrate additional workloads and benchmarks to broaden our coverage.

Step 2: Profile Processing

We post-process the raw trace data to ensure it is clean, effective, and ready for the compiler.
  • Aggregation: We consolidate data from multiple test runs and devices into a single system view.
  • Conversion: We convert raw traces into the AutoFDO profile format, filtering out unwanted symbols as needed.
  • Profile Trimming: We trim profiles to remove data for "cold" functions, allowing them to use standard optimization. This prevents regressions in rarely used code and avoids unnecessary increases in binary size.
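The aggregation and trimming steps above can be pictured with a stdlib-only sketch (made-up function names and threshold, not the actual pipeline tooling): merge sample counts from multiple runs into one view, then drop entries below a "cold" threshold so those functions fall back to standard optimization and the profile stays small.

```kotlin
// Aggregation: consolidate per-function sample counts from multiple
// test runs and devices into a single system view.
fun aggregate(runs: List<Map<String, Long>>): Map<String, Long> =
    runs.flatMap { it.entries }
        .groupingBy { it.key }
        .fold(0L) { acc, entry -> acc + entry.value }

// Trimming: remove "cold" entries so rarely used code keeps the
// conservative default optimization and binary size stays in check.
fun trimProfile(samples: Map<String, Long>, coldThreshold: Long = 100): Map<String, Long> =
    samples.filterValues { it >= coldThreshold }
```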

Step 3: Profile Testing

Before deployment, profiles undergo rigorous verification to ensure they deliver consistent performance wins without stability risks.
  • Profile & Binary Analysis: We strictly compare the new profile's content (including hot functions, sample counts, and profile size) against previous versions. We also use the profile to build a new kernel image, analyzing binaries to ensure that changes to the text section are consistent with expectations.
  • Performance Verification: We run targeted benchmarks on the new kernel image. This confirms that it maintains the performance improvements established by previous baselines.
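Comparing the hot-function content of two profile versions, as described above, can be approximated with a Jaccard-style overlap check. The function below is an illustrative assumption, not the actual verification tooling:

```kotlin
// Fraction of hot functions shared between two profile versions
// (intersection over union). A sharp drop between releases would be
// a signal to review the new profile before it ships.
fun hotFunctionOverlap(oldHot: Set<String>, newHot: Set<String>): Double {
    if (oldHot.isEmpty() && newHot.isEmpty()) return 1.0
    val union = oldHot union newHot
    val intersection = oldHot intersect newHot
    return intersection.size.toDouble() / union.size
}
```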

Continuous Updates

Code naturally "drifts" over time, so a static profile would eventually lose its effectiveness. To maintain peak performance, we run the pipeline continuously to drive regular updates:
  • Regular Refresh: We refresh profiles in Android kernel LTS branches ahead of each GKI release, ensuring every build includes the latest profile data.
  • Future Expansion: We are currently delivering these updates to the android16-6.12 and android15-6.6 branches and will expand support to newer GKI versions, such as the upcoming android17-6.18.

Ensuring Stability

A common question with profile-guided optimization is whether it introduces stability risks. Because AutoFDO primarily influences compiler heuristics, such as function inlining and code layout, rather than altering the source code's logic, it preserves the functional integrity of the kernel. This technology has already been proven at scale, serving as a standard optimization for Android platform libraries, ChromeOS, and Google's own server infrastructure for years.

To further guarantee consistent behavior, we apply a "conservative by default" strategy. Functions not captured in our high-fidelity profiles are optimized using standard compiler methods. This ensures that the "cold" or rarely executed parts of the kernel behave exactly as they would in a standard build, preventing performance regressions or unexpected behaviors in corner cases.

Looking Ahead

We are currently deploying AutoFDO across the android16-6.12 and android15-6.6 branches. Beyond this initial rollout, we see several promising avenues to further enhance the technology:

  • Expanded Reach: We look forward to deploying AutoFDO profiles to newer GKI kernel versions and additional build targets beyond the current aarch64 support.

  • GKI Module Optimization: Currently, our optimization is focused on the main kernel binary (vmlinux). Expanding AutoFDO to GKI modules could bring performance benefits to a larger portion of the kernel subsystem.

  • Vendor Module Support: We are also interested in supporting AutoFDO for vendor modules built using the Driver Development Kit (DDK). With support already available in our build system (Kleaf) and profiling tools (simpleperf), this allows vendors to apply these same optimization techniques to their specific hardware drivers.

  • Broader Profile Coverage: There is potential to collect profiles from a wider range of Critical User Journeys (CUJs) to optimize them.

By bringing AutoFDO to the Android kernel, we're ensuring that the very foundation of the OS is optimized for the way you use your device every day.


10 Mar 2026 11:00pm GMT

05 Mar 2026

feedAndroid Developers Blog

Instagram and Facebook deliver instant playback and boost user engagement with Media3 PreloadManager

Posted by Mayuri Khinvasara Khabya, Developer Relations Engineer (LinkedIn and X)

In the dynamic world of social media, user attention is won or lost quickly. Meta's apps, Facebook and Instagram, are among the world's largest social platforms, serving billions of users globally. For Meta, delivering video seamlessly isn't just a feature: it's the core of the user experience. Short-form videos, particularly in Facebook Newsfeed and Instagram Reels, have become a primary driver of engagement, enabling creative expression and rapid content consumption while connecting and entertaining people around the world.


This blog post takes you through the journey of how Meta transformed video playback for billions by delivering true instant playback.


The latency gap in short form videos


Short-form video drives fast-paced interaction as users quickly scroll through their feeds. Delivering seamless transitions between videos in an ever-changing feed introduces unique hurdles for instantaneous playback, so solutions must go beyond traditional disk caching and standard reactive playback strategies.


The path forward with Media3 PreloadManager


To address the shift in consumption habits driven by the rise of short-form content, and the limitations of traditional long-form playback architectures, Jetpack Media3 introduced PreloadManager. This component allows developers to move beyond disk caching, offering granular control and customization to keep media ready in memory before the user hits play. Read this blog series to understand the technical details of media playback with PreloadManager.


How Meta achieved true instant playback

Existing Complexities


Previously, Meta used a combination of warmup (to get players ready) and prefetch (to cache content on disk) for video delivery. While these methods helped improve network efficiency, they introduced significant challenges. Warmup required instantiating multiple player instances sequentially, which consumed significant memory and limited preloading to only a few videos. This high resource demand meant a more scalable, robust solution was needed to deliver the instant playback expected on modern, fast-scrolling social feeds.


Integrating Media3 PreloadManager

To achieve truly instant playback, Meta's Media Foundation Client team integrated the Jetpack Media3 PreloadManager into Facebook and Instagram. They chose the DefaultPreloadManager to unify their preloading and playback systems. This integration required refactoring Meta's existing architecture to enable efficient resource sharing between the PreloadManager and ExoPlayer instances. This strategic shift provided a key architectural advantage: the ability to parallelize preloading tasks and manage many videos using a single player instance. This dramatically increased preloading capacity while eliminating the high memory complexities of their previous approach.







Optimization and Performance Tuning

The team then performed extensive testing and iterations to optimize performance across Meta's diverse global device ecosystem. Initial aggressive preloading sometimes caused issues, including increased memory usage and scroll performance slowdowns. To solve this, they fine-tuned the implementation by using careful memory measurements, considering device fragmentation, and tailoring the system to specific UI patterns.


Fine tuning implementation to specific UI patterns

Meta applied different preloading strategies and tailored the behavior to match the specific UI patterns of each app:


  • Facebook Newsfeed: The UI prioritizes the video currently coming into view. The manager preloads only the current video to ensure it starts the moment the user pauses their scroll. This "current-only" focus minimizes data and memory footprints in an environment where users may see many static posts between videos. While the system is presently designed to preload just the video in view, it can be adjusted to also preload upcoming videos.


  • Instagram Reels: This is a pure video environment where users swipe vertically. For this UI, the team implemented an "adjacent preload" strategy. The PreloadManager keeps the videos immediately adjacent to the current Reel ready in memory. This bi-directional approach ensures that whether a user swipes up or down, the transition remains instant and smooth. The result was a dramatic improvement in Quality of Experience (QoE), including improvements in Playback Start and Time to First Frame for the user.
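Both strategies can be modeled as choosing a preload window around the item in view. The sketch below is illustrative of the idea only, not Meta's implementation; the function name and radius parameter are assumptions:

```kotlin
// Indices to preload around the current item: radius 0 models the
// "current-only" Newsfeed strategy (nothing beyond the item in view),
// radius 1 models the bi-directional "adjacent preload" Reels strategy.
fun preloadCandidates(current: Int, itemCount: Int, radius: Int): List<Int> =
    ((current - radius)..(current + radius))
        .filter { it in 0 until itemCount && it != current }
```

The returned indices would then be handed to the preloading machinery, which keeps those items ready in memory.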


Scaling for a diverse global device ecosystem

Scaling a high-performance video stack across billions of devices requires more than just aggressive preloading; it requires intelligence. Meta faced initial challenges with memory pressure and scroll lag, particularly on mid-to-low-end hardware. To solve this, they built a Device Stress Detection system around the Media3 implementation. The apps now monitor I/O and CPU signals in real-time. If a device is under heavy load, preloading is paused to prioritize UI responsiveness.


This device-aware optimization ensures that the benefit of instant playback doesn't come at the cost of system stability, allowing even users on older hardware to experience a smoother, uninterrupted feed.
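A device-stress gate of this kind can be sketched as a simple predicate over load signals. The signal names and thresholds below are assumptions for illustration, not Meta's actual heuristics:

```kotlin
// Hypothetical real-time load signals, normalized to 0.0..1.0.
data class DeviceSignals(val cpuLoad: Double, val ioWait: Double)

// Pause preloading under heavy load so UI responsiveness wins;
// threshold values here are purely illustrative.
fun shouldPreload(
    signals: DeviceSignals,
    cpuThreshold: Double = 0.8,
    ioThreshold: Double = 0.5
): Boolean = signals.cpuLoad < cpuThreshold && signals.ioWait < ioThreshold
```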




Architectural wins and code health

Beyond the user-facing metrics, the migration to Media3 PreloadManager offered long-term architectural benefits. While the integration and tuning process required multiple iterations to balance performance, the resulting codebase is more maintainable. The team found that the PreloadManager API integrated cleanly with the existing Media3 ecosystem, allowing for better resource sharing. For Meta, the adoption of Media3 PreloadManager was a strategic investment in the future of video consumption.


By adopting preloading and adding device-intelligent gates, they successfully increased total watch time on their apps and improved the overall engagement of their global community.


Resulting impact on Instagram and Facebook


The proactive architecture delivered immediate and measurable improvements across both platforms.


  • Facebook experienced faster playback starts, decreased playback stall rates, and a reduction in bad sessions (such as rebuffering, delayed start times, or lower quality), which overall resulted in higher watch time.


  • Instagram saw faster playback starts and an increase in total watch time. Eliminating join latency (the interval from the user's action to the first frame displayed) directly increased engagement metrics, and reduced buffering meant fewer interruptions, so users watched more content.


Key engineering learnings at scale


As media consumption habits evolve, the demand for instant experiences will continue to grow. Implementing proactive memory management and optimizing for scale and device diversity ensures your application can meet these expectations efficiently.


  • Prioritize intelligent preloading

Focus on delivering a reliable experience by minimizing stutters and loading times through preloading. Rather than relying on disk caching alone, memory-level preloading ensures that content is ready the moment a user interacts with it.


  • Align your implementation with UI patterns

Customize preloading behavior to match your app's UI. For example, use a "current-only" focus for mixed feeds like Facebook to save memory, and an "adjacent preload" strategy for vertical environments like Instagram Reels.

  • Leverage Media3 for long-term code health

Integrating with Media3 APIs rather than a custom caching solution allows for better resource sharing between the player and the PreloadManager, enabling you to manage multiple videos with a single player instance. The result is a future-proof codebase that engineering teams can maintain and optimize over time while benefiting from the latest feature updates.

  • Implement device aware optimizations

Broaden your market reach by testing on various devices, including mid-to-low-end models. Use real-time signals like CPU, memory, and I/O to adapt features and resource usage dynamically.

Learn More


To get started and learn more, visit


Now you know the secrets for instant playback. Go try them out!



05 Mar 2026 6:03pm GMT

Elevating AI-assisted Android development and improving LLMs with Android Bench

Posted by Matthew McCullough, VP of Product Management, Android Developer


We want to make it faster and easier for you to build high-quality Android apps, and one way we're helping you be more productive is by putting AI at your fingertips. We know you want AI that truly understands the nuances of the Android platform, which is why we've been measuring how LLMs perform Android development tasks. Today we released the first version of Android Bench, our official leaderboard of LLMs for Android development.


Our goal is to provide model creators with a benchmark to evaluate LLM capabilities for Android development. By establishing a clear, reliable baseline for what high-quality Android development looks like, we're helping model creators identify gaps and accelerate improvements. That empowers developers to work more efficiently, with a wider range of helpful models to choose from for AI assistance, and will ultimately lead to higher-quality apps across the Android ecosystem.


Designed with real-world Android development tasks

We created the benchmark by curating a task set against a range of common Android development areas. It is composed of real challenges of varying difficulty, sourced from public GitHub Android repositories. Scenarios include resolving breaking changes across Android releases, domain-specific tasks like networking on wearables, and migrating to the latest version of Jetpack Compose, to name a few.


Each evaluation attempts to have an LLM fix the issue reported in the task, which we then verify using unit or instrumentation tests. This model-agnostic approach allows us to measure a model's ability to navigate complex codebases, understand dependencies, and solve the kind of problems you encounter every day.


We validated this methodology with several LLM makers, including JetBrains.


"Measuring AI's impact on Android is a massive challenge, so it's great to see a framework that's this sound and realistic. While we're active in benchmarking ourselves, Android Bench is a unique and welcome addition. This methodology is exactly the kind of rigorous evaluation Android developers need right now."

- Kirill Smelov, Head of AI Integrations at JetBrains.


The first Android Bench results

For this initial release, we wanted to measure pure model performance rather than agentic or tool use. The models successfully completed 16-72% of the tasks. This wide range demonstrates that some LLMs already have a strong baseline of Android knowledge, while others have more room for improvement. Regardless of where the models stand today, we anticipate continued improvement as we encourage LLM makers to enhance their models for Android development.


The LLM with the highest average score for this first release is Gemini 3.1 Pro, followed closely by Claude Opus 4.6. You can try all of the models we evaluated for AI assistance for your Android projects by using API keys in the latest stable version of Android Studio.



Providing developers and LLM makers with transparency

We value an open and transparent approach, so we made our methodology, dataset, and test harness publicly available on GitHub.


One challenge for any public benchmark is the risk of data contamination, where models may have seen evaluation tasks during their training process. We have taken measures to ensure our results reflect genuine reasoning rather than memorization or guessing, including a thorough manual review of agent trajectories and the integration of a canary string to discourage training on the dataset.


Looking ahead, we will continue to evolve our methodology to preserve the integrity of the dataset, while also making improvements for future releases of the benchmark, for example by growing the quantity and complexity of tasks.


We're looking forward to how Android Bench can improve AI assistance long-term. Our vision is to close the gap between concept and quality code. We're building the foundation for a future where no matter what you imagine, you can build it on Android.


05 Mar 2026 2:03pm GMT

Battery Technical Quality Enforcement is Here: How to Optimize Common Wake Lock Use Cases

Posted by Alice Yuan, Senior Developer Relations Engineer


In recognition that excessive battery drain is top of mind for Android users, Google has been taking significant steps to help developers build more power-efficient apps. On March 1st, 2026, the Google Play Store began rolling out wake lock technical quality treatments to curb excessive battery drain. The treatment will roll out gradually to impacted apps over the following weeks. Apps that consistently exceed the "Excessive Partial Wake Lock" threshold in Android vitals may see tangible impacts on their store presence, including warnings on their store listing and exclusion from discovery surfaces such as recommendations.

Users may see a warning on your store listing if your app exceeds the bad behavior threshold.

This initiative elevated battery efficiency to a core vital metric alongside stability metrics like crashes and ANRs. The "bad behavior threshold" is defined as holding a non-exempted partial wake lock for at least two hours on average while the screen is off in more than 5% of user sessions in the past 28 days. A wake lock is exempted if it is a system held wake lock that offers clear user benefits that cannot be further optimized, such as audio playback, location access, or user-initiated data transfer. You can view the full definition of excessive wake locks in our Android vitals documentation.
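As a back-of-the-envelope illustration of the definition above (not how Play actually computes the metric from aggregated device data), the flagged fraction of sessions can be modeled as:

```kotlin
// Fraction of user sessions whose cumulative non-exempt, screen-off
// wake lock time meets or exceeds the threshold. An app crosses the
// bad behavior threshold when this fraction exceeds 0.05 over the
// 28-day window. Inputs here are hypothetical per-session hours.
fun excessiveSessionFraction(
    sessionWakeLockHours: List<Double>,
    thresholdHours: Double = 2.0
): Double =
    if (sessionWakeLockHours.isEmpty()) 0.0
    else sessionWakeLockHours.count { it >= thresholdHours }.toDouble() /
        sessionWakeLockHours.size
```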

As part of our ongoing initiative to improve battery life across the Android ecosystem, we have analyzed thousands of apps and how they use partial wake locks. While wake locks are sometimes necessary, we often see apps holding them inefficiently or unnecessarily, when more efficient solutions exist. This blog will go over the most common scenarios where excessive wake locks occur and our recommendations for optimizing wake locks. We have already seen measurable success from partners like WHOOP, who leveraged these recommendations to optimize their background behavior.

Using a foreground service vs partial wake locks

We've often seen developers struggle to understand the difference between two background-execution concepts: foreground services and partial wake locks.

A foreground service is a lifecycle API that signals to the system that an app is performing user-perceptible work and should not be killed to reclaim memory, but it does not automatically prevent the CPU from sleeping when the screen turns off. In contrast, a partial wake lock is a mechanism specifically designed to keep the CPU running even while the screen is off.

While a foreground service is often necessary to continue user-initiated work, manually acquiring a partial wake lock is only necessary in conjunction with that foreground service, and only for the duration of the CPU activity. In addition, you don't need a wake lock if you're already using an API that keeps the device awake.

Refer to the flow chart in Choose the right API to keep the device awake to ensure you have a strong understanding of what tool to use to avoid acquiring a wake lock in scenarios where it's not necessary.
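One way to honor "only for the duration of the CPU activity" is to scope the wake lock to the work it protects. The helper below is a minimal sketch: the acquire/release lambdas stand in for PowerManager.WakeLock.acquire (ideally with a timeout) and release, and the try/finally guarantees release even if the work throws:

```kotlin
// Runs `work` with the lock held, releasing it no matter how the
// work exits. The acquire/release actions are passed in so the
// pattern is testable; in an app they would wrap a real WakeLock.
fun <T> withScopedWakeLock(
    acquire: () -> Unit,
    release: () -> Unit,
    work: () -> T
): T {
    acquire()
    try {
        return work()
    } finally {
        release()
    }
}
```

On Android, you would pass something like `{ wakeLock.acquire(TIMEOUT_MS) }` and `{ wakeLock.release() }` from a `PowerManager.WakeLock` (names here are illustrative); the timeout acts as a safety net against leaks.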

Third party libraries acquiring wake locks

It is common for an app to discover that it is flagged for excessive wake locks held by a third-party SDK or system API acting on its behalf. To identify and resolve these wake locks, review the common scenarios below and apply the recommended alternatives.

Common wake lock scenarios

Below is a breakdown of some of the specific use cases we have reviewed, along with the recommended path to optimize your wake lock implementation.

User-Initiated Upload or Download

Example use cases:

  • Video streaming apps where the user triggers a download of a large file for offline access.

  • Media backup apps where the user triggers uploading their recent photos via a notification prompt.

How to reduce wake locks:

  • Do not acquire a manual wake lock. Instead, use the User-Initiated Data Transfer (UIDT) API. This is the designated path for long running data transfer tasks initiated by the user, and it is exempted from excessive wake lock calculations.

One-Time or Periodic Background Syncs

Example use cases:

How to reduce wake locks:

// Observe the worker's WorkInfo and log why the system stopped it
// (WorkInfo.stopReason requires WorkManager 2.9+), which helps
// pinpoint timeouts and other sources of inefficient work.
workManager.getWorkInfoByIdFlow(syncWorker.id)
  .collect { workInfo ->
      if (workInfo != null) {
        val stopReason = workInfo.stopReason
        logStopReason(syncWorker.id, stopReason)
      }
  }

Bluetooth Communication

Example use cases:

How to reduce wake locks:

Location Tracking

Example use cases:

How to reduce wake locks:


override fun onCreate(savedInstanceState: Bundle?) {
    locationCallback = object : LocationCallback() {
        override fun onLocationResult(locationResult: LocationResult?) {
            locationResult ?: return
            // System wakes up CPU for short duration
            for (location in locationResult.locations){
                // Store data in memory to process at another time
            }
        }
    }
}

High Frequency Sensor Monitoring

Example use cases:

How to reduce wake locks:

val accelerometer = sensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER)

sensorManager.registerListener(this,
                 accelerometer,
                 samplingPeriodUs, // How often to sample data
                 maxReportLatencyUs // Key for sensor batching 
              )
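The maxReportLatencyUs parameter above is what enables batching: the sensor hardware buffers samples in its FIFO and wakes the CPU once per batch instead of once per sample. The arithmetic is straightforward (the helper below is illustrative, not part of the SensorManager API):

```kotlin
// Approximate number of samples the hardware FIFO can accumulate
// before it must wake the CPU to deliver them.
fun samplesPerBatch(samplingPeriodUs: Long, maxReportLatencyUs: Long): Long =
    maxReportLatencyUs / samplingPeriodUs
```

For example, at a 20 ms sampling period (50 Hz) with a 5 s latency budget, the CPU wakes roughly once per 250 samples rather than 250 separate times, assuming the sensor's FIFO is large enough.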

Remote Messaging

Example use cases:

  • Video or sound monitoring companion apps that need to monitor events that occur on an external device connected using a local network.

  • Messaging apps that maintain a network socket connection with the desktop variant.

How to reduce wake locks:

  • If the network events can be processed on the server side, use FCM to receive information on the client. You may choose to schedule an expedited worker if additional processing of FCM data is required.

  • If events must be processed on the client side via a socket connection, a wake lock is not needed to listen for event interrupts. When data packets arrive at the Wi-Fi or Cellular radio, the radio hardware triggers a hardware interrupt in the form of a kernel wake lock. You may then choose to schedule a worker or acquire a wake lock to process the data.

  • For example, if you're using ktor-network to listen for data packets on a network socket, you should only acquire a wake lock when packets have been delivered to the client and need to be processed.

val readChannel = socket.openReadChannel()
while (!readChannel.isClosedForRead) {
    // CPU can safely sleep here while waiting for the next packet
    val packet = readChannel.readRemaining(1024) 
    if (!packet.isEmpty) {
         // Data Arrived: The system woke the CPU and we should keep it awake via manual wake lock (urgent) or scheduling a worker (non-urgent)
         performWorkWithWakeLock { 
              val data = packet.readBytes()
              // Additional logic to process data packets
         }
    }
}

Summary

By adopting these recommended solutions for common use cases like background syncs, location tracking, sensor monitoring and network communication, developers can work towards reducing unnecessary wake lock usage. To continue learning, read our other technical blog post or watch our technical video on how to discover and debug wake locks: Optimize your app battery using Android vitals wake lock metric. Also, consult our updated wakelock documentation. To help us continue improving our technical resources, please share any additional feedback on our guidance in our documentation feedback survey.

05 Mar 2026 12:00am GMT

04 Mar 2026

feedAndroid Developers Blog

How WHOOP decreased excessive partial wake lock sessions by over 90%

Posted by Breana Tate, Developer Relations Engineer, Mayank Saini, Senior Android Engineer, Sarthak Jagetia, Senior Android Engineer and Manmeet Tuteja, Android Engineer II


Building an Android app for a wearable means the real work starts when the screen turns off. WHOOP helps members understand how their body responds to training, recovery, sleep, and stress, and for the many WHOOP members on Android, reliable background syncing and connectivity are what make those insights possible.


Earlier this year, Google Play released a new metric in Android vitals: Excessive partial wake locks. This metric measures the percentage of user sessions where cumulative, non-exempt wake lock usage exceeds 2 hours in a 24-hour period. The aim of this metric is to help you identify and address possible sources of battery drain, which is crucial for delivering a great user experience.


Beginning March 1, 2026, apps that consistently fail to meet the quality threshold may be excluded from Google Play discovery surfaces. A warning may also be placed on the Google Play Store listing, indicating the app might use more battery than expected.


According to Mayank Saini, Senior Android Engineer at WHOOP, this "presented the team with an opportunity to raise the bar on Android efficiency" after Android vitals flagged the app's excessive partial wake lock rate at 15%, which exceeded the recommended 5% threshold.




The team viewed the Android vitals metric as a clear signal that their background work was holding the CPU awake longer than necessary. Resolving this would allow them to continue to deliver a great user experience while simultaneously decreasing wasted background time and maintaining reliable and timely Bluetooth connectivity and syncing.


Identifying the issue


To figure out where to get started, the team first turned to Android vitals for more insight into which wake locks were affecting the metric. By consulting the Android vitals excessive partial wake locks dashboard, they were able to identify the biggest contributor to excessive partial wake locks as one of their WorkManager workers (identified in the dashboard as androidx.work.impl.background.systemjob.SystemJobService). To support the WHOOP "always-on experience", the app uses WorkManager for background tasks like periodic syncing and delivering recurring updates to the wearable.


While the team was aware that WorkManager acquires a wake lock while executing tasks in the background, they did not have visibility into how all of their background work (beyond just WorkManager) was distributed until the introduction of the excessive partial wake locks metric in Android vitals.


With the dashboard identifying WorkManager as the main contributor, the team was then able to focus their efforts on identifying which of their workers was contributing the most and work towards resolving the issue.


Making use of internal metrics and data to better narrow down the cause


WHOOP already had internal infrastructure set up to monitor WorkManager metrics. They periodically monitor:

  1. Average Runtime: For how long does the worker run?

  2. Timeouts: How often is the worker timing out instead of completing?

  3. Retries: How often does the worker retry if the work timed out or failed?

  4. Cancellations: How often was the work cancelled?


Tracking more than just worker successes and failures gives the team visibility into their work's efficiency.


The internal metrics flagged high average runtime for a select few workers, enabling them to narrow the investigation down even further.


In addition to their internal metrics, the team also used Android Studio's Background Task Inspector to inspect and debug the workers of interest, with a specific focus on associated wake locks, to align with the metric flagged in Android vitals.


Investigation: Distinguishing between worker variants


WHOOP uses both one-time and periodic scheduling for some workers. This allows the app to reuse the same Worker logic for identical tasks with the same success criteria, differing only in timing.


Using their internal metrics made it possible to narrow their search to a specific worker, but they couldn't tell if the bug occurred when the worker was one-time, periodic, or both. So, they rolled out an update to use WorkManager's setTraceTag method to distinguish between the one-time and periodic variants of the same Worker.


This extra detail would allow them to definitively identify which Worker variant (periodic or one-time) was contributing the most to sessions with excessive partial wake locks. However, the team was surprised when the data revealed that neither variant appeared to be contributing more than the other.


Manmeet Tuteja, Android Engineer II at WHOOP said "that split also helped us confirm the issue was happening in both variants, which pointed away from scheduling configuration and toward a shared business logic problem inside the worker implementation."




Diving deeper on worker behavior and fixing the root cause


With the knowledge that they needed to take a look at logic within the worker, the team re-examined worker behavior for the workers that had been flagged during their investigation. Specifically, they were looking for instances in which work may have been getting stuck and not completing.


All of this culminated in finding the root cause of the excessive wake locks:


A CoroutineWorker that was designed to wait for a connection to the WHOOP sensor before proceeding.


If the work started with no sensor connected, whoopSensorFlow (which indicates whether the sensor is connected) was null. The SensorWorker didn't treat this as an early-exit condition and kept running, effectively waiting indefinitely for a connection. As a result, WorkManager held a partial wake lock until the work timed out, leading to high background wake lock usage and frequent, unwanted rescheduling of the SensorWorker.


To address this, the WHOOP team updated the worker logic to check the connection status before attempting to execute the core business logic.


If the sensor isn't available, the worker exits, avoiding a timeout scenario and releasing the wake lock. The following code snippet shows the solution:


class SensorWorker(appContext: Context, params: WorkerParameters) : CoroutineWorker(appContext, params) {
   override suspend fun doWork(): Result {
      ...
      // Check the sensor state and perform work, or fail fast so
      // WorkManager releases the wake lock instead of timing out
      return whoopSensorFlow.replayCache
           .firstOrNull()
           ?.let { cachedData ->
               processSensorData(cachedData)
               Result.success()
           } ?: Result.failure()
   }
}


Achieving a 90% decrease in sessions with excessive partial wake locks


After rolling out the fix, the team continued to monitor the Android vitals dashboard to confirm the impact of the changes.


Ultimately, WHOOP saw their excessive partial wake lock percentage drop from 15% to less than 1% just 30 days after implementing the changes to their Worker.


As a result of the changes, the team has seen fewer instances of work timing out without completing, resulting in lower average runtimes.


The WHOOP team also shared advice for other developers who want to improve their background work's efficiency.

Want to dive deeper into the details and hear more insights from the developers? Check out the WHOOP team's blog.


Get Started

If you're interested in trying to reduce your app's excessive partial wake locks or trying to improve worker efficiency, view your app's excessive partial wake locks metric in Android vitals, and review the wake locks documentation for more best practices and debugging strategies.

04 Mar 2026 6:00pm GMT