16 May 2025

feedTalkAndroid

After Swollen Pixel 7a Batteries, the Pixel 6a Is Catching Fire

It seems that the Pixel "a"-series might be full of battery woes.

16 May 2025 5:30pm GMT

Board Kings Free Rolls – Updated Every Day!

Run out of rolls for Board Kings? Find links for free rolls right here, updated daily!

16 May 2025 3:35pm GMT

Thanks to T-Mobile, Seeing Tom Cruise’s Movie Is Mission Possible

Looking forward to seeing the latest in Tom Cruise's death-defying stunts? T-Mobile makes that easier.

16 May 2025 3:35pm GMT

Coin Tales Free Spins – Updated Every Day!

Tired of running out of Coin Tales Free Spins? We update our links daily, so you won't have that problem again!

16 May 2025 3:34pm GMT

Avatar World Codes – May 2025 – Updated Daily

Find all the latest Avatar World Codes right here in this article! Read on for more!

16 May 2025 3:32pm GMT

Coin Master Free Spins & Coins Links

Find all the latest Coin Master free spins right here! We update daily, so be sure to check back!

16 May 2025 3:32pm GMT

Monopoly Go – Free Dice Links Today (Updated Daily)

If you keep on running out of dice, we have just the solution! Find all the latest Monopoly Go free dice links right here!

16 May 2025 3:30pm GMT

This Netflix drama is perfect for fans of Yellowstone

Netflix's new western drama "Ransom Canyon" has captivated audiences worldwide since its April 17 release. Social media is…

16 May 2025 3:30pm GMT

Family Island Free Energy Links (Updated Daily)

Tired of running out of energy on Family Island? We have all the latest Family Island Free Energy links right here, and we update these daily!

16 May 2025 3:28pm GMT

Crazy Fox Free Spins & Coins (Updated Daily)

If you need free coins and spins in Crazy Fox, look no further! We update our links daily to bring you the newest working links!

16 May 2025 3:23pm GMT

Match Masters Free Gifts, Coins, And Boosters (Updated Daily)

Tired of running out of boosters for Match Masters? Find new Match Masters free gifts, coins, and booster links right here! Updated Daily!

16 May 2025 3:20pm GMT

Solitaire Grand Harvest – Free Coins (Updated Daily)

Get Solitaire Grand Harvest free coins now, new links added daily. Only tested and working links, complete with a guide on how to redeem the links.

16 May 2025 3:17pm GMT

Dice Dreams Free Rolls – Updated Daily

Get the latest Dice Dreams free rolls links, updated daily! Complete with a guide on how to redeem the links.

16 May 2025 3:16pm GMT

Monopoly Go Events Schedule Today – Updated Daily

Current active events are Main Event - Hyper Tour, Tournament - Bullseye Bolt, Special Event - Juggle Jam, and Partner Event - Star Wars Partners.

16 May 2025 3:13pm GMT

The Best Consumer Headphones Ever? Sony Unveils the WH-1000XM6

Sony's new top-of-the-line headphones for consumers have been well worth the wait.

16 May 2025 2:01pm GMT

What is NFC and why your phone probably already uses it

The invisible technology powering your daily life is right in your pocket. Near Field Communication (NFC) has revolutionized…

16 May 2025 11:20am GMT

13 May 2025

feedAndroid Developers Blog

The Android Show: I/O Edition - what Android devs need to know!

Posted by Matthew McCullough - Vice President, Product Management, Android Developer


We just dropped an I/O Edition of The Android Show, where we unpacked exciting new experiences coming to the Android ecosystem: a fresh and dynamic look and feel, smarts across your devices, and enhanced safety and security features. Join Sameer Samat, President of Android Ecosystem, and the Android team to learn about these exciting new developments in the episode below, and read about all of the updates for users.

Tune into Google I/O next week - including the Developer Keynote as well as the full Android track of sessions - where we're covering these topics in more detail and how you can get started.


Start building with Material 3 Expressive

The world of UX design is constantly evolving, and you deserve the tools to create truly engaging and impactful experiences. That's why Material Design's latest evolution, Material 3 Expressive, provides new ways to make your product more engaging, easy to use, and desirable. Learn more, and try out the new Material 3 Expressive: an expansion pack that harnesses emotional UX to make your app more intuitive and appealing to users. It comes with new components, a motion-physics system, type styles, colors, shapes, and more.

Material 3 Expressive will be coming to Android 16 later this year; check out the Google I/O talk next week where we'll dive into this in more detail.

A fluid design built for your watch's round display

Wear OS 6, arriving later this year, brings Material 3 Expressive design to Google's smartwatch platform. The new design language puts the round watch display at the heart of the experience and is embraced in every component and motion of the system, from buttons to notifications. You'll be able to try the new visual design and upgrade existing app experiences to a new level. Next week, tune in to the What's New in Android session to learn more.

Plus some goodies in Android 16...

We also unpacked some of the latest features coming to users in Android 16, which we've been previewing with you for the last few months. If you haven't already, you can try out the latest Beta of Android 16.

Android 16 adds several features that developers should pay attention to, including Live Updates, professional media and camera features, desktop windowing for tablets, major accessibility enhancements, and much more:

  • Live Updates allow your app to show time-sensitive progress updates. Use the new ProgressStyle template for an improved experience around navigation, deliveries, and rideshares.

Watch the What's New in Android session and the Live updates talk to learn more.

Tune in next week to Google I/O

This was just a preview of some Android-related news, so remember to tune in next week to Google I/O, where we'll be diving into a range of Android developer topics in a lot more detail. You can check out What's New in Android and the full Android track of sessions to start planning your time.

We can't wait to see you next week, whether you're joining in person or virtually from anywhere around the world!

13 May 2025 5:00pm GMT

#WeArePlay: How My Lovely Planet is making environmental preservation fun through games

Posted by Robbie McLachlan - Developer Marketing


In our latest #WeArePlay film, which celebrates the people behind apps and games on Google Play, we meet Clément, the founder of Imagine Games. His game, My Lovely Planet, turns casual mobile gaming into tangible environmental action, planting real trees and supporting reforestation projects worldwide. Discover the inspiration behind My Lovely Planet and the impact it's had so far.


What inspired you to combine gaming with positive environmental impact?

I've always loved gaming and believed in technology's potential to tackle environmental challenges. But it was my time working with an NGO in Madagascar, where I witnessed firsthand the devastating effects of environmental changes that truly sparked my mission. Combining gaming and sustainability just made sense. Billions of people play games, so why not harness that entertainment to create real-world impact? So far, the results speak for themselves: we've built an engaged global community committed to protecting the environment.

Imagine Games team, Clément, from France


How do players in My Lovely Planet make real-world differences through the game?

With My Lovely Planet, planting a tree in the game means planting a real tree in the world. Our community has already planted over 360,000 trees through partnerships with NGOs like Graines de Vie in Madagascar, Kenya, and France. We've also supported ocean-cleaning, bee-protection, and drone reforestation projects.

Balancing fun with impact was key. Players wouldn't stay just for the mission, so we focused on creating a genuinely fun match-3 style game. Once gameplay was strong, we made real-world actions like tree planting core rewards in the game, helping players feel naturally connected to their impact. Our goal is to keep growing this model to protect biodiversity and fight climate change.

Can you tell us about your drone-led reforestation project in France?

Our latest initiative involves using drones to reforest areas severely impacted by insect infestations and other environmental issues. We're dropping over one million specially-coated seeds by drone, which is a completely new and efficient way of reforesting large areas. It's exciting because if this pilot succeeds, it could be replicated worldwide, significantly boosting global reforestation efforts.

a drone in mid air dropping seeds in a forested area


How has Google Play helped your journey?

Google Play has been crucial for My Lovely Planet - it's our main distribution channel, with about 70% of our players coming through the platform. It makes it incredibly easy and convenient for anyone to download and start playing immediately, which is essential for engaging a global community. Plus, from a developer's standpoint, the flexibility, responsiveness, and powerful testing tools Google Play provides have made launching and scaling our game faster and smoother, allowing us to focus even more on our environmental impact.

a close up of a user playing the My Lovely Planet game on their mobile device while sitting in the front seat of a vehicle


What is next for My Lovely Planet?

Right now, we're focused on expanding the game experience by adding more engaging levels, and introducing exciting new features like integrating our eco-friendly cryptocurrency, My Lovely Coin, into gameplay. Following the success of our first drone-led reforestation project in France, our next step is tracking its impact and expanding this approach to other regions. Ultimately, we aim to build the world's largest gaming community dedicated to protecting the environment, empowering millions to make a difference while enjoying the game.


Discover other inspiring app and game founders featured in #WeArePlay.

13 May 2025 1:00pm GMT

08 May 2025

feedAndroid Developers Blog

Prepare your apps for Google Play’s 16 KB page size compatibility requirement

Posted by Dan Brown - Product Manager, Google Play

Google Play empowers you to manage and distribute your innovative and trusted apps and games to billions of users around the world across the entire breadth of Android devices, and historically, all Android devices have managed memory in 4 KB pages.

As device manufacturers equip devices with more RAM to optimize performance, many will adopt larger page sizes like 16 KB. Android 15 introduces support for the increased page size, ensuring your app can run on these evolving devices and benefit from the associated performance gains.

Starting November 1st, 2025, all new apps and updates to existing apps submitted to Google Play and targeting Android 15+ devices must support 16 KB page sizes.

This is a key technical requirement to ensure your users can benefit from the performance enhancements on newer devices and prepares your apps for the platform's future direction of improved performance on newer hardware. Without recompiling to support 16 KB pages, your app might not function correctly on these devices when they become more widely available in future Android releases.

We've seen that 16 KB can help with:

  • Faster app launches: See improvements ranging from 3% to 30% for various apps.
  • Improved battery usage: Experience an average gain of 4.5%.
  • Quicker camera starts: Launch the camera 4.5% to 6.6% faster.
  • Speedier system boot-ups: Boot Android devices approximately 8% faster.

We recommend checking your apps early, especially for dependencies that might not yet be 16 KB compatible. Many popular SDK providers, like React Native and Flutter, already offer compatible versions. For game developers, several leading game engines, such as Unity, support 16 KB, with support for Unreal Engine coming soon.

Reaching 16 KB compatibility

A substantial number of apps are already compatible, so your app may already work seamlessly with this requirement. For most of those that need to make adjustments, we expect the changes to be minimal.

  • Apps with no native code should be compatible without any changes at all.
  • Apps using libraries or SDKs that contain native code may need to update these to a compatible version.
  • Apps with native code may need to recompile with a more recent toolchain and check for any code with incompatible low-level memory management.

Our December blog post, Get your apps ready for 16 KB page size devices, provides a more detailed technical explanation and guidance on how to prepare your apps.

Check your app's compatibility now

It's easy to see if your app bundle already supports 16 KB memory page sizes. Visit the app bundle explorer page in Play Console to check your app's build compliance and get guidance on where your app may need updating.

App bundle explorer in Play Console


Beyond the app bundle explorer, make sure to also test your app in a 16 KB environment. This will help you ensure users don't experience any issues and that your app delivers its best performance.
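If you want to confirm whether a given device or emulator is actually running with 16 KB pages, one simple option is to log the page size at runtime. Below is a minimal sketch; logPageSize is a hypothetical helper name, and Os.sysconf with OsConstants._SC_PAGESIZE is the platform call it relies on.

import android.system.Os
import android.system.OsConstants
import android.util.Log

// Hypothetical helper: logs whether the current device or emulator uses 4 KB or 16 KB pages.
fun logPageSize() {
    val pageSizeBytes = Os.sysconf(OsConstants._SC_PAGESIZE)
    Log.i("PageSizeCheck", "Memory page size: ${pageSizeBytes / 1024} KB")
}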

For more information, check out the full documentation.

Thank you for your continued support in bringing delightful, fast, and high-performance experiences to users across the breadth of devices Play supports. We look forward to seeing the enhanced experiences you'll deliver with 16 KB support.

08 May 2025 4:00pm GMT

07 May 2025

feedAndroid Developers Blog

Building delightful Android camera and media experiences

Posted by Donovan McMurray, Mayuri Khinvasara Khabya, Mozart Louis, and Nevin Mital - Developer Relations Engineers

Hello Android Developers!

We are the Android Developer Relations Camera & Media team, and we're excited to bring you something a little different today. Over the past several months, we've been hard at work writing sample code and building demos that showcase how to take advantage of all the great potential Android offers for building delightful user experiences.

Some of these efforts are available for you to explore now, and some you'll see later throughout the year, but for this blog post we thought we'd share some of the learnings we gathered while going through this exercise.

Grab your favorite Android plush or rubber duck, and read on to see what we've been up to!

Future-proof your app with Jetpack

Nevin Mital

One of our focuses for the past several years has been improving the developer tools available for video editing on Android. This led to the creation of the Jetpack Media3 Transformer APIs, which offer solutions for both single-asset and multi-asset video editing preview and export. Today, I'd like to focus on the Composition demo app, a sample app that showcases some of the multi-asset editing experiences that Transformer enables.

I started by adding a custom video compositor to demonstrate how you can arrange input video sequences into different layouts for your final composition, such as a 2x2 grid or a picture-in-picture overlay. You can customize this by implementing a VideoCompositorSettings and overriding the getOverlaySettings method. This object can then be set when building your Composition with setVideoCompositorSettings.

Here is an example for the 2x2 grid layout:

object : VideoCompositorSettings {
  ...

  override fun getOverlaySettings(inputId: Int, presentationTimeUs: Long): OverlaySettings {
    return when (inputId) {
      0 -> { // First sequence is placed in the top left
        StaticOverlaySettings.Builder()
          .setScale(0.5f, 0.5f)
          .setOverlayFrameAnchor(0f, 0f) // Middle of overlay
          .setBackgroundFrameAnchor(-0.5f, 0.5f) // Top-left section of background
          .build()
      }

      1 -> { // Second sequence is placed in the top right
        StaticOverlaySettings.Builder()
          .setScale(0.5f, 0.5f)
          .setOverlayFrameAnchor(0f, 0f) // Middle of overlay
          .setBackgroundFrameAnchor(0.5f, 0.5f) // Top-right section of background
          .build()
      }

      2 -> { // Third sequence is placed in the bottom left
        StaticOverlaySettings.Builder()
          .setScale(0.5f, 0.5f)
          .setOverlayFrameAnchor(0f, 0f) // Middle of overlay
          .setBackgroundFrameAnchor(-0.5f, -0.5f) // Bottom-left section of background
          .build()
      }

      3 -> { // Fourth sequence is placed in the bottom right
        StaticOverlaySettings.Builder()
          .setScale(0.5f, 0.5f)
          .setOverlayFrameAnchor(0f, 0f) // Middle of overlay
          .setBackgroundFrameAnchor(0.5f, -0.5f) // Bottom-right section of background
          .build()
      }

      else -> {
        StaticOverlaySettings.Builder().build()
      }
    }
  }
}

Since getOverlaySettings also provides a presentation time, we can even animate the layout, such as in this picture-in-picture example:

moving image of picture in picture on a mobile device
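To make that concrete, here is a rough sketch of how a time-based picture-in-picture layout could be expressed with the same VideoCompositorSettings pattern used for the grid above; the 0.3 scale and the one-second slide are illustrative values, not the demo app's exact parameters.

object : VideoCompositorSettings {
  ...

  override fun getOverlaySettings(inputId: Int, presentationTimeUs: Long): OverlaySettings {
    return if (inputId == 0) {
      // Background sequence fills the frame with default settings.
      StaticOverlaySettings.Builder().build()
    } else {
      // Overlay sequence: slide the small picture-in-picture window from the
      // top-right corner toward the center over the first second of playback.
      val progress = (presentationTimeUs / 1_000_000f).coerceIn(0f, 1f)
      StaticOverlaySettings.Builder()
        .setScale(0.3f, 0.3f)
        .setOverlayFrameAnchor(0f, 0f) // Middle of overlay
        .setBackgroundFrameAnchor(0.7f - 0.7f * progress, 0.7f - 0.7f * progress)
        .build()
    }
  }
}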


Next, I spent some time migrating the Composition demo app to use Jetpack Compose. With complicated editing flows, it can help to take advantage of as much screen space as is available, so I decided to use the supporting pane adaptive layout. This way, the user can fine-tune their video creation on the preview screen, and export options are shown alongside it only when a larger display makes room for them. Below, you can see how the UI dynamically adapts to the screen size on a foldable device, when switching from the outer screen to the inner screen and vice versa.

moving image of supporting pane adaptive layout


What's great is that by using Jetpack Media3 and Jetpack Compose, these features also carry over seamlessly to other devices and form factors, such as the new Android XR platform. Right out-of-the-box, I was able to run the demo app in Home Space with the 2D UI I already had. And with some small updates, I was even able to adapt the UI specifically for XR with features such as multiple panels, and to take further advantage of the extra space, an Orbiter with playback controls for the editing preview.

moving image of sequential composition preview in Android XR


Orbiter(
  position = OrbiterEdge.Bottom,
  offset = EdgeOffset.inner(offset = MaterialTheme.spacing.standard),
  alignment = Alignment.CenterHorizontally,
  shape = SpatialRoundedCornerShape(CornerSize(28.dp))
) {
  Row (horizontalArrangement = Arrangement.spacedBy(MaterialTheme.spacing.mini)) {
    // Playback control for rewinding by 10 seconds
    FilledTonalIconButton({ viewModel.seekBack(10_000L) }) {
      Icon(
        painter = painterResource(id = R.drawable.rewind_10),
        contentDescription = "Rewind by 10 seconds"
      )
    }
    // Playback control for play/pause
    FilledTonalIconButton({ viewModel.togglePlay() }) {
      Icon(
        painter = painterResource(id = R.drawable.rounded_play_pause_24),
        contentDescription = 
            if(viewModel.compositionPlayer.isPlaying) {
                "Pause preview playback"
            } else {
                "Resume preview playback"
            }
      )
    }
    // Playback control for forwarding by 10 seconds
    FilledTonalIconButton({ viewModel.seekForward(10_000L) }) {
      Icon(
        painter = painterResource(id = R.drawable.forward_10),
        contentDescription = "Forward by 10 seconds"
      )
    }
  }
}

Jetpack libraries unlock premium functionality incrementally

Donovan McMurray

Not only do our Jetpack libraries have you covered by working consistently across existing and future devices, but they also open the doors to advanced functionality and custom behaviors to support all types of app experiences. In a nutshell, our Jetpack libraries aim to make the common case very accessible and easy, and they provide hooks for adding more custom features later.

We've worked with many app teams who have switched to a Jetpack library, built the basics, added their critical custom features, and actually saved developer time over their estimates. Let's take a look at CameraX and how this incremental development can supercharge your process.

// Set up CameraX app with preview and image capture.
// Note: setting the resolution selector is optional, and if not set,
// then a default 4:3 ratio will be used.
val aspectRatioStrategy = AspectRatioStrategy(
  AspectRatio.RATIO_16_9, AspectRatioStrategy.FALLBACK_RULE_NONE)
val resolutionSelector = ResolutionSelector.Builder()
  .setAspectRatioStrategy(aspectRatioStrategy)
  .build()

private val previewUseCase = Preview.Builder()
  .setResolutionSelector(resolutionSelector)
  .build()
private val imageCaptureUseCase = ImageCapture.Builder()
  .setResolutionSelector(resolutionSelector)
  .setCaptureMode(ImageCapture.CAPTURE_MODE_MINIMIZE_LATENCY)
  .build()

val useCaseGroupBuilder = UseCaseGroup.Builder()
  .addUseCase(previewUseCase)
  .addUseCase(imageCaptureUseCase)

cameraProvider.unbindAll()

camera = cameraProvider.bindToLifecycle(
  this,  // lifecycleOwner
  CameraSelector.DEFAULT_BACK_CAMERA,
  useCaseGroupBuilder.build(),
)

After setting up the basic structure for CameraX, you can set up a simple UI with a camera preview and a shutter button. You can use the CameraX Viewfinder composable which displays a Preview stream from a CameraX SurfaceRequest.

// Create preview
Box(
  Modifier
    .background(Color.Black)
    .fillMaxSize(),
  contentAlignment = Alignment.Center,
) {
  surfaceRequest?.let { request ->
    CameraXViewfinder(
      modifier = Modifier.fillMaxSize(),
      implementationMode = ImplementationMode.EXTERNAL,
      surfaceRequest = request,
    )
  }
  Button(
    onClick = onPhotoCapture,
    shape = CircleShape,
    colors = ButtonDefaults.buttonColors(containerColor = Color.White),
    modifier = Modifier
      .height(75.dp)
      .width(75.dp),
  )
}

fun onPhotoCapture() {
  // Not shown: defining the ImageCapture.OutputFileOptions for
  // your saved images
  imageCaptureUseCase.takePicture(
    outputOptions,
    ContextCompat.getMainExecutor(context),
    object : ImageCapture.OnImageSavedCallback {
      override fun onError(exc: ImageCaptureException) {
        val msg = "Photo capture failed."
        Toast.makeText(context, msg, Toast.LENGTH_SHORT).show()
      }

      override fun onImageSaved(output: ImageCapture.OutputFileResults) {
        val savedUri = output.savedUri
        if (savedUri != null) {
          // Do something with the savedUri if needed
        } else {
          val msg = "Photo capture failed."
          Toast.makeText(context, msg, Toast.LENGTH_SHORT).show()
        }
      }
    },
  )
}

You're already on track for a solid camera experience, but what if you wanted to add some extra features for your users? Adding filters and effects is easy with CameraX's Media3 effect integration, which is one of the new features introduced in CameraX 1.4.0.

Here's how simple it is to add a black and white filter from Media3's built-in effects.

val media3Effect = Media3Effect(
  application,
  PREVIEW or IMAGE_CAPTURE,
  ContextCompat.getMainExecutor(application),
  {},
)
media3Effect.setEffects(listOf(RgbFilter.createGrayscaleFilter()))
useCaseGroupBuilder.addEffect(media3Effect)

The Media3Effect object takes a Context, a bitwise representation of the use case constants for targeted UseCases, an Executor, and an error listener. Then you set the list of effects you want to apply. Finally, you add the effect to the useCaseGroupBuilder we defined earlier.

moving images of our camera app before and after applying the grayscale filter
(Left) Our camera app with no filter applied.
(Right) Our camera app after the createGrayscaleFilter was added.


There are many other built-in effects you can add, too! See the Media3 Effect documentation for more options, like brightness, color lookup tables (LUTs), contrast, blur, and many other effects.
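As a quick illustration, a couple of those built-in effects could be chained with the grayscale filter from above; Brightness and Contrast here refer to the corresponding classes in androidx.media3.effect, and the specific values are arbitrary.

media3Effect.setEffects(
  listOf(
    Brightness(0.1f),                  // Slightly brighten each frame
    Contrast(0.3f),                    // Then boost contrast
    RgbFilter.createGrayscaleFilter(), // Finally convert to black and white
  )
)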

To take your effects to yet another level, it's also possible to define your own effects by implementing the GlEffect interface, which acts as a factory of GlShaderPrograms. You can implement a BaseGlShaderProgram's drawFrame() method to implement a custom effect of your own. A minimal implementation should tell your graphics library to use its shader program, bind the shader program's vertex attributes and uniforms, and issue a drawing command.

Jetpack libraries meet you where you are and your app's needs. Whether that be a simple, fast-to-implement, and reliable implementation, or custom functionality that helps the critical user journeys in your app stand out from the rest, Jetpack has you covered!

Jetpack offers a foundation for innovative AI Features

Mayuri Khinvasara Khabya

Just as Donovan demonstrated with CameraX for capture, Jetpack Media3 provides a reliable, customizable, and feature-rich solution for playback with ExoPlayer. The AI Samples app builds on this foundation to delight users with helpful and enriching AI-driven additions.

In today's rapidly evolving digital landscape, users expect more from their media applications. Simply playing videos is no longer enough. Developers are constantly seeking ways to enhance user experiences and provide deeper engagement. Leveraging the power of Artificial Intelligence (AI), particularly when built upon robust media frameworks like Media3, offers exciting opportunities. Let's take a look at some of the ways we can transform the way users interact with video content:

  • Empowering Video Understanding: The core idea is to use AI, specifically multimodal models like the Gemini Flash and Pro models, to analyze video content and extract meaningful information. This goes beyond simply playing a video; it's about understanding what's in the video and making that information readily accessible to the user.
  • Actionable Insights: The goal is to transform raw video into summaries, insights, and interactive experiences. This allows users to quickly grasp the content of a video and find specific information they need or learn something new!
  • Accessibility and Engagement: AI helps make videos more accessible by providing features like summaries, translations, and descriptions. It also aims to increase user engagement through interactive features.

A Glimpse into AI-Powered Video Journeys

The following example demonstrates potential video journeys enhanced by artificial intelligence. This sample integrates several components, such as ExoPlayer and Transformer from Media3; the Firebase SDK (leveraging Vertex AI on Android); and Jetpack Compose, ViewModel, and StateFlow. The code will be available soon on GitHub.

moving images of examples of AI-powered video journeys
(Left) Video summarization
(Right) Thumbnail timestamps and HDR frame extraction


There are two experiences in particular that I'd like to highlight:

  • HDR Thumbnails: AI can help identify key moments in the video that could make for good thumbnails. With those timestamps, you can use the new ExperimentalFrameExtractor API from Media3 to extract HDR thumbnails from videos, providing richer visual previews.
  • Text-to-Speech: AI can be used to convert textual information derived from the video into spoken audio, enhancing accessibility. On Android you can also choose to play audio in different languages and dialects, enhancing personalization for a wider audience (see the sketch below).
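As a rough illustration of the text-to-speech idea, the sketch below reads an AI-generated summary aloud with the platform TextToSpeech API. The SummarySpeaker class and the choice of locale are our own assumptions, not the sample app's implementation.

import android.content.Context
import android.speech.tts.TextToSpeech
import java.util.Locale

// Hypothetical helper: speak an AI-generated video summary aloud.
class SummarySpeaker(context: Context) {
    private var ready = false
    private val tts = TextToSpeech(context) { status ->
        ready = status == TextToSpeech.SUCCESS
    }

    fun speak(summaryText: String, locale: Locale = Locale.US) {
        if (!ready) return
        tts.setLanguage(locale)
        tts.speak(summaryText, TextToSpeech.QUEUE_FLUSH, null, "video_summary")
    }

    fun release() = tts.shutdown()
}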

Using the right AI solution

Currently, only cloud models support video inputs, so we went ahead with a cloud-based solution. Integrating Firebase in our sample empowers the app to:

  • Generate real-time, concise video summaries automatically.
  • Produce comprehensive content metadata, including chapter markers and relevant hashtags.
  • Facilitate seamless multilingual content translation.

So how do you actually interact with a video and work with Gemini to process it? First, send your video as an input parameter to your prompt:

val promptData =
"Summarize this video in the form of top 3-4 takeaways only. Write in the form of bullet points. Don't assume if you don't know"

val generativeModel = Firebase.vertexAI.generativeModel("gemini-2.0-flash")
_outputText.value = OutputTextState.Loading

viewModelScope.launch(Dispatchers.IO) {
    try {
        val requestContent = content {
            fileData(videoSource.toString(), "video/mp4")
            text(promptData)
        }
        val outputStringBuilder = StringBuilder()

        generativeModel.generateContentStream(requestContent).collect { response ->
            outputStringBuilder.append(response.text)
            _outputText.value = OutputTextState.Success(outputStringBuilder.toString())
        }

        _outputText.value = OutputTextState.Success(outputStringBuilder.toString())

    } catch (error: Exception) {
        _outputText.value = error.localizedMessage?.let { OutputTextState.Error(it) }
    }
}

Notice there are two key components here:

  • FileData: This component integrates a video into the query.
  • Prompt: This specifies what assistance you want from the model in relation to the provided video.

Of course, you can fine-tune your prompt as per your requirements and get responses accordingly.

In conclusion, by harnessing the capabilities of Jetpack Media3 and integrating AI solutions like Gemini through Firebase, you can significantly elevate video experiences on Android. This combination enables advanced features like video summaries, enriched metadata, and seamless multilingual translations, ultimately enhancing accessibility and engagement for users. As these technologies continue to evolve, the potential for creating even more dynamic and intelligent video applications is vast.

Go above-and-beyond with specialized APIs

Mozart Louis

Android 16 introduces the new audio PCM Offload mode, which can reduce the power consumption of audio playback in your app, leading to longer playback time and increased user engagement. Eliminating battery anxiety greatly enhances the user experience.

Oboe is Android's premier audio API, which developers can use to create high-performance, low-latency audio apps. Android 16 and the Android NDK add a new feature called native PCM offload playback.

Offload playback helps save battery life when playing audio. It works by sending a large chunk of audio to a special part of the device's hardware (a DSP). This allows the CPU of the device to go into a low-power state while the DSP handles playing the sound. This works with uncompressed audio (like PCM) and compressed audio (like MP3 or AAC), where the DSP also takes care of decoding.

This can result in significant power saving while playing back audio and is perfect for applications that play audio in the background or while the screen is off (think audiobooks, podcasts, music etc).

We created the sample app PowerPlay to demonstrate how to implement these features using the latest NDK version, C++ and Jetpack Compose.

Here are the most important parts!

The first order of business is to ensure the device supports audio offload for the format attributes you need. In the example below, we check whether the device supports audio offload for a stereo, float PCM stream with a sample rate of 48,000 Hz.

val format = AudioFormat.Builder()
    .setEncoding(AudioFormat.ENCODING_PCM_FLOAT)
    .setSampleRate(48000)
    .setChannelMask(AudioFormat.CHANNEL_OUT_STEREO)
    .build()

val attributes = AudioAttributes.Builder()
    .setContentType(AudioAttributes.CONTENT_TYPE_MUSIC)
    .setUsage(AudioAttributes.USAGE_MEDIA)
    .build()

val isOffloadSupported =
    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.Q) {
        AudioManager.isOffloadedPlaybackSupported(format, attributes)
    } else {
        false
    }

if (isOffloadSupported) {
    // Pass the result down to native code (see Player::initializeAudio below).
    player.initializeAudio(isOffloadSupported)
}

Once we know the device supports audio offload, we can confidently set the Oboe audio stream's performance mode to the new performance mode option, PerformanceMode::POWER_SAVING_OFFLOADED.

void Player::initializeAudio(bool isOffloadSupported) {
    // Create an audio stream
    AudioStreamBuilder builder;
    builder.setChannelCount(mChannelCount);
    builder.setDataCallback(mDataCallback);
    builder.setFormat(AudioFormat::Float);
    builder.setSampleRate(48000);
    builder.setErrorCallback(mErrorCallback);
    builder.setPresentationCallback(mPresentationCallback);

    if (isOffloadSupported) {
        builder.setPerformanceMode(oboe::PerformanceMode::POWER_SAVING_OFFLOADED);
        builder.setFramesPerDataCallback(128); // set a low frame buffer amount
    } else {
        builder.setPerformanceMode(oboe::PerformanceMode::LowLatency);
    }

    builder.setSharingMode(SharingMode::Exclusive);
    builder.setSampleRateConversionQuality(SampleRateConversionQuality::Medium);
    Result result = builder.openStream(mAudioStream);
}

Now when audio is played back, it will be offloaded to the DSP, helping save power during playback.

There is more to this feature that will be covered in a future blog post, fully detailing all of the newly available APIs that will help you optimize your audio playback experience!

What's next

Of course, we were only able to share the tip of the iceberg with you here, so to dive deeper into the samples, check out the following links:

Hopefully these examples have inspired you to explore what new and fascinating experiences you can build on Android. Tune in to our session at Google I/O in a couple weeks to learn even more about use-cases supported by solutions like Jetpack CameraX and Jetpack Media3!

07 May 2025 9:00pm GMT

Zoho Achieves 6x Faster Logins with Passkey and Credential Manager Integration

Posted by Niharika Arora - Senior Developer Relations Engineer, Joseph Lewis - Staff Technical Writer, and Kumareshwaran Sreedharan - Product Manager, Zoho.

As an Android developer, you're constantly looking for ways to enhance security, improve user experience, and streamline development. Zoho, a comprehensive cloud-based software suite focused on security and seamless experiences, achieved significant improvements by adopting passkeys in their OneAuth Android app.

Since integrating passkeys in 2024, Zoho achieved login speeds up to 6x faster than previous methods and a 31% month-over-month (MoM) growth in passkey adoption.

This case study examines Zoho's adoption of passkeys and Android's Credential Manager API to address authentication difficulties. It details the technical implementation process and highlights the impactful results.

Overcoming authentication challenges

Zoho utilizes a combination of authentication methods to protect user accounts. This included Zoho OneAuth, their own multi-factor authentication (MFA) solution, which supported both password-based and passwordless authentication using push notifications, QR codes, and time-based one-time passwords (TOTP). Zoho also supported federated logins, allowing authentication through Security Assertion Markup Language (SAML) and other third-party identity providers.

Challenges

Zoho, like many organizations, aimed to improve authentication security and user experience while reducing operational burdens. The primary challenges that led to the adoption of passkeys included:

  • Security vulnerabilities: Traditional password-based methods left users susceptible to phishing attacks and password breaches.
  • User friction: Password fatigue led to forgotten passwords, frustration, and increased reliance on cumbersome recovery processes.
  • Operational inefficiencies: Handling password resets and MFA issues generated significant support overhead.
  • Scalability concerns: A growing user base demanded a more secure and efficient authentication solution.

Why the shift to passkeys?

Passkeys were implemented in Zoho's apps to address authentication challenges by offering a passwordless approach that significantly improves security and user experience. This solution leverages phishing-resistant authentication, cloud-synchronized credentials for effortless cross-device access, and biometrics (such as a fingerprint or facial recognition), PIN, or pattern for secure logins, thereby reducing the vulnerabilities and inconveniences associated with traditional passwords.

By adopting passkeys with Credential Manager, Zoho cut login times by up to 6x, slashed password-related support costs, and saw strong user adoption - doubling passkey sign-ins in 4 months with 31% MoM growth. Zoho users now enjoy faster, easier logins and phishing-resistant security.

Quote card reads 'Cloud Lion now enjoys logins that are 30% faster and more secure using passkeys – allowing us to use our thumb instead of a password. With passkeys, we can also protect our critical business data against phishing and brute force attacks.' – Fabrice Venegas, Founder, Cloud Lion (a Zoho integration partner)

Implementation with Credential Manager on Android

So, how did Zoho achieve these results? They used Android's Credential Manager API, the recommended Jetpack library for implementing authentication on Android.

Credential Manager provides a unified API that simplifies handling of the various authentication methods. Instead of juggling different APIs for passwords, passkeys, and federated logins (like Sign in with Google), you use a single interface.

Implementing passkeys at Zoho required both client-side and server-side adjustments. Here's a detailed breakdown of the passkey creation, sign-in, and server-side implementation process.

Passkey creation

Passkey creation in OneAuth on a small screen mobile device


To create a passkey, the app first retrieves configuration details from Zoho's server, including a unique challenge used for verification. This data, formatted as a requestJson string, is used by the app to build a CreatePublicKeyCredentialRequest. The app then calls the credentialManager.createCredential method, which prompts the user to authenticate using their device screen lock (biometrics, fingerprint, PIN, etc.).

Upon successful user confirmation, the app receives the new passkey credential data, sends it back to Zoho's server for verification, and the server then stores the passkey information linked to the user's account. Failures or user cancellations during the process are caught and handled by the app.
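For illustration, a simplified client-side sketch of this flow using the Jetpack androidx.credentials APIs might look like the following; requestJson is assumed to be the WebAuthn creation options fetched from the server, and the function itself is our own example, not Zoho's production code.

import android.app.Activity
import androidx.credentials.CreatePublicKeyCredentialRequest
import androidx.credentials.CreatePublicKeyCredentialResponse
import androidx.credentials.CredentialManager
import androidx.credentials.exceptions.CreateCredentialCancellationException
import androidx.credentials.exceptions.CreateCredentialException

suspend fun registerPasskey(activity: Activity, requestJson: String): String? {
    val credentialManager = CredentialManager.create(activity)
    val request = CreatePublicKeyCredentialRequest(requestJson)
    return try {
        // Prompts the user to confirm with their device screen lock (biometrics or PIN).
        val result = credentialManager.createCredential(activity, request)
        val response = result as CreatePublicKeyCredentialResponse
        // Return the registration payload so the caller can send it to the server
        // for verification and storage against the user's account.
        response.registrationResponseJson
    } catch (e: CreateCredentialCancellationException) {
        null // The user dismissed the prompt; no passkey was created.
    } catch (e: CreateCredentialException) {
        null // Other failure; log or surface it as needed.
    }
}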

Sign-in

The Zoho Android app initiates the passkey sign-in process by requesting sign-in options, including a unique challenge, from Zoho's backend server. The app then uses this data to construct a GetCredentialRequest, indicating it will authenticate with a passkey. It then invokes the Android CredentialManager.getCredential() API with this request. This action triggers a standardized Android system interface, prompting the user to choose their Zoho account (if multiple passkeys exist) and authenticate using their device's configured screen lock (fingerprint, face scan, or PIN). After successful authentication, Credential Manager returns a signed assertion (proof of login) to the Zoho app. The app forwards this assertion to Zoho's server, which verifies the signature against the user's stored public key and validates the challenge, completing the secure sign-in process.
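A corresponding sketch of the client-side sign-in path, again using androidx.credentials; requestJson here stands for the WebAuthn request options (including the server-issued challenge), and the helper is illustrative rather than Zoho's actual implementation.

import android.app.Activity
import androidx.credentials.CredentialManager
import androidx.credentials.GetCredentialRequest
import androidx.credentials.GetPublicKeyCredentialOption
import androidx.credentials.PublicKeyCredential
import androidx.credentials.exceptions.GetCredentialException

suspend fun signInWithPasskey(activity: Activity, requestJson: String): String? {
    val credentialManager = CredentialManager.create(activity)
    val request = GetCredentialRequest(listOf(GetPublicKeyCredentialOption(requestJson)))
    return try {
        // Shows the system account/passkey picker and the screen-lock prompt.
        val result = credentialManager.getCredential(activity, request)
        val credential = result.credential as? PublicKeyCredential
        // Return the signed assertion so the caller can forward it to the server,
        // which verifies it against the stored public key and the issued challenge.
        credential?.authenticationResponseJson
    } catch (e: GetCredentialException) {
        null // Includes user cancellation and configuration errors.
    }
}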

Server-side implementation

Zoho's transition to supporting passkeys benefited from their backend systems already being FIDO WebAuthn compliant, which streamlined the server-side implementation process. However, specific modifications were still necessary to fully integrate passkey functionality.

The most significant challenge involved adapting the credential storage system. Zoho's existing authentication methods, which primarily used passwords and FIDO security keys for multi-factor authentication, required different storage approaches than passkeys, which are based on cryptographic public keys. To address this, Zoho implemented a new database schema specifically designed to securely store passkey public keys and related data according to WebAuthn protocols. This new system was built alongside a lookup mechanism to validate and retrieve credentials based on user and device information, ensuring backward compatibility with older authentication methods.

Another server-side adjustment involved implementing the ability to handle requests from Android devices. Passkey requests originating from Android apps use a unique origin format (android:apk-key-hash:example) that is distinct from standard web origins that use a URI-based format (https://example.com/app). The server logic needed to be updated to correctly parse this format, extract the SHA-256 fingerprint hash of the app's signing certificate, and validate it against a pre-registered list. This verification step ensures that authentication requests genuinely originate from Zoho's Android app and protects against phishing attacks.

This code snippet demonstrates how the server checks for the Android-specific origin format and validates the certificate hash:

val origin: String = clientData.getString("origin")

if (origin.startsWith("android:apk-key-hash:")) {
    // Origin format is "android:apk-key-hash:<base64-hash>", so the hash is the third segment.
    val originSplit: List<String> = origin.split(":")
    if (originSplit.size >= 3) {
        val androidOriginHashDecoded: ByteArray = Base64.getDecoder().decode(originSplit[2])

        if (!androidOriginHashDecoded.contentEquals(oneAuthSha256FingerPrint)) {
            throw IAMException(IAMErrorCode.WEBAUTH003)
        }
    } else {
        // Handle the case where the origin string is malformed.
    }
}

Error handling

Zoho implemented robust error handling mechanisms to manage both user-facing and developer-facing errors. A common error, CreateCredentialCancellationException, appeared when users manually canceled their passkey setup. Zoho tracked the frequency of this error to assess potential UX improvements. Based on Android's UX recommendations, Zoho took steps to better educate their users about passkeys, ensure users were aware of passkey availability, and promote passkey adoption during subsequent sign-in attempts.

This code example demonstrates Zoho's approach for how they handled their most common passkey creation errors:

private fun handleFailure(e: CreateCredentialException) {
    val msg = when (e) {
        is CreateCredentialCancellationException -> {
            Analytics.addAnalyticsEvent(eventProtocol: "PASSKEY_SETUP_CANCELLED", GROUP_NAME)
            Analytics.addNonFatalException(e)
            "The operation was canceled by the user."
        }
        is CreateCredentialInterruptedException -> {
            Analytics.addAnalyticsEvent(eventProtocol: "PASSKEY_SETUP_INTERRUPTED", GROUP_NAME)
            Analytics.addNonFatalException(e)
            "Passkey setup was interrupted. Please try again."
        }
        is CreateCredentialProviderConfigurationException -> {
            Analytics.addAnalyticsEvent(eventProtocol: "PASSKEY_PROVIDER_MISCONFIGURED", GROUP_NAME)
            Analytics.addNonFatalException(e)
            "Credential provider misconfigured. Contact support."
        }
        is CreateCredentialUnknownException -> {
            Analytics.addAnalyticsEvent(eventProtocol: "PASSKEY_SETUP_UNKNOWN_ERROR", GROUP_NAME)
            Analytics.addNonFatalException(e)
            "An unknown error occurred during Passkey setup."
        }
        is CreatePublicKeyCredentialDomException -> {
            Analytics.addAnalyticsEvent(eventProtocol: "PASSKEY_WEB_AUTHN_ERROR", GROUP_NAME)
            Analytics.addNonFatalException(e)
            "Passkey creation failed: ${e.domError}"
        }
        else -> {
            Analytics.addAnalyticsEvent(eventProtocol: "PASSKEY_SETUP_FAILED", GROUP_NAME)
            Analytics.addNonFatalException(e)
            "An unexpected error occurred. Please try again."
        }
    }
}

Testing passkeys in intranet environments

Zoho faced an initial challenge in testing passkeys within a closed intranet environment. The Google Password Manager verification process for passkeys requires public domain access to validate the relying party (RP) domain. However, Zoho's internal testing environment lacked this public Internet access, causing the verification process to fail and hindering successful passkey authentication testing. To overcome this, Zoho created a publicly accessible test environment, which included hosting a temporary server with an asset link file and domain validation.

This example from the assetlinks.json file used in Zoho's public test environment demonstrates how to associate the relying party domain with the specified Android app for passkey validation.

[
    {
        "relation": [
            "delegate_permission/common.handle_all_urls",
            "delegate_permission/common.get_login_creds"
        ],
        "target": {
            "namespace": "android_app",
            "package_name": "com.zoho.accounts.oneauth",
            "sha256_cert_fingerprints": [
                "SHA_HEX_VALUE" 
            ]
        }
    }
]

Integrate with an existing FIDO server

Android's passkey system utilizes the modern FIDO2 WebAuthn standard. This standard requires requests in a specific JSON format, which helps maintain consistency between native applications and web platforms. To enable Android passkey support, Zoho made minor compatibility and structural changes to correctly generate and process requests that adhere to the required FIDO2 JSON structure.

This server update involved several specific technical adjustments:

1. Encoding conversion: The server converts the Base64 URL encoding (commonly used in WebAuthn for fields like credential IDs) to standard Base64 encoding before it stores the relevant data. The snippet below shows how a rawId might be encoded to standard Base64:

// Convert rawId bytes to a standard Base64 encoded string for storage
val base64RawId: String = Base64.getEncoder().encodeToString(rawId.toByteArray())

2. Transport list format: To ensure consistent data processing, the server logic handles lists of transport mechanisms (such as USB, NFC, and Bluetooth, which specify how the authenticator communicated) as JSON arrays.
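As a small illustrative sketch (field names are our own, using org.json), the transports could be persisted alongside the credential ID from the previous snippet like so:

import org.json.JSONArray
import org.json.JSONObject

// Sketch: persist the credential ID (see the previous snippet) together with the
// authenticator's transports, serialized as a JSON array rather than a delimited string.
fun buildCredentialRecord(base64RawId: String, transports: List<String>): JSONObject =
    JSONObject()
        .put("credentialId", base64RawId)
        .put("transports", JSONArray(transports))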

3. Client data alignment: The Zoho team adjusted how the server encodes and decodes the clientDataJson field. This ensures the data structure aligns precisely with the expectations of Zoho's existing internal APIs. The example below illustrates part of the conversion logic applied to client data before the server processes it:

private fun convertForServer(type: String): String {
    val clientDataBytes = BaseEncoding.base64().decode(type)
    val clientDataJson = JSONObject(String(clientDataBytes, StandardCharsets.UTF_8))
    val clientJson = JSONObject()
    val challengeFromJson = clientDataJson.getString("challenge")
    // 'challenge' is a technical identifier/token, not localizable text.
    clientJson.put("challenge", BaseEncoding.base64Url()
        .encode(challengeFromJson.toByteArray(StandardCharsets.UTF_8))) 

    clientJson.put("origin", clientDataJson.getString("origin"))
    clientJson.put("type", clientDataJson.getString("type"))
    clientJson.put("androidPackageName", clientDataJson.getString("androidPackageName"))
    return BaseEncoding.base64().encode(clientJson.toString().toByteArray())
}

User guidance and authentication preferences

A central part of Zoho's passkey strategy involved encouraging user adoption while providing flexibility to align with different organizational requirements. This was achieved through careful UI design and policy controls.

Zoho recognized that organizations have varying security needs. To accommodate this, Zoho implemented:

  • Admin enforcement: Through the Zoho Directory admin panel, administrators can designate passkeys as the mandatory, default authentication method for their entire organization. When this policy is enabled, employees are required to set up a passkey upon their next login and use it going forward.
  • User choice: If an organization does not enforce a specific policy, individual users maintain control. They can choose their preferred authentication method during login, selecting from passkeys or other configured options via their authentication settings.

To make adopting passkeys appealing and straightforward for end-users, Zoho implemented:

  • Easy setup: Zoho integrated passkey setup directly into the Zoho OneAuth mobile app (available for both Android and iOS). Users can conveniently configure their passkeys within the app at any time, smoothing the transition.
  • Consistent access: Passkey support was implemented across key user touchpoints, ensuring users can register and authenticate using passkeys via:
  • The Zoho OneAuth mobile app (Android & iOS);

This method ensured that the process of setting up and using passkeys was accessible and integrated into the platforms they already use, regardless of whether it was mandated by an admin or chosen by the user. You can learn more about how to create smooth user flows for passkey authentication by exploring our comprehensive passkeys user experience guide.

Impact on developer velocity and integration efficiency

Credential Manager, as a unified API, also helped improve developer productivity compared to older sign-in flows. It reduced the complexity of handling multiple authentication methods and APIs separately, leading to faster integration, from months to weeks, and fewer implementation errors. This collectively streamlined the sign-in process and improved overall reliability.

By implementing passkeys with Credential Manager, Zoho achieved significant, measurable improvements across the board:

  • Dramatic speed improvements
    • 2x faster login compared to traditional password authentication.
    • 4x faster login compared to username or mobile number with email or SMS OTP authentication.
    • 6x faster login compared to username, password, and SMS or authenticator OTP authentication.
  • Reduced support costs
    • Reduced password-related support requests, especially for forgotten passwords.
    • Lower costs associated with SMS-based 2FA, as existing users can onboard directly with passkeys.
  • Strong user adoption & enhanced security:
    • Passkey sign-ins doubled in just 4 months, showing high user acceptance.
    • Users migrating to passkeys are fully protected from common phishing and password breach threats.
    • With 31% MoM adoption growth, more users are benefiting daily from enhanced security against vulnerabilities like phishing and SIM swaps.

Recommendations and best practices

To successfully implement passkeys on Android, developers should consider the following best practices:

  • Leverage Android's Credential Manager API:
    • Credential Manager simplifies credential retrieval, reducing developer effort and ensuring a unified authentication experience.
    • Handles passwords, passkeys, and federated login flows in a single interface.
  • Ensure data encoding consistency while migrating from other FIDO authentication solutions:
    • Make sure you use consistent formatting for all inputs and outputs when migrating from other FIDO authentication solutions, such as FIDO security keys.
  • Optimize error handling and logging:
    • Implement robust error handling for a seamless user experience.
    • Provide localized error messages and use detailed logs to debug and resolve unexpected failures.
  • Educate users on passkey recovery options:
    • Prevent lockout scenarios by proactively guiding users on recovery options.
  • Monitor adoption metrics and user feedback:
    • Track user engagement, passkey adoption rates, and login success rates to keep optimizing user experience.
    • Conduct A/B testing on different authentication flows to improve conversion and retention.

Passkeys, combined with the Android Credential Manager API, offer a powerful, unified authentication solution that enhances security while simplifying user experience. Passkeys significantly reduce phishing risks, credential theft, and unauthorized access. We encourage developers to try out the experience in their app and bring the most secure authentication to their users.

Get started with passkeys and Credential Manager

Get hands on with passkeys and Credential Manager on Android using our public sample code.

If you have any questions or issues, you can share with us through the Android Credentials issues tracker.

07 May 2025 4:00pm GMT

06 May 2025

feedAndroid Developers Blog

Android Studio Meerkat Feature Drop is stable

Posted by Adarsh Fernando, Group Product Manager

Today, we're excited to announce the stable release of Android Studio Meerkat Feature Drop (2024.3.2)!

This release brings a host of new features and improvements designed to boost your productivity and enhance your development workflow. With numerous enhancements, this latest release helps you build high-quality Android apps faster and more efficiently: streamlined Jetpack Compose previews, new Gemini capabilities, better Kotlin Multiplatform (KMP) integration, improved device management, and more.

Read on to learn about the key updates in Android Studio Meerkat Feature Drop, and download the latest stable version today to explore them yourself!

Developer Productivity Enhancements

Analyze Crash Reports with Gemini in Android Studio

Debugging production crashes can require you to spend significant time switching contexts between your crash reporting tool, such as Firebase Crashlytics and Android Vitals, and investigating root causes in the IDE. Now, when viewing reports in App Quality Insights (AQI), click the Insights tab. Gemini provides a summary of the crash, generates insights, and links to useful documentation. If you also provide Gemini with access to local code context, it can provide more accurate results, relevant next steps, and code suggestions. This helps you reduce the time spent diagnosing and resolving issues.

moving image of Gemini in the App Quality Insights tool window in Android Studio
Gemini helps you investigate, understand, and resolve crashes in your app much more quickly in the App Quality Insights tool window.


Generate Unit Test Scenarios with Gemini

Writing effective unit tests is crucial but can be time-consuming. Gemini now helps kickstart this process by generating relevant test scenarios. Right-click on a class in your editor and select Gemini > Generate Unit Test Scenarios. Gemini analyzes the code and suggests test cases with descriptive names, outlining what to test. While you still implement the specific test logic, this significantly speeds up the initial setup and ensures better test coverage by suggesting scenarios you might have missed.

moving image of generating unit test scenarios in Android Studio
Gemini helps you generate unit test scenarios for your app.


Gemini Prompt Library

No more retyping your most frequently used prompts for Gemini! The new Prompt Library lets you save prompts directly within Android Studio (Settings > Gemini > Prompt Library). Whether it's a specific code generation pattern, a refactoring instruction, or a debugging query you use often, save it once from the chat (right-click > Save prompt) and re-apply it instantly from the editor (right-click > Gemini > Prompt Library). Prompts that you save can also be shared and standardized across your team.

moving image of prompt library in Android Studio
The prompt library saves your frequently used Gemini prompts to make them easier to use.


You have the option to store prompts on IDE level or Project level:

  • IDE level prompts are private and can be used across multiple projects.
  • Project level prompts can be shared across teams working on the same project (if the .idea folder is added to VCS).

Compose and UI Development

Themed Icon Support Preview

Ensure your app's branding looks great with Android's themed icons. Android Studio now lets you preview how your existing launcher icon adapts to the monochromatic theming algorithm directly within the IDE. This quick visual check helps you identify potential contrast issues or undesirable shapes early in the workflow, even before you provide a dedicated monochromatic drawable. This allows for faster iteration on your app's visual identity.

moving image of themed icon support in preview in Android Studio
Themed icon support in Preview helps you visually check how your existing launcher icon adapts to monochromatic theming.


Compose Preview Enhancements

Iterating on your Compose UI is now faster and better organized:

  • Enhanced Zoom: Navigate complex layouts more easily with smoother, more responsive zooming in your Compose previews.
  • Collapsible Groups: Tidy up your preview surface by collapsing groups of related composables under their @Preview annotation names, letting you focus on specific parts of the UI without clutter.
  • Grid Mode by Default: Grid mode is now the default for a clear overview. Gallery mode (for flipping through individual previews) is available via right-click, while List view has been removed to streamline the experience.
moving image of Compose previews in Android Studio
Compose previews render more smoothly and make it easier to hide previews you're not focused on.
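
If you want collapsed previews to be easy to scan, giving each preview an explicit name (and, optionally, a group) helps. A small sketch; the LoginScreen composable and the names used here are placeholders:

import android.content.res.Configuration
import androidx.compose.material3.Text
import androidx.compose.runtime.Composable
import androidx.compose.ui.tooling.preview.Preview

@Composable
fun LoginScreen() {
    Text("Login") // placeholder content
}

@Preview(name = "Login, light", group = "Login")
@Preview(name = "Login, dark", group = "Login", uiMode = Configuration.UI_MODE_NIGHT_YES)
@Composable
fun LoginScreenPreviews() {
    LoginScreen()
}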


Build and Deploy

KMP Shared Module Integration

Android Studio now streamlines adding shared logic to your Android app with the new Kotlin Multiplatform Shared Module template. This provides a dedicated starting point within your Android project, making it easier to structure and build shared business logic for both Android and iOS directly from Android Studio.
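
Once the shared module is in place, common code can declare expect functions that each platform implements with an actual counterpart. A minimal sketch, with illustrative names and file paths that are not part of the template itself:

// shared/src/commonMain/kotlin/Greeting.kt
expect fun platformName(): String

fun greeting(): String = "Hello from ${platformName()}!"

// shared/src/androidMain/kotlin/Greeting.android.kt
actual fun platformName(): String = "Android"

// shared/src/iosMain/kotlin/Greeting.ios.kt
actual fun platformName(): String = "iOS"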

Kotlin Multiplatform template in Android Studio
The new Kotlin Multiplatform module template makes it easier to add shared business logic to your existing app.


Updated UX for Adding Devices

Spend less time configuring test devices. The new UX for adding virtual and remote devices makes it much easier to configure the devices you want, directly from the Device Manager. To get started, click the '+' action at the top of the window and select one of these options:

  • Create Virtual Device: New filters, recommendations, and creation flow guide you towards creating AVDs that are best suited for your intended purpose and your machine's performance.
  • Add Remote Devices: With Android Device Streaming, powered by Firebase, you can connect and debug your app with a variety of real physical devices. With a new catalog view and filters, it's now easier to locate and start using the device you need in just a few clicks.
moving image of configuring virtual devices in Android Studio
It's now easier to configure virtual devices that are optimized for your workstation.


Google Play Deprecated SDK Warnings

Stay more informed about SDKs you publish with your app. Android Studio now displays warnings from the Google Play SDK Index when an SDK used in your app has been deprecated by its author. These warnings include information about suggested alternative SDKs, helping you proactively manage dependencies and avoid potential issues related to outdated or insecure libraries.

Google Play Deprecated SDK warnings in Android Studio
Play deprecated SDK warnings help you avoid potential issues related to outdated or insecure libraries.


Updated Build Menu and Actions

We've refined the Build menu for a more intuitive experience:

  • New 'Build run-configuration-name' Action: Builds the currently selected run configuration (e.g., :app or a specific test). This is now the default action for the toolbar button and Control/Command+F9.
  • Reordered Actions: The new build action is prioritized at the top, followed by Compile and Assemble actions.
  • Clearer Naming: "Rebuild Project" is now "Clean and Assemble Project with Tests". "Make Project" is renamed to "Assemble Project", and a new "Assemble Project with Tests" action is available.
Build menu in Android Studio
The Build menu includes behavior and naming changes to simplify and streamline the experience.


Standardized Config Directories

Switching between Stable, Beta, and Canary versions of Android Studio is now smoother. Configuration directories are standardized, removing the "Preview" suffix for non-stable builds. We've also added the micro version (e.g., AndroidStudio2024.3.2) to the path, allowing different feature drops to run side-by-side without conflicts. This simplifies managing your IDE settings, especially if you work with multiple Android Studio installations.

IntelliJ platform update

Android Studio Meerkat Feature Drop (2024.3.2) includes the IntelliJ 2024.3 platform release, which has many new features such as a feature-complete K2 mode, more reliable Java** and Kotlin code inspections, grammar checks during indexing, debugger improvements, speed and quality-of-life improvements to the Terminal, and more.

For more information, read the full IntelliJ 2024.3 release notes.

Summary

Android Studio Meerkat Feature Drop (2024.3.2) delivers these key features and enhancements:

  • Developer Productivity:
    • Analyze Crash Reports with Gemini
    • Generate Unit Test Scenarios with Gemini
    • Gemini Prompt Library
  • Compose and UI:
    • Themed Icon Preview
    • Compose Preview Enhancements (Zoom, Collapsible Groups, View Modes)
  • Build and Deploy:
    • KMP Shared Module Template
    • Updated UX for Adding Devices
    • Google Play SDK Insights: Deprecated SDK Warnings
    • Updated Build Menu & Actions
    • Standardized Config Directories
  • IntelliJ Platform Update
    • Feature complete K2 mode
    • Improved Kotlin and Java** inspection reliability
    • Debugger improvements
    • Speed and quality of life improvements in Terminal

Getting Started

Ready to elevate your Android development? Download Android Studio Meerkat Feature Drop and start using these powerful new features today!

As always, your feedback is crucial. Check known issues, report bugs, suggest improvements, and connect with the community on LinkedIn, Medium, YouTube, or X. Let's continue building amazing Android apps together!


**Java is a trademark or registered trademark of Oracle and/or its affiliates.

06 May 2025 5:15pm GMT

30 Apr 2025

feedAndroid Developers Blog

Announcing Android support of digital credentials

Posted by Rohey Livne - Group Product Manager

In today's interconnected world, managing digital identity is essential. Android aims to support open standards that ensure seamless interoperability with various identity providers and services. As part of this goal, we are excited to announce that Android, via Credential Manager's DigitalCredential API, now natively supports OpenID4VP and OpenID4VCI for digital credential presentation and issuance respectively.

What are digital credentials?

Digital credentials are cryptographically verifiable documents. The most common emerging use case for digital credentials is identity documents such as driver's licenses, passports, or national ID cards. In the coming years, it is anticipated that Android developers will develop innovative applications of this technology for a wider range of personal credentials that users will need to present digitally, including education certifications, insurance policies, memberships, permits, and more.

Digital credentials can be provided by any installed Android app. These apps are known as "credential holders" and are typically digital wallet apps, such as Google Wallet or Samsung Wallet.

Other apps not necessarily thought of as "wallets" may also have a use for exposing a digital credential. For example, an airline app might want to offer its air miles reward program membership as a digital credential that users can present to other apps or websites.

Digital credentials can be presented by the user to any other app or website on the same device, and Android also supports securely presenting Digital Credentials between devices using the same industry standard protocols used by passkeys (CTAP), by establishing encrypted communication tunnels.

Users can store multiple credentials across multiple apps on their device. By leveraging OpenID4VP requests from websites using the W3C Digital Credential API, or from native apps using the Android Credential Manager API, a user can select which credential to present from all the available credentials across all of their installed digital wallet apps.
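
For a rough idea of the native-app side, here is a minimal sketch using Jetpack Credential Manager's GetDigitalCredentialOption. The OpenID4VP requestJson payload comes from your verifier backend, and the response handling is only hinted at; treat the details as assumptions and refer to the Credential Manager documentation for the exact API surface.

import android.app.Activity
import androidx.credentials.CredentialManager
import androidx.credentials.GetCredentialRequest
import androidx.credentials.GetDigitalCredentialOption

// requestJson is an OpenID4VP request built by the verifier backend; its shape is defined
// by the OpenID4VP spec, not by this snippet.
suspend fun presentCredential(activity: Activity, requestJson: String) {
    val credentialManager = CredentialManager.create(activity)
    val request = GetCredentialRequest(listOf(GetDigitalCredentialOption(requestJson)))

    // Android shows the cross-wallet credential selector, then routes the request to the
    // wallet app holding the credential the user picked.
    val response = credentialManager.getCredential(activity, request)

    // response.credential carries the OpenID4VP response.
    val credential = response.credential
    // TODO: forward the credential response to the verifier backend for validation.
}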

How digital credentials work

Presentation

To present the credential, the verifier sends an OpenID4VP request to the Digital Credential API, which then prompts the user to select a credential from all the credentials that can satisfy the request. Note that the user is selecting a credential, not a digital wallet app:

Digital credentials selection interface on a mobile device
Digital credentials selection interface


Once the user chooses a credential to proceed with, the Android platform redirects the original OpenID4VP request to the digital wallet app that holds the chosen credential, which completes the presentation back to the verifier. When the digital wallet app receives the OpenID4VP request from Android, it can also perform any additional due-diligence steps it needs prior to releasing the credential to the verifier.

Issuance

Android also allows developers to issue their own Digital Credentials to a user's digital wallet app. This process can be done using an OpenID4VCI request, which prompts the user to choose the digital wallet app that they want to store the credential in. Alternatively, the issuance could be done directly from within the digital wallet app (some apps might not even have an explicit user-facing issuance step if they store credentials based on their association with a signed-in user account).

a single credential in a user's digital wallet app
A wallet app holds a single credential


Over time, the user can repeat this process to issue multiple credentials across multiple digital wallet apps:

multiple credentials in multiple digital wallets held by a single user
Multiple wallet apps hold multiple credentials


Note: To ensure that at presentation time Android can appropriately list all the credentials that digital wallet apps hold, digital wallets must register their credentials' metadata with Credential Manager. Credential Manager uses this metadata to match credentials across available digital wallet apps to the verifier's request, so that it can only present a list of valid credentials that can satisfy the request for the user to select from.

Early adopters

As Google Wallet announced yesterday, soon users will be able to use digital credentials to recover Amazon accounts, access online health services with CVS and MyChart by Epic, and verify profiles or identity on platforms like Uber and Bumble.

These use cases will take advantage of users' digital credentials stored in any digital wallet app users have on their Android device. To that end, we're also happy to share that both Samsung Wallet and 1Password will hold users' digital credentials as digital wallets and support OpenID standards via Android's Credential Manager API.

Learn more

Credential Manager API lets every Android app implement credential verification or provide credentials on the Android platform.

Check out our new digital credential documentation on how to become a credential verifier, taking advantage of users' existing digital credentials using Jetpack Credential Manager, or to become a digital wallet app holding your own credentials for other apps or websites to verify.

30 Apr 2025 4:00pm GMT

23 Apr 2025

feedAndroid Developers Blog

What’s new in the Jetpack Compose April ’25 release

Posted by Jolanda Verhoef - Developer Relations Engineer

Today, as part of the Compose April '25 Bill of Materials, we're releasing version 1.8 of Jetpack Compose, Android's modern, native UI toolkit, used by many developers. This release contains new features like autofill, various text improvements, visibility tracking, and new ways to animate a composable's size and location. It also stabilizes many experimental APIs and fixes a number of bugs.

To use today's release, upgrade your Compose BOM version to 2025.04.01:

implementation(platform("androidx.compose:compose-bom:2025.04.01"))

Note: If you are not using the Bill of Materials, make sure to upgrade Compose Foundation and Compose UI at the same time. Otherwise, autofill will not work correctly.

Autofill

Autofill is a service that simplifies data entry. It enables users to fill out forms, login screens, and checkout processes without manually typing in every detail. Now, you can integrate this functionality into your Compose applications.

Setting up Autofill in your Compose text fields is straightforward:

1. Set the contentType Semantics: Use Modifier.semantics and set the appropriate contentType for your text fields. For example:

TextField(
  state = rememberTextFieldState(),
  modifier = Modifier.semantics {
    contentType = ContentType.Username 
  }
)

2. Handle saving credentials (for new or updated information):

a. Implicitly through navigation: If a user navigates away from the page, commit will be called automatically - no code needed!

b. Explicitly through a button: To trigger saving credentials when the user submits a form (by tapping a button, for instance), retrieve the local AutofillManager and call commit().
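
As a minimal sketch of the explicit path, assuming the LocalAutofillManager composition local exposed by Compose UI 1.8 (check the release notes for the exact package):

import androidx.compose.material3.Button
import androidx.compose.material3.Text
import androidx.compose.runtime.Composable
import androidx.compose.ui.platform.LocalAutofillManager

@Composable
fun SignUpButton(onSubmit: () -> Unit) {
    // Assumption: the composition local may return null on platforms without autofill support.
    val autofillManager = LocalAutofillManager.current
    Button(onClick = {
        autofillManager?.commit() // ask the autofill service to save the entered credentials
        onSubmit()
    }) {
        Text("Sign up")
    }
}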

For full details on how to implement autofill in your application, see the Autofill in Compose documentation.

Text

When placing text inside a container, you can now use the autoSize parameter in BasicText to let the text size automatically adapt to the container size:

Box {
    BasicText(
        text = "Hello World",
        maxLines = 1,
        autoSize = TextAutoSize.StepBased()
    )
}
moving image of Hello World text inside a container


You can customize sizing by setting a minimum and/or maximum font size and define a step size. Compose Foundation 1.8 contains this new BasicText overload, with Material 1.4 to follow soon with an updated Text overload.

Furthermore, Compose 1.8 enhances text overflow handling with new TextOverflow.StartEllipsis or TextOverflow.MiddleEllipsis options, which allow you to display ellipses at the beginning or middle of a text line.

val text = "This is a long text that will overflow"
Column(Modifier.width(200.dp)) {
  Text(text, maxLines = 1, overflow = TextOverflow.Ellipsis)
  Text(text, maxLines = 1, overflow = TextOverflow.StartEllipsis)
  Text(text, maxLines = 1, overflow = TextOverflow.MiddleEllipsis)
}
text overflow handling displaying ellipses at the beginning and middle of a text line


And finally, we're expanding support for HTML formatting in AnnotatedString, with the addition of bulleted lists:

Text(
  AnnotatedString.fromHtml(
    """
    <h1>HTML content</h1>
    <ul>
      <li>Hello,</li>
      <li>World</li>
    </ul>
    """.trimIndent()
  )
)
a bulleted list of two items


Visibility tracking

Compose UI 1.8 introduces a new modifier: onLayoutRectChanged. This API covers many of the use cases served by the existing onGloballyPositioned modifier, but with much less overhead. The onLayoutRectChanged modifier can debounce and throttle its callback as the use case demands, which helps with performance when it's added to items in a LazyColumn or LazyRow.
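
As a rough sketch, the modifier can be attached to items in a LazyColumn to drive visibility-style logic. The package location and the availability of throttle/debounce defaults are assumptions here, so check the API reference for the exact signature:

import androidx.compose.foundation.lazy.LazyColumn
import androidx.compose.foundation.lazy.itemsIndexed
import androidx.compose.material3.Text
import androidx.compose.runtime.Composable
import androidx.compose.ui.Modifier
import androidx.compose.ui.layout.onLayoutRectChanged

@Composable
fun TrackedList(items: List<String>, onItemLaidOut: (Int) -> Unit) {
    LazyColumn {
        itemsIndexed(items) { index, item ->
            Text(
                text = item,
                modifier = Modifier.onLayoutRectChanged { _ ->
                    // Invoked when this item's layout rect changes; throttle and debounce
                    // parameters are available to limit how often this fires.
                    onItemLaidOut(index)
                }
            )
        }
    }
}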

This new API unlocks features that depend on a composable's visibility on screen. Compose 1.9 will add higher-level abstractions to this low-level API to simplify common use cases.

Animate composable bounds

Last year we introduced shared element transitions, which smoothly animate content in your apps. The 1.8 Animation module graduates LookaheadScope to stable, includes numerous performance and stability improvements, and includes a new modifier, animateBounds. When used inside a LookaheadScope, this modifier automatically animates its composable's size and position on screen, when those change:

Box(
  Modifier
    .width(if(expanded) 180.dp else 110.dp)
    .offset(x = if (expanded) 0.dp else 100.dp)
    .animateBounds(lookaheadScope = this@LookaheadScope)
    .background(Color.LightGray, shape = RoundedCornerShape(12.dp))
    .height(50.dp)
) {
  Text("Layout Content", Modifier.align(Alignment.Center))
}
a moving image depicting animate composable bounds


Increased API stability

Jetpack Compose has utilized @Experimental annotations to mark APIs that are liable to change across releases, for features that require more than a library's alpha period to stabilize. We have heard your feedback that a number of features have been marked as experimental for some time with no changes, contributing to a sense of instability. We are actively working on stabilizing existing experimental APIs: in the UI and Foundation modules, we have reduced the number of experimental APIs from 172 in the 1.7 release to 70 in the 1.8 release. We plan to continue this stabilization trend across modules in future releases.

Deprecation of contextual flow rows and columns

As part of the work to reduce experimental annotations, we identified APIs added in recent releases that are less than optimal solutions for their use cases. This has led to the decision to deprecate the experimental ContextualFlowRow and ContextualFlowColumn APIs, added in Foundation 1.7. If you need the deprecated functionality, our recommendation for now is to copy over the implementation and adapt it as needed, while we work on a plan for future components that can cover these functionalities better.

The related APIs FlowRow and FlowColumn are now stable; however, the new overflow parameter that was added in the last release is now deprecated.

Improvements and fixes for core features

In response to developer feedback, we have shipped some particularly in-demand features and bug fixes in our core libraries:

  • Make dialogs go edge to edge: When displayed full screen, dialogs now take into account the full size of the screen and will draw behind system bars.

Get started!

We're grateful for all of the bug reports and feature requests submitted to our issue tracker - they help us to improve Compose and build the APIs you need. Continue providing your feedback, and help us make Compose better.

Happy composing!

23 Apr 2025 9:00pm GMT

Get ready for Google I/O: Program lineup revealed

Posted by the Google I/O team

The Google I/O agenda is live. We're excited to share Google's biggest announcements across AI, Android, Web, and Cloud May 20-21. Tune in to learn how we're making development easier so you can build faster.

We'll kick things off with the Google Keynote at 10:00 AM PT on May 20th, followed by the Developer Keynote at 1:30 PM PT. This year, we're livestreaming two days of sessions directly from Mountain View, bringing more of the I/O experience to you, wherever you are.

Here's a sneak peek of what we'll cover:

  • AI advancements: Learn how Gemini models enable you to build new applications and unlock new levels of productivity. Explore the flexibility offered by options like our Gemma open models and on-device capabilities.
  • Build excellent apps, across devices with Android: Crafting exceptional app experiences across devices is now even easier with Android. Dive into sessions focused on building intelligent apps with Google AI and boosting your productivity, alongside creating adaptive user experiences and leveraging the power of Google Play.
  • Powerful web, made easier: Exciting new features continue to accelerate web development, helping you to build richer, more reliable web experiences. We'll share the latest innovations in web UI, Baseline progress, new multimodal built-in AI APIs using Gemini Nano, and how AI in DevTools streamline building innovative web experiences.

Plan your I/O

Join us online for livestreams May 20-21, followed by on-demand sessions and codelabs on May 22. Register today and explore the full program for sessions like these:

We're excited to share what's next and see what you build!

23 Apr 2025 4:30pm GMT

17 Apr 2025

feedAndroid Developers Blog

The Fourth Beta of Android 16

Posted by Matthew McCullough - VP of Product Management, Android Developer


Today we're bringing you Android 16 beta 4, the last scheduled update in our Android 16 beta program. Make sure your app or game is ready. It's also the last chance to give us feedback before Android 16 is released.

Android 16 Beta 4

This is our second platform stability release; the developer APIs and all app-facing behaviors are final. Apps targeting Android 16 can be made available in Google Play. Beta 4 includes our latest fixes and optimizations, giving you everything you need to complete your testing. Head over to our Android 16 summary page for a list of the features and behavior changes we've been covering in this series of blog posts, or read on for some of the top changes of which you should be aware.

Android 16 Release timeline showing Platform Stability milestone in April

Now available on more devices

The Android 16 Beta is now available on handset, tablet, and foldable form factors from partners including Honor, iQOO, Lenovo, OnePlus, OPPO, Realme, vivo, and Xiaomi. With more Android 16 partners and device types, many more users can run your app on the Android 16 Beta.

Android 16 Beta Release Partners: Google Pixel, iQOO, Lenovo, OnePlus, Sharp, Oppo, RealMe, vivo, Xiaomi, and Honor

Get your apps, libraries, tools, and game engines ready!

If you develop an SDK, library, tool, or game engine, it's even more important to prepare any necessary updates now to prevent your downstream app and game developers from being blocked by compatibility issues and allow them to target the latest SDK features. Please let your developers know if updates to your SDK are needed to fully support Android 16.

Testing involves installing your production app or a test app making use of your library or engine using Google Play or other means onto a device or emulator running Android 16 Beta 4. Work through all your app's flows and look for functional or UI issues. Review the behavior changes to focus your testing. Each release of Android contains platform changes that improve privacy, security, and overall user experience, and these changes can affect your apps. Here are several changes to focus on that apply, even if you aren't yet targeting Android 16:

  • JobScheduler: JobScheduler quotas are enforced more strictly in Android 16; enforcement will occur if a job executes while the app is on top, when a foreground service is running, or in the active standby bucket. setImportantWhileForeground is now a no-op. The new stop reason STOP_REASON_TIMEOUT_ABANDONED occurs when we detect that the app can no longer stop the job.
  • Broadcasts: Ordered broadcasts using priorities only work within the same process. Use other IPC if you need cross-process ordering.
  • ART: If you use reflection, JNI, or any other means to access Android internals, your app might break. This is never a best practice. Test thoroughly.
  • 16KB Page Size: If your app isn't 16KB-page-size ready, you can use the new compatibility mode flag, but we recommend migrating to 16KB for best performance.

Other changes that will be impactful once your app targets Android 16:

Get your app ready for the future:

  • Local network protection: Consider testing your app with the upcoming Local Network Protection feature. It will give users more control over which apps can access devices on their local network in a future Android major release.

Remember to thoroughly exercise libraries and SDKs that your app is using during your compatibility testing. You may need to update to current SDK versions or reach out to the developer for help if you encounter any issues.

Once you've published the Android 16-compatible version of your app, you can start the process to update your app's targetSdkVersion. Review the behavior changes that apply when your app targets Android 16 and use the compatibility framework to help quickly detect issues.

Two Android API releases in 2025

This Beta is for the next major release of Android with a planned launch in Q2 of 2025 and we plan to have another release with new developer APIs in Q4. This Q2 major release will be the only release in 2025 that includes behavior changes that could affect apps. The Q4 minor release will pick up feature updates, optimizations, and bug fixes; like our non-SDK quarterly releases, it will not include any intentional app-breaking behavior changes.

Android 16 2025 SDK release timeline

We'll continue to have quarterly Android releases. The Q1 and Q3 updates provide incremental updates to ensure continuous quality. We're putting additional energy into working with our device partners to bring the Q2 release to as many devices as possible.

There's no change to the target API level requirements and the associated dates for apps in Google Play; our plans are for one annual requirement each year, tied to the major API level.

Get started with Android 16

You can enroll any supported Pixel device to get this and future Android Beta updates over-the-air. If you don't have a Pixel device, you can use the 64-bit system images with the Android Emulator in Android Studio. If you are currently on Android 16 Beta 3 or are already in the Android Beta program, you will be offered an over-the-air update to Beta 4.

While the API and behaviors are final and we are very close to release, we'd still like you to report issues on the feedback page. The earlier we get your feedback, the better chance we'll be able to address it in this or a future release.

For the best development experience with Android 16, we recommend that you use the latest Canary build of Android Studio Narwhal. Once you're set up, here are some of the things you should do:

  • Compile against the new SDK, test in CI environments, and report any issues in our tracker on the feedback page.

We'll update the beta system images and SDK regularly throughout the Android 16 release cycle. Once you've installed a beta build, you'll automatically get future updates over-the-air for all later previews and Betas.

For complete information on Android 16 please visit the Android 16 developer site.

17 Apr 2025 7:00pm GMT

14 Apr 2025

feedAndroid Developers Blog

From dashboards to deeper data: Improve app quality and performance with new Play Console insights

Posted by Dan Brown, Dina Gandal and Hadar Yanos - Product Managers, Google Play


At Google Play, we partner with developers like you to help your app or game business reach its full potential, providing powerful tools and insights every step of the way. In Google Play Console, you'll find the features needed to test, publish, improve, and grow your apps - and today, we're excited to share several enhancements to give you even more actionable insights, starting with a redesigned app dashboard tailored to your key workflows, and new metrics designed to help you improve your app quality.

Focus on the metrics that matter with the redesigned app dashboard

The first thing you'll notice is the redesigned app dashboard, which puts the most essential insights front and center. We know that when you visit Play Console, you usually have a goal in mind - whether that's checking on your release status or tracking installs. That's why you'll now see your most important metrics grouped into four core developer objectives:

  • Test and release
  • Monitor and improve
  • Grow users, and
  • Monetize with Play

Each objective highlights the three metrics most important to that goal, giving you a quick grasp of how your app is doing at a glance, as well as how those metrics have changed over time. For example, you can now easily compare your latest production release against your app's overall performance, helping you to quickly identify any issues. In the screenshot below, the latest production release has a crash rate of 0.24%, a large improvement over the 28-day average crash rate shown under "Monitor and Improve."

screen recording of the redesigned app dashboard in Google Play Console
The redesigned app dashboard in Play Console helps you see your most important metrics at a glance.


At the top of the page, you'll see the status of your latest release changes prominently displayed so you know when it's been reviewed and approved. If you're using managed publishing, you can also see when things are ready to publish. And based on your feedback, engagement and monetization metrics now show a comparison to your previous year's data so you can make quick comparisons.

The new app dashboard also keeps you updated on the latest news from Play, including recent blog posts, new features relevant to your app, and even special invitations to early access programs.

In addition to what's automatically displayed on the dashboard, we know many of you track other vital metrics for your role or business. That's why we've added the "Monitor KPI trends" section at the bottom of your app dashboard. Simply scroll down and personalize your view by selecting the trends you need to monitor. This customized experience allows each user in your developer account to focus on their most important insights.

Later this year, we'll introduce new overview pages for each of the four core developer objectives. These pages will help you quickly understand your performance, showcase tools and features within each domain, and list recommended actions to optimize performance, engagement, and revenue across all your apps.

Get actionable notifications when and where you need them

If you spend a lot of time in Play Console, you may have already noticed the new notification center. Accessible from every page, the notification center helps you to stay up to date with your account and apps, and helps you to identify any issues that may need urgent attention.

To help you quickly understand and act on important information, we now group notifications about the same issue across multiple apps. Additionally, notifications that are no longer relevant will automatically expire, ensuring you only see what needs your attention. Plus, notifications will be displayed on the new app dashboard within the relevant objectives.

Improve app quality and performance with new Play Console metrics

One of Play's top goals is to provide the insights you need to build high-quality apps that deliver exceptional user experiences. We're continuing to expand these insights, helping you prevent issues like crashes or ANRs, optimize your app's performance, and reduce resource consumption on users' devices.

Users expect a polished experience across their devices, and we've learned from you it can be difficult to make your app layouts work seamlessly across phones and large screens. To help with this, we've introduced pre-review checks for incorrect edge-to-edge rendering, while another new check helps you detect and prevent large screen layout issues caused by letterboxing and restricted layouts, along with resources on how to fix them.

We're also making it easier to find and triage the most important quality issues in your app. The release dashboard in Play Console now displays prioritized quality issues from your latest release, alongside the existing dashboard features for monitoring post-launch, like crashes and ANRs. This addition provides a centralized view of user-impacting issues, along with clear instructions to help you resolve critical user issues and improve your users' experiences.

The quality panel in the redesigned app dashboard in Google Play Console
The quality panel at the top of the release dashboard gives you a prioritized view of issues that affect users on your latest release and provides instructions on how to fix them.


A new "low memory kill" (LMK) metric is available in Android vitals and the Reporting API. Low memory issues cause your app to terminate without any logging, and can be notoriously difficult to detect. We are making these issues visible with device-specific insights into memory constraints to help you identify and fix these problems. This will improve app stability and user engagement, which is especially crucial for games where LMKs can disrupt real-time gameplay.

The low memory kill metric in Android vitals gives you device-specific insights into low memory terminations, helping you improve app stability and user engagement.


We're also collaborating closely with leading OEMs like Samsung, leveraging their real-world insights to define consistent benchmarks for optimal technical quality across Android devices. Excessive wakelocks are a leading cause of battery drain, a top frustration for users. Today, we're launching the first of these new metrics in beta: excessive wake locks in Android vitals. Take a look at our wakelock documentation and provide feedback on the metric definition. Your input is essential as we refine this metric towards general availability, and will inform our strategy for making this information available to users on the Play Store so they can make informed decisions when choosing apps.

Together, these updates provide you with even more visibility into your app's performance and quality, enabling you to build more stable, efficient, and user-friendly apps across the Android ecosystem. We'll continue to add more metrics and insights over time. To stay informed about all the latest Play Console enhancements and easily find updates relevant to your workflow, explore our new What's new in Play Console page, where you can filter features by the four developer objectives.

14 Apr 2025 5:00pm GMT

Boost app performance and battery life: New Android Vitals Metrics are here

Posted by Karan Jhavar - Product Manager, Android Frameworks, and Dan Brown - Product Manager, Google Play

Android has long championed performance, continuously evolving to deliver exceptional user experiences. Building upon years of refinement, we're now focusing on pinpointing resource-intensive use cases and developing platform-level solutions that benefit all users, across the vast Android ecosystem.

Since the launch of Android vitals in Play Console in 2017, Play has been investing in providing fleet-wide visibility into performance issues, making it easier to identify and fix problems as they occur. Today, Android and Google Play are taking a significant step forward in partnership with top OEMs, like Samsung, leveraging their real-world insights into excessive resource consumption. Our shared goal is to make Android development more streamlined and consistent by providing a standardized definition of what good and great looks like when it comes to technical quality.

"Samsung is excited to collaborate with Android and Google Play on these new performance metrics. By sharing our user experience insights, we aim to help developers build truly optimized apps that deliver exceptional performance and battery life across the ecosystem. We believe this collaboration will lead to a more consistent and positive experience for all Android users."

- Samsung

We're embarking on a multi-year plan to empower you with the tools and data you need to understand, diagnose, and improve your app's resource consumption, resulting in happier and more engaged users, both for your app, and Android as a whole.

Today, we're launching the first of these new metrics in beta: excessive wake locks. This metric directly addresses one of the most significant frustrations for Android users - excessive battery drain. By optimizing your app's wake lock behavior, you can significantly enhance battery life and user satisfaction.

The Android vitals beta metric reports partial wake lock use as excessive when all of the partial wake locks, added together, run for more than 3 hours in a 24-hour period. The current iteration of excessive wake lock metrics tracks time only if the wake lock is held when the app is in the background and does not have a foreground service.

These new metrics will provide comprehensive, fleet-wide visibility into performance and battery life, equipping developers with the data needed to diagnose and resolve performance bottlenecks. We have also revamped our wake lock documentation which shares effective wake lock implementation strategies and best practices.
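
One of the simplest of those best practices is to hold a partial wake lock only for as long as the work actually needs it, and to always set a safety timeout. A minimal sketch; the tag and timeout values are illustrative:

import android.content.Context
import android.os.PowerManager

fun runWithPartialWakeLock(context: Context, work: () -> Unit) {
    val powerManager = context.getSystemService(Context.POWER_SERVICE) as PowerManager
    // "myapp:sync" is a placeholder tag; use one that identifies the work in Android vitals.
    val wakeLock = powerManager.newWakeLock(PowerManager.PARTIAL_WAKE_LOCK, "myapp:sync")
    wakeLock.acquire(10 * 60 * 1000L) // safety timeout so the lock cannot be held indefinitely
    try {
        work()
    } finally {
        if (wakeLock.isHeld) wakeLock.release() // release as soon as the work completes
    }
}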

In addition, we are also launching the excessive wake lock metric documentation to provide clear guidance on interpreting the metrics. We highly encourage developers to check out this page and provide feedback on this new metric along with their use case. Your input is invaluable in refining these metrics before their general availability. In this beta phase, we're actively seeking feedback on the metric definition and how it aligns with your app's use cases. Once we reach general availability, we will explore Play Store treatments to help users choose apps that meet their needs.

Later this year, we may introduce additional metrics in Android vitals highlighting additional critical performance issues.

Thank you for your ongoing commitment to delivering delightful, fast, and high-performance experiences to users across the entire Android ecosystem.

14 Apr 2025 4:59pm GMT

09 Apr 2025

feedAndroid Developers Blog

Prioritize media privacy with Android Photo Picker and build user trust

Posted by Tatiana van Maaren - Global T&S Partnerships Lead, Privacy & Security, and Roxanna Aliabadi Walker - Product Manager


At Google Play, we're dedicated to building user trust, especially when it comes to sensitive permissions and your data. We understand that managing files and media permissions can be confusing, and users often worry about which files apps can access. Since these files often contain sensitive information like family photos or financial documents, it's crucial that users feel in control. That's why we're working to provide clearer choices, so users can confidently grant permissions without sacrificing app functionality or their privacy.

Below are a set of best practices to consider for improving user trust in the sharing of broad access files, ultimately leading to a more successful and sustainable app ecosystem.

Prioritize user privacy with data minimization

Building user trust starts with requesting only the permissions essential for your app's core functions. We understand that photos and videos are sensitive data, and broad access increases security risks. That's why Google Play now restricts READ_MEDIA_IMAGES and READ_MEDIA_VIDEO permissions, allowing developers to request them only when absolutely necessary, typically for apps like photo/video managers and galleries.

Leverage privacy-friendly solutions

Instead of requesting broad storage access, we encourage developers to use the Android Photo Picker, introduced in Android 13. This tool offers a privacy-centric way for users to select specific media files without granting access to their entire library. Android photo picker provides an intuitive interface, including access to cloud-backed photos and videos, and allows for customization to fit your app's needs. In addition, this system picker is backported to Android 4.4, ensuring a consistent experience for all users. By eliminating runtime permissions, Android photo picker simplifies the user experience and builds trust through transparency.
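
As a small sketch of what this looks like in practice, the picker is launched through the PickVisualMedia activity result contract, and no storage permission is involved; the activity and callback logic below are placeholders:

import androidx.activity.ComponentActivity
import androidx.activity.result.PickVisualMediaRequest
import androidx.activity.result.contract.ActivityResultContracts

class ProfilePhotoActivity : ComponentActivity() {

    // Launches the system photo picker; the app only receives the URI the user selects.
    private val pickMedia =
        registerForActivityResult(ActivityResultContracts.PickVisualMedia()) { uri ->
            if (uri != null) {
                // Use the returned content URI, for example as the new profile picture.
            }
        }

    fun choosePhoto() {
        pickMedia.launch(
            PickVisualMediaRequest(ActivityResultContracts.PickVisualMedia.ImageOnly)
        )
    }
}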

Build trust through transparent data practices

We understand that some developers have historically used custom photo pickers for tailored user experiences. However, regardless of whether you use a custom or system picker, transparency with users is crucial. Users want to know why your app needs access to their photos and videos.

Developers should strive to provide clear and concise explanations within their apps, ideally at the point where the permission is requested. Consider the following best-practice guidelines when crafting your permission request mechanisms:

  • When requesting media access, provide clear explanations within your app. Specifically, tell users which media your app needs (e.g., all photos, profile pictures, sharing videos) and explain the functionality that relies on it (e.g., 'To choose a profile picture,' 'To share videos with friends').
  • Clearly outline how user data will be used and protected in your privacy policies. Explain whether data is stored locally, transmitted to a server, or shared with third parties. Reassure users that their data will be handled responsibly and securely.


Learn how Snap has embraced the Android System Picker to prioritize user privacy and streamline their media selection experience. Here's what they have to say about their implementation:

A grid of photos in the photo library is shown on a smartphone screen, including a waterfall and two people smiling and posing for the camera. The Google Photos interface is at the top, with the Photos tab selected, and one photo from the grid is selected for use


"One of our goals is to provide a seamless and intuitive communication experience while ensuring Snapchatters have control over their content. The new flow of the Android Photo Picker is the perfect balance of providing user control of the content they want to share while ensuring fast communication with friends on Snapchat."

- Marc Brown, Product Manager

Get started

Start building a more trustworthy app experience. Explore the Android Photo Picker and implement privacy-first data practices today.


Acknowledgement

Special thanks to: May Smith - Product Manager, and Anita Issagholyan - Senior Policy Specialist

09 Apr 2025 5:00pm GMT

Gemini in Android Studio for businesses: Develop with confidence, powered by AI

Posted by Sandhya Mohan - Product Manager


To empower Android developers at work, we're excited to announce a new offering of Gemini in Android Studio for businesses. This offering is specifically designed to meet the added privacy, security, and management needs of small and large organizations. We've heard that some businesses have additional needs that require stricter protection of sensitive data, and this offering delivers the same Gemini in Android Studio that you've grown accustomed to, now with the additional privacy enhancements that your organization might require.

Developers and admins can unlock these features and benefits by subscribing to Gemini Code Assist Standard or Enterprise editions. A Google Cloud administrator can purchase a subscription and assign licenses to developers in their organization directly from the Google Cloud console.

Your code stays secure

Our data governance policy helps ensure that customer code, customer inputs, and the generated recommendations are not used to train any shared models. Customers control and own their data and IP. The offering also comes with security features like Private Google Access, VPC Service Controls, and Enterprise Access Controls with granular IAM permissions to help enterprises adopt AI assistance at scale without compromising on security and privacy. Using a Gemini Code Assist Standard or Enterprise license enables multiple industry certifications, such as:

  • SOC 1/2/3
  • ISO/IEC 27001 (Information Security Management)
  • ISO/IEC 27017 (Cloud Security)
  • ISO/IEC 27018 (Protection of PII)
  • ISO/IEC 27701 (Privacy Information Management)

More details are at Certifications and security for Gemini.

IP indemnification

Organizations will benefit from generative AI IP indemnification, safeguarding their organizations against third parties claiming copyright infringement related to the AI-generated code. This added layer of protection is the same indemnification policy we provide to Google Cloud customers using our generative AI APIs, and allows developers to leverage the power of AI with greater confidence and reduced risk.

Code customization

Developers with a Code Assist Enterprise license can get tailored assistance customized to their organization's codebases by connecting to their GitHub, GitLab or BitBucket repositories (including on-premise installations), giving Gemini in Android Studio awareness of the classes and methods their team is most likely to use. This allows Gemini to tailor code completion suggestions, code generations, and chat responses to their business's best practices, and save developers time they would otherwise have to spend integrating with their company's preferred frameworks.

Designed for Android development

As always, we've designed Gemini in Android Studio with the unique needs of Android developers in mind, offering tailored assistance at every stage of the software development lifecycle. From the initial phases of writing, refactoring, and documenting your code, Gemini acts as an intelligent coding companion to boost productivity. With features like:

  • Build & Sync error support: Get targeted insights to help solve build and sync errors
screenshot of build and sync error support by Gemini in Android Studio


  • Gemini-powered App Quality Insights: Analyze crashes reported by Google Play Console and Firebase Crashlytics
screenshot of app quality insights by Gemini in Android Studio


  • Get help with Logcat crashes: Simply click on "Ask Gemini" to get a contextual response on how to resolve the crash.
screenshot of getting contextual responses on how to resolve a crash from by Gemini in Android Studio


In Android Studio, Gemini is designed specifically for the Android ecosystem, making it an invaluable tool throughout the entire journey of creating and publishing an Android app.

Check out Gemini in Android Studio for business

This offering for businesses marks a significant step forward in empowering Android development teams with the power of AI. With this subscription-based offering, no code is stored, and crucially, your code is never used for model training. By providing generative AI indemnification and robust enterprise management tools, we're enabling organizations to innovate faster and build high-quality Android applications with confidence.

Ready to get started? Here's what you need

To get started, you'll need a Gemini Code Assist Enterprise license and Android Studio Narwhal or Android Studio for Platform found on the canary release channel. Purchase your Gemini Code Assist license or contact a Google Cloud sales team today for a personalized consultation on how you can unlock the power of AI for your organization.

Note: Gemini for businesses is also available for Android Studio Platform users.

We appreciate any feedback on things you like or features you would like to see. If you find a bug, please report the issue and also check out known issues. Remember to also follow us on X, LinkedIn, Blog, or YouTube for more Android development updates!

09 Apr 2025 12:00am GMT

07 Apr 2025

feedAndroid Developers Blog

Widgets take center stage with One UI 7

Posted by André Labonté - Senior Product Manager, Android Widgets


On April 7th, Samsung will begin rolling out One UI 7 to more devices globally. Included in this bold new design is greater personalization, with an optimized widget experience and an updated set of One UI 7 widgets, ushering in a new era where widgets are more prominent to users and integral to the daily device experience.

This update presents a prime opportunity for Android developers to enhance their app experience with a widget:

  • More Visibility: Widgets put your brand and key features front and center on the user's device, so they're more likely to see it.
  • Better User Engagement: By giving users quick access to important features, widgets encourage them to use your app more often.
  • Increased Conversions: You can use widgets to recommend personalized content or promote premium features, which could lead to more conversions.
  • Happier Users Who Stick Around: Easy access to app content and features through widgets can lead to overall better user experience, and contribute to retention.

More discoverable than ever with Google Play's Widget Discovery features!

  • Dedicated Widgets Search Filter: Users can now directly search for apps with widgets using a dedicated filter on Google Play. This means your apps/games with widgets will be easily identified, helping drive targeted downloads and engagement.
  • New Widget Badges on App Detail Pages: We've introduced a visual badge on your app's detail pages to clearly indicate the presence of widgets. This eliminates guesswork for users and highlights your widget offerings, encouraging them to explore and utilize this capability.
  • Curated Widgets Editorial Page: We're actively educating users on the value of widgets through a new editorial page. This curated space showcases collections of excellent widgets and promotes the apps that leverage them. This provides an additional channel for your widgets to gain visibility and reach a wider audience.

Getting started with Widgets

Whether you are planning a new widget, or investing in an update to an existing widget, we have tools to help!

  • Quality Tiers are a great starting point to understand what makes a great Android widget. Consider making your widget resizable to the recommended sizes, so users can customize the size just right for them.
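
If you're starting a widget from scratch in Kotlin, Jetpack Glance is one way to do it. A minimal sketch, with the widget metadata XML and manifest registration omitted and the responsive sizes shown as placeholders:

import android.content.Context
import androidx.compose.ui.unit.DpSize
import androidx.compose.ui.unit.dp
import androidx.glance.GlanceId
import androidx.glance.appwidget.GlanceAppWidget
import androidx.glance.appwidget.GlanceAppWidgetReceiver
import androidx.glance.appwidget.SizeMode
import androidx.glance.appwidget.provideContent
import androidx.glance.text.Text

class HelloWidget : GlanceAppWidget() {

    // Responsive sizing lets the widget adapt its layout to the sizes users resize it to.
    override val sizeMode = SizeMode.Responsive(
        setOf(DpSize(120.dp, 120.dp), DpSize(250.dp, 120.dp))
    )

    override suspend fun provideGlance(context: Context, id: GlanceId) {
        provideContent {
            Text("Hello from a widget")
        }
    }
}

class HelloWidgetReceiver : GlanceAppWidgetReceiver() {
    override val glanceAppWidget: GlanceAppWidget = HelloWidget()
}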


Leverage widgets for increased app visibility, enhanced user engagement, and ultimately, higher conversions. By embracing widgets, you're not just optimizing for a specific OS update; you're aligning with a broader trend towards user-centric, glanceable experiences.

07 Apr 2025 7:00pm GMT

27 Mar 2025

feedAndroid Developers Blog

Media3 1.6.0 — what’s new?

Posted by Andrew Lewis - Software Engineer

This article is cross-published on Medium

Media3 1.6.0 is now available!

This release includes a host of bug fixes, performance improvements and new features. Read on to find out more, and as always please check out the full release notes for a comprehensive overview of changes in this release.


Playback, MediaSession and UI

ExoPlayer now supports HLS interstitials for ad insertion in HLS streams. To play these ads using ExoPlayer's built-in playlist support, pass an HlsInterstitialsAdsLoader.AdsMediaSourceFactory as the media source factory when creating the player. For more information see the official documentation.
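
Wiring this up is mostly a matter of handing the factory to the player at build time. A partial sketch; constructing the HlsInterstitialsAdsLoader.AdsMediaSourceFactory itself (ads loader, ad view provider, and the wrapped media source factory) is covered in the documentation and is assumed here:

import android.content.Context
import androidx.media3.exoplayer.ExoPlayer
import androidx.media3.exoplayer.source.MediaSource

// adsMediaSourceFactory is the HlsInterstitialsAdsLoader.AdsMediaSourceFactory described above.
fun buildPlayer(context: Context, adsMediaSourceFactory: MediaSource.Factory): ExoPlayer =
    ExoPlayer.Builder(context)
        .setMediaSourceFactory(adsMediaSourceFactory)
        .build()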

This release also includes experimental support for 'pre-warming' decoders. Without pre-warming, transitions from one playlist item to the next may not be seamless in some cases: for example, the player may need to switch codecs, or decode some video frames to reach the start position of the new media item. With pre-warming enabled, a secondary video renderer can start decoding the new media item earlier, giving near-seamless transitions. You can try this feature out by enabling it on the DefaultRenderersFactory. We're actively working on further improvements to the way we interact with decoders, including adding a 'fast seeking mode', so stay tuned for updates in this area.

Media3 1.6.0 introduces a new media3-ui-compose module that contains functionality for building Compose UIs for playback. You can find a reference implementation in the Media3 Compose demo and learn more in Getting started with Compose-based UI. At this point we're providing a first set of foundational state classes that link to the Player, in addition to some basic composable building blocks. You can use these to build your own customized UI widgets. We plan to publish default Material-themed composables in a later release.

Some other improvements in this release include: moving system calls off the application's main thread to the background (which should reduce ANRs), a new decoder module wrapping libmpegh (for bundling object-based audio decoding in your app), and a fix for the Cast extension for apps targeting API 34+. There are also fixes across MPEG-TS and WebVTT extraction, DRM, downloading/caching, MediaSession and more.

Media extraction and frame retrieval

The new MediaExtractorCompat is a drop-in replacement for the framework MediaExtractor but implemented using Media3's extractors. If you're using the Android framework MediaExtractor, consider migrating to get consistent behavior across devices and reduce crashes.

We've also added experimental support for retrieving video frames in a new class, ExperimentalFrameExtractor, which can act as a replacement for the MediaMetadataRetriever getFrameAtTime methods. There are a few benefits over the framework implementation: HDR input is supported (by default tonemapping down to SDR, with the option to produce HLG bitmaps from Android 14 onwards), Media3 effects can be applied (including Presentation to scale the output to a desired size), and it runs faster on some devices because color space conversion is moved to the GPU. Here's an example of using the new API:

val bitmap =
    withContext(Dispatchers.IO) {
        val configuration =
            ExperimentalFrameExtractor.Configuration
                .Builder()
                .setExtractHdrFrames(true)
                .build()
        val frameExtractor =
            ExperimentalFrameExtractor(
                context,
                configuration,
            )

        frameExtractor.setMediaItem(mediaItem, /*effects*/ listOf())

        val frame = frameExtractor.getFrame(timestamps).await()
        frameExtractor.release()
        frame.bitmap
    }

Editing, transcoding and export

Media3 1.6.0 includes performance, stability and functional improvements in Transformer. Highlights include: support for transcoding/transmuxing Dolby Vision streams on devices that support this format and a new MediaProjectionAssetLoader for recording from the screen, which you can try out in the Transformer demo app.

Check out Common media processing operations with Jetpack Media3 Transformer for some code snippets showing how to process media with Transformer, and tips to reduce latency.
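
For orientation, a bare-bones export might look like the sketch below; the H.265 target and the omitted Transformer.Listener wiring are illustrative, and the linked article covers the full set of options:

import android.content.Context
import androidx.media3.common.MediaItem
import androidx.media3.common.MimeTypes
import androidx.media3.transformer.Transformer

fun transcodeToH265(context: Context, inputUri: String, outputPath: String) {
    val transformer = Transformer.Builder(context)
        .setVideoMimeType(MimeTypes.VIDEO_H265) // request transcoding of video tracks to H.265
        .build()
    // start() is asynchronous; add a Transformer.Listener (not shown) to observe completion
    // and errors.
    transformer.start(MediaItem.fromUri(inputUri), outputPath)
}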

This release also includes a new Kotlin-based demo app showcasing Media3's video effects framework. You can select from a variety of video effects and preview them via ExoPlayer.setVideoEffects.
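
Previewing effects on an existing player only takes a call to setVideoEffects once you have the effect instances. A small sketch using the built-in Presentation effect; note that the video effects APIs are still marked unstable, so opt-in annotations may be required:

import androidx.media3.effect.Presentation
import androidx.media3.exoplayer.ExoPlayer

// Scales the video output to 480p before it is rendered for preview.
fun previewWithEffects(player: ExoPlayer) {
    player.setVideoEffects(listOf(Presentation.createForHeight(480)))
}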

Media3 video effect animation
Animation showing contrast adjustment and a confetti effect in the new demo app


Get started with Media3 1.6.0

Please get in touch via the Media3 issue Tracker if you run into any bugs, or if you have questions or feature requests. We look forward to hearing from you!

27 Mar 2025 4:30pm GMT