10 Jan 2025

Android Developers Blog

Apps adopt Transformer to support more reliable and performant media editing use cases

Posted by Caren Chang - Developer Relations Engineer

The Jetpack Media3 library enables Android developers to build high-quality media apps. As part of the Media3 library, the Transformer module aims to provide easy-to-use, reliable, and performant APIs for transcoding and editing media.

For example, apps can use Transformer to apply editing operations such as trimming a long media file or applying effects to video tracks. Transformer can also be used to convert media files from one format to another, such as adjusting the resolution or encoding of the media file.
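
To make that concrete, here is a minimal sketch of trimming a clip with Transformer, assuming a recent androidx.media3:media3-transformer release; the input URI and output path are placeholders:

import android.content.Context
import androidx.media3.common.MediaItem
import androidx.media3.transformer.Composition
import androidx.media3.transformer.EditedMediaItem
import androidx.media3.transformer.ExportException
import androidx.media3.transformer.ExportResult
import androidx.media3.transformer.Transformer

fun trimClip(context: Context) {
  // Keep only the segment between 2s and 7s of the source video.
  val mediaItem = MediaItem.Builder()
      .setUri("content://media/external/video/media/1") // placeholder URI
      .setClippingConfiguration(
          MediaItem.ClippingConfiguration.Builder()
              .setStartPositionMs(2_000)
              .setEndPositionMs(7_000)
              .build())
      .build()

  val transformer = Transformer.Builder(context)
      .addListener(object : Transformer.Listener {
        override fun onCompleted(composition: Composition, exportResult: ExportResult) {
          // The trimmed file is ready at the output path.
        }

        override fun onError(composition: Composition, exportResult: ExportResult,
                             exportException: ExportException) {
          // Surface or log the failure.
        }
      })
      .build()

  transformer.start(EditedMediaItem.Builder(mediaItem).build(), "/path/to/output.mp4")
}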

Developing Transformer APIs

As part of the process to introduce new APIs, our engineering team works closely with Google apps such as Google Photos to test and experiment with the new APIs. Experimental flags are first introduced to enable performance improvements. Once the results are conclusive, these experimental features are then built into the default API implementations or promoted to public APIs for all apps to use. This approach allows Transformer APIs to be tested on a wide variety of devices.

Transformer adoption in apps

Apps that have been using Transformer in production have observed in-app performance improvements, less code to maintain, and a better developer experience. Let's take a closer look at how Transformer has helped apps with their media-editing use cases.

One of users' favorite features in Google Photos is memory sharing, where snippets of your life story that are curated and presented as Google Photos memories can now be shared as videos to social media and chat apps. However, the process of combining media items to create a video on device is resource intensive and subject to significant latency, especially on low-end devices. To reduce this latency and enable the feature on a wider range of devices, Photos adopted Transformer in their media creation pipeline. Along with other improvements made, the team found that Transformer played a part in reducing the median user latency for creating memory videos by 41% on high-end devices and 27% on mid-range devices.

The Photos app also enables users to perform media edits such as trimming or rotating a video. After the app adopted Transformer APIs for rotating videos, median save latency dropped by 79% for applicable videos. The app also adopted Transformer's API for optimized video trimming and saw video save latency decrease by 64%.
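
For illustration, here is a sketch of the rotation case using Transformer's effects API; the 90-degree rotation is an example, and the trim-optimization flag shown is experimental in current Media3 releases and may change:

import android.content.Context
import androidx.media3.common.MediaItem
import androidx.media3.effect.ScaleAndRotateTransformation
import androidx.media3.transformer.EditedMediaItem
import androidx.media3.transformer.Effects
import androidx.media3.transformer.Transformer

fun rotateAndExport(context: Context) {
  // Rotate the video 90 degrees with a GL-backed effect; audio is untouched.
  val rotate = ScaleAndRotateTransformation.Builder()
      .setRotationDegrees(90f)
      .build()

  val editedItem = EditedMediaItem.Builder(MediaItem.fromUri("file:///path/in.mp4"))
      .setEffects(Effects(/* audioProcessors= */ emptyList(), listOf(rotate)))
      .build()

  val transformer = Transformer.Builder(context)
      // Experimental flag: re-encode only the frames a trim actually
      // requires and copy the rest, which is what speeds up trimmed saves.
      .experimentalSetTrimOptimizationEnabled(true)
      .build()

  transformer.start(editedItem, "/path/out.mp4") // placeholder output path
}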

1 Second Everyday is a personal video journal that helps you create captivating montages and timelapses. One of the app's main user journeys is sequentially combining short videos to create a meaningful movie. After adopting Transformer for this use case, the app observed video encoding performance up to 5x faster, allowing the team to explore enabling 4K and HDR support. The Transformer adoption also helped decrease relevant code by 30%, making the code base easier for the developers to maintain.
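
A sketch of that sequential-combination pattern with Transformer's Composition API, assuming a recent Media3 release; clip URIs and the output path are placeholders:

import android.content.Context
import androidx.media3.common.MediaItem
import androidx.media3.transformer.Composition
import androidx.media3.transformer.EditedMediaItem
import androidx.media3.transformer.EditedMediaItemSequence
import androidx.media3.transformer.Transformer

fun exportMontage(context: Context, clipUris: List<String>) {
  // Play each short clip one after another in a single exported video.
  val sequence = EditedMediaItemSequence(
      clipUris.map { uri -> EditedMediaItem.Builder(MediaItem.fromUri(uri)).build() })
  val composition = Composition.Builder(listOf(sequence)).build()

  Transformer.Builder(context)
      .build()
      .start(composition, "/path/montage.mp4")
}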

BandLab is the next-generation music creation platform used by millions around the world to make and share their music. The app originally used the low-level MediaCodec APIs for its video creation use cases, but found that this implementation resulted in native crashes that were difficult to debug. After researching Transformer, the team decided to migrate from MediaCodec to Transformer. The migration took only 12 working days and resulted in a simpler codebase and a more maintainable pipeline for their media creation use cases. In addition, the native crashes the app had previously observed stopped occurring entirely.

What's next for Transformer?

We're excited to see Transformer's adoption in the developer community, and will continue adding new features to support more media-editing use cases for the Android ecosystem, including:

  • Better support for previewing media edits
  • Improving the performance and developer experience for video frame extraction
  • Easier integration with AI effects
  • and much more

Keep an eye on what we're working on in the Media3 GitHub, and file feature requests to help shape the future of Transformer!

10 Jan 2025 5:00pm GMT

09 Jan 2025

Android Developers Blog

Android Studio Ladybug Feature Drop is Stable!

Posted by Steven Jenkins - Product Manager, Android Studio

Today, we are thrilled to announce the stable release of Android Studio Ladybug 🐞 Feature Drop (2024.2.2)!

Accelerate your productivity with Gemini in Android Studio, Animation Preview support for Wear Tiles, App Links Assistant, and much more. All of these new features are designed to help you build high-quality Android apps faster.

Read on to learn more about all the updates, quality improvements, and new features across your key workflows in Android Studio Ladybug Feature Drop, and download the latest stable version today to try them out!

Android Studio Ladybug Feature Drop

Gemini in Android Studio

Gemini Code Transforms

Gemini Code Transforms can help you modify, optimize, or add code to your app with AI assistance. Simply right-click in your code editor and select "Gemini > Generate code" or highlight code and select "Gemini > Transform selected code." You can also use the keyboard shortcut Ctrl+\ (⌘+\ on macOS) to bring up the Gemini prompt. Describe the changes you want to make to your code, and Gemini will suggest a code diff, allowing you to easily review and accept only the suggestions you want.

With Gemini Code Transforms, you can simplify complex code, perform specific code transformations, or even generate new functions. You can also refine the suggested code to iterate on the code suggestions with Gemini. It's an AI coding assistant right in your editor, helping you write better code more efficiently.

Screenshot: Gemini Code Transform in the Android Studio code editor

Rename

Gemini in Android Studio enhances your workflow with intelligent assistance for common tasks. When renaming a single variable, class, or method from the code editor, the "Refactor > Rename" action uses Gemini to suggest contextually appropriate names, making it smoother and more efficient to refactor names as you're coding in the editor.

Screenshot: Gemini renaming a variable in the Android Studio code editor

Rethink

For larger renaming refactors, Gemini can "Rethink variable names" across your whole file. This feature analyzes your code and suggests more intuitive and descriptive names for variables and methods, improving readability and maintainability.

Screenshot: Gemini suggesting more descriptive names for variables in Android Studio (Rethink)

Commit Message

Gemini now assists with commit messages. When committing changes to version control, it analyzes your code modifications and suggests a detailed commit message.

Screenshot: Gemini suggesting a detailed commit message in Android Studio

Generate Documentation

Gemini in Android Studio makes documenting your code easier than ever. To generate clear and concise documentation, select a code snippet, right-click in the editor and choose "Gemini > Document Function" (or "Document Class" or "Document Property", depending on the context). Gemini will generate a draft that you can then refine and perfect before accepting the changes. This streamlined process helps you create informative documentation quickly and efficiently.

Screenshot: Gemini adding documentation to a code snippet in Android Studio

Debug

Animation Preview support for Wear OS Tiles

Animation Preview support for Wear OS Tiles helps you visualize and debug tile animations with ease. It provides a real-time view of your animations, allowing you to preview them, control playback with options like play, pause, and speed adjustment, and inspect key properties such as initial/end states and animation curves. You can even dynamically modify animation code and instantly observe the results within the inspector, streamlining the debugging and refinement process.

Screenshot: Animation Preview support for Wear OS Tiles in Android Studio

Wear Health Services

The Wear Health Services feature in Android Studio simplifies the process of testing health and fitness apps by enabling Wear Health Services within the emulator. You can now easily customize various parameters for a given exercise such as heart rate, distance, and speed without needing a physical device or performing the activity itself. This streamlines the development and testing workflow, allowing for faster iteration and more efficient debugging of health-related features.

Screenshot: Wear Health Services in the Android Studio emulator

Optimize

App Links Assistant

App Links Assistant simplifies the process of implementing app links by generating valid Digital Asset Links JSON that resolves broken deep links for your app. You can review the JSON file and then upload it to your website, resolving issues quickly. This eliminates the manual creation of the JSON file, saving you time and effort. The tool also allows you to compare existing JSON files with newly generated ones to easily identify any discrepancies.

Screenshot: App Links Assistant in Android Studio
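
For context, the file App Links Assistant generates is the standard Digital Asset Links descriptor your site hosts at https://yourdomain.com/.well-known/assetlinks.json. A minimal example, with a placeholder package name and certificate fingerprint:

[{
  "relation": ["delegate_permission/common.handle_all_urls"],
  "target": {
    "namespace": "android_app",
    "package_name": "com.example.app",
    "sha256_cert_fingerprints": ["AA:BB:CC:...:FF"]
  }
}]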

Google Play SDK Insights Integration

Android Studio now provides enhanced lint warnings for public SDKs from the Google Play SDK Index and the Google Play SDK Console, helping you identify and address potential issues. These warnings alert you if an SDK is outdated, violates Google Play policies, or has known security vulnerabilities. Furthermore, Android Studio provides helpful quick fixes and recommended version ranges whenever possible, making it easier to update your dependencies and keeping your app more secure and compliant.

Screenshot: SDK Insights lint warning in a Gradle build file flagging an outdated Firebase Authentication library that prevents release to the Google Play Console

Quality improvements

Beyond new features, we also continued to improve the overall quality and stability of Android Studio. In fact, the Android Studio team addressed over 770 bugs during the Ladybug Feature Drop development cycle.

IntelliJ platform update

Android Studio Ladybug Feature Drop (2024.2.2) includes the IntelliJ 2024.2 platform release, which has many new features such as more intuitive full-line code completion suggestions, a preview in the Search Everywhere dialog, and improved log management for the Java** and Kotlin programming languages.

See the full IntelliJ 2024.2 release notes.

Summary

To recap, Android Studio Ladybug Feature Drop includes the following enhancements and features:

Gemini in Android Studio

  • Gemini Code Transforms
  • Rename
  • Rethink
  • Commit Message
  • Generate Documentation

Debug

  • Animation Preview support for Wear OS Tiles
  • Wear Health Services

Optimize

  • App Links Assistant
  • Google Play SDK Insights Integration

Quality Improvements

  • 770+ bugs addressed

IntelliJ Platform Update

  • More intuitive full-line code completion suggestions
  • Preview in the Search Everywhere dialog
  • Improved log management for Java and Kotlin programming languages

Getting Started

Ready for next-level Android development? Download Android Studio Ladybug Feature Drop and unlock these cutting-edge features today. As always, your feedback is important to us - check known issues, report bugs, suggest improvements, and be part of our vibrant community on LinkedIn, Medium, YouTube, or X. Let's build the future of Android apps together!


**Java is a trademark or registered trademark of Oracle and/or its affiliates.

09 Jan 2025 7:00pm GMT

08 Jan 2025

Android Developers Blog

Performance Class helps Google Maps deliver premium experiences

Posted by Nevin Mital - Developer Relations Engineer, Android Media

The Android ecosystem features a diverse range of devices, and it can be difficult to build experiences that take advantage of new or premium hardware features while still working well for users on all devices. With Android 12, we introduced the Media Performance Class (MPC) standard to help developers better understand a device's capabilities and identify high-performing devices. For a refresher on what MPC is, please see our last blog post, Using performance class to optimize your user experience, or check out the Performance Class documentation.

Earlier this year, we published the first stable release of the Jetpack Core Performance library as the recommended solution for more reliably obtaining a device's MPC level. In particular, this library introduces the PlayServicesDevicePerformance class, an API that queries Google Play Services to get the most up-to-date MPC level for the current device and build. I'll get into the technical details further down, but let's start by taking a look at how Google Maps was able to tailor a feature launch to best fit each device with MPC.

Performance Class unblocks premium experience launch for Google Maps

Google Maps recently took advantage of the expanded device coverage enabled by the Play Services module to unblock a feature launch. Google Maps wanted to update their UI by increasing the transparency of some layers, which meant rendering more of the map. During the initial rollout, they found latency increased enough on many devices, especially towards the low end, that they had to stop the rollout. To resolve this, the Maps team started by slicing an existing key metric, "seconds to UI item visibility", by MPC level, which revealed that while all devices had a small increase in this latency, devices without an MPC level had the largest increase.

Chart: A/B test results for "seconds to UI item visibility", comparing the control with the increased-transparency treatment across Media Performance Class levels; the updated experience shipped to devices reporting an MPC level, while devices without one kept the previous UI.

With these results in hand, Google Maps started their rollout again, but this time only launching the feature on devices that report an MPC level. As devices continue to get updated and meet the bar for MPC, the updated Google Maps UI will be available to them as well.

The new Play Services module

MPC level requirements are defined in the Android Compatibility Definition Document (CDD), then devices and Android builds are validated against these requirements by the Android Compatibility Test Suite (CTS). The Play Services module of the Jetpack Core Performance library leverages these test results to continually update a device's reported MPC level without any additional effort on your end. This also means that you'll immediately have access to the MPC level for new device launches without needing to acquire and test each device yourself, since it already passed CTS. If the MPC level is not available from Google Play Services, the library will fall back to the MPC level declared by the OEM as a build constant.
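
For reference, the OEM-declared fallback mentioned above is the static Build.VERSION.MEDIA_PERFORMANCE_CLASS constant introduced in Android 12. A minimal sketch of reading it directly, without the library:

import android.os.Build

// Returns the OEM-declared MPC level, or 0 if the device doesn't declare one.
// The field only exists on Android 12 (API 31) and later, hence the guard.
fun staticMpcLevel(): Int =
    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.S) {
      Build.VERSION.MEDIA_PERFORMANCE_CLASS
    } else {
      0 // MPC is not defined below Android 12.
    }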

Flowchart: how a device's Performance Class level is determined, from manufacturers and CDD requirements through CTS tests and a grader to the Play Services module.

As of writing, more than 190M in-market devices covering over 500 models across 40+ brands report an MPC level. This coverage will continue to grow as older devices running Android 11 and up receive newer builds.

Using the Core Performance library

To use Jetpack Core Performance, start by adding a dependency for the relevant modules in your Gradle configuration, and create an instance of DevicePerformance. Initializing a DevicePerformance should only happen once in your app, as early as possible - for example, in the onCreate() lifecycle event of your Application. In this example, we'll use the Google Play services implementation of DevicePerformance.

// Implementation of Jetpack Core library.
implementation("androidx.core:core-ktx:1.12.0")
// Enable APIs to query for device-reported performance class.
implementation("androidx.core:core-performance:1.0.0")
// Enable APIs to query Google Play Services for performance class.
implementation("androidx.core:core-performance-play-services:1.0.0")


import android.app.Application
import androidx.core.performance.DevicePerformance
import androidx.core.performance.play.services.PlayServicesDevicePerformance

class MyApplication : Application() {
  lateinit var devicePerformance: DevicePerformance

  override fun onCreate() {
    super.onCreate()
    // Use a class derived from the DevicePerformance interface.
    devicePerformance = PlayServicesDevicePerformance(applicationContext)
  }
}

Then, later in your app when you want to retrieve the device's MPC level, you can call getMediaPerformanceClass():

import android.app.Activity
import android.os.Build
import android.os.Bundle
import androidx.core.performance.DevicePerformance

class MyActivity : Activity() {
  private lateinit var devicePerformance: DevicePerformance

  override fun onCreate(savedInstanceState: Bundle?) {
    super.onCreate(savedInstanceState)
    // Note: Good app architecture is to use a dependency framework. See
    // https://developer.android.com/training/dependency-injection for more
    // information.
    devicePerformance = (application as MyApplication).devicePerformance
  }

  override fun onResume() {
    super.onResume()
    when {
      devicePerformance.mediaPerformanceClass >= Build.VERSION_CODES.UPSIDE_DOWN_CAKE -> {
        // MPC level 34 and later.
        // Provide the most premium experience for the highest performing devices.
      }
      devicePerformance.mediaPerformanceClass == Build.VERSION_CODES.TIRAMISU -> {
        // MPC level 33.
        // Provide a high quality experience.
      }
      else -> {
        // MPC level 31, 30, or undefined.
        // Remove extras to keep the experience functional.
      }
    }
  }
}

Strategies for using Performance Class

MPC is intended to identify high-end devices, so you can expect to see MPC levels for the top devices from each year, which are the devices you'll likely want to support for the longest time. For example, the Pixel 9 Pro released with Android 14 and reports an MPC level of 34, the latest level definition at launch.

You should use MPC as a complement to any existing device clustering solutions you already use, such as querying a device's static specs or manually blocklisting problematic devices. An area where MPC can be a particularly helpful tool is new device launches: new devices are included as soon as they pass CTS, so you can use MPC to gauge their capabilities right from the start, without needing to acquire the hardware yourself or manually test each device.

A great first step to get involved is to include MPC levels in your telemetry. This can help you identify patterns in error reports or generally get a better sense of the devices your user base uses if you segment key metrics by MPC level. From there, you might consider using MPC as a dimension in your experimentation pipeline, for example by setting up A/B testing groups based on MPC level, or by starting a feature rollout with the highest MPC level and working your way down. As discussed previously, this is the approach that Google Maps took.
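
As a hedged sketch of that first step, you might record the MPC level as a user property in your analytics, assuming Firebase Analytics here; the "mpc_level" property name is a hypothetical choice:

import androidx.core.performance.DevicePerformance
import com.google.firebase.analytics.FirebaseAnalytics

// Tag this install with its MPC level so key metrics and error reports
// can later be segmented by it.
fun reportMpcLevel(analytics: FirebaseAnalytics, devicePerformance: DevicePerformance) {
  analytics.setUserProperty("mpc_level", devicePerformance.mediaPerformanceClass.toString())
}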

You could further use MPC to tune a user-facing feature, for example by adjusting the number of concurrent video playbacks your app attempts based on the MPC level's concurrent codec guarantees. However, make sure to still query a device's runtime capabilities when using this approach, as they may differ depending on the environment and state the device is in.
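
One illustrative way to structure that tuning; the per-level playback counts below are placeholder assumptions for this sketch, not values from the CDD, and a real app should still check runtime codec capabilities before committing:

import android.os.Build
import androidx.core.performance.DevicePerformance

// Pick a ceiling for simultaneous video playbacks from the MPC level,
// staying conservative when no level is reported.
fun maxConcurrentPlaybacks(devicePerformance: DevicePerformance): Int =
    when {
      devicePerformance.mediaPerformanceClass >= Build.VERSION_CODES.UPSIDE_DOWN_CAKE -> 4
      devicePerformance.mediaPerformanceClass >= Build.VERSION_CODES.R -> 2
      else -> 1 // No MPC level reported.
    }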

Get in touch!

If MPC sounds like it could be useful for your app, please give it a try! You can get started by taking a look at our sample code or documentation. We welcome you to share any questions or feedback you have in this short form.


This blog post is a part of Camera and Media Spotlight Week. We're providing resources - blog posts, videos, sample code, and more - all designed to help you uplevel the media experiences in your app.

To learn more about what Spotlight Week has to offer and how it can benefit you, be sure to read our overview blog post.

08 Jan 2025 5:00pm GMT