04 Feb 2025
TalkAndroid
Board Kings Free Rolls – Updated Every Day!
Run out of rolls for Board Kings? Find links for free rolls right here, updated daily!
04 Feb 2025 6:55pm GMT
Coin Tales Free Spins – Updated Every Day!
Tired of running out of Coin Tales Free Spins? We update our links daily, so you won't have that problem again!
04 Feb 2025 6:53pm GMT
Avatar World Codes – February 2025 – Updated Daily
Find all the latest Avatar World Codes right here in this article! Read on for more!
04 Feb 2025 6:50pm GMT
Coin Master Free Spins & Coins Links
Find all the latest Coin Master free spins right here! We update daily, so be sure to check back every day!
04 Feb 2025 6:49pm GMT
Monopoly Go Events Schedule Today – Updated Daily
Currently active events are the Knightly Quest Event and the Iceberg Hop Tournament Event. Special Event: Juggle Jam.
04 Feb 2025 6:46pm GMT
Family Island Free Energy Links (Updated Daily)
Tired of running out of energy on Family Island? We have all the latest Family Island Free Energy links right here, and we update these daily!
04 Feb 2025 6:40pm GMT
Monopoly Go – Free Dice Links Today (Updated Daily)
If you keep on running out of dice, we have just the solution! Find all the latest Monopoly Go free dice links right here!
04 Feb 2025 6:37pm GMT
Crazy Fox Free Spins & Coins (Updated Daily)
If you need free coins and spins in Crazy Fox, look no further! We update our links daily to bring you the newest working links!
04 Feb 2025 6:32pm GMT
Match Masters Free Gifts, Coins, And Boosters (Updated Daily)
Tired of running out of boosters for Match Masters? Find new Match Masters free gifts, coins, and booster links right here! Updated Daily!
04 Feb 2025 6:26pm GMT
Solitaire Grand Harvest – Free Coins (Updated Daily)
Get Solitaire Grand Harvest free coins now, new links added daily. Only tested and working links, complete with a guide on how to redeem the links.
04 Feb 2025 6:24pm GMT
Dice Dreams Free Rolls – Updated Daily
Get the latest Dice Dreams free rolls links, updated daily! Complete with a guide on how to redeem the links.
04 Feb 2025 6:23pm GMT
Samsung May Take a Page Out of the Z Fold Book with the S26 Ultra
The under-display selfie camera might find its way to Samsung's next S series flagship.
04 Feb 2025 6:00pm GMT
Verizon Highlights Some Interesting Consumer Facts for 2024
Verizon used its position to explore a lot of consumer trends in 2024, and there are some pretty enlightening insights.
04 Feb 2025 4:30pm GMT
Vivo X200 Pro Review: Seriously Good
Vivo's flagship X200 Pro is in for review, just how serious a contender is it?
04 Feb 2025 3:04pm GMT
Here Are the Cheapest Countries to Buy a Samsung Flagship
Not all Galaxy prices are built equal, and buying the latest Samsung flagships is more expensive in some regions than others.
04 Feb 2025 2:22pm GMT
My Hotpot Story Codes – February 2025
Find the latest My Hotpot Story codes here! Keep reading for more!
04 Feb 2025 5:30am GMT
30 Jan 2025
Android Developers Blog
Meet the Android Studio Team: A Conversation with Product Manager, Paris Hsu
Posted by Ashley Tschudin - Social Media Specialist, MTP at Google
Welcome to "Meet the Android Studio Team"; a short blog series where we pull back the curtain and introduce you to the passionate people who build your favorite Android development tools. Get to know the talented minds - engineers, designers, product managers, and more - who pour their hearts into crafting the best possible experience for Android developers.
Join us each week to meet a new member of the team and explore their unique perspectives.
Paris Hsu: Empowering Android developers with Compose tools
Meet Paris Hsu, a Product Manager at Google passionate about empowering developers to build incredible Android apps.
Her journey to the Android Studio team started with a serendipitous internship at Microsoft, where she discovered the power of developer tools. Now, as part of the UI Tools team, Paris champions intuitive solutions that streamline the development process, like the innovative Compose Tools suite.
In this installment of "Meet the Android Studio Team," Paris shares insights into her work, the importance of developer feedback, and her dream Android feature (hint: it involves acing that forehand).
Can you tell us about your journey to becoming a part of the Android Studio team? What sparked your interest in Android development?
Honestly, I joined a bit by chance! The summer before my last year of grad school, I was in Microsoft's Garage incubator internship program. Our project, InkToCode, turned handwritten designs into code. It was my first experience building developer tools and made me realize how powerful developer tools can be, which led me to the Android Studio team. Now, after 6 years, I'm constantly amazed by what Android developers create - from innovative productivity apps to immersive games. It's incredibly rewarding to build tools that empower developers to create more.
In your opinion, what is the most impactful feature or improvement the Android Studio team has introduced in recent years, and why?
As part of the UI Tools team in Android Studio, I'm biased towards Compose Tools! Our team spent a lot of time rethinking how we can take a code-first approach for tools as we transition the community from XML to Compose. Features like the Compose Preview and its submodes (Interactive, Animation, Deploy preview) enable fast UI iteration, while features such as Layout Inspector and Compose UI Check help find and diagnose UI issues with ease. We are also exploring ways to apply multimodal AI in these tools to help developers write high-quality, adaptive, and inclusive Compose code more quickly.
How does the Android Studio team ensure that products or features meet the ever-changing needs of developers?
We are constantly engaging with developers and listening to their feedback to ensure we are meeting their needs. Some examples:
- Direct feedback: UXR studies, annual developer surveys, and Buganizer reports provide valuable insights.
- Early access: We release Early Access Programs (EAPs) for new features, allowing developers to test them and provide feedback before official launch.
- Community engagement: We have advisory boards with experienced Android developers, gather feedback from Google Developer Experts (GDEs), and attend conferences to connect directly with the community.
How does the Studio team contribute to Google's broader vision for the Android platform?
I think Android Studio contributes to Google's broader mission by providing Android developers with powerful and intuitive tools. This way, developers are empowered to create amazing apps that bring the best of Google's services and information to our users. Whether it's accessing knowledge through Search, leveraging Gemini, staying connected with Maps, or enjoying entertainment on YouTube, Android Studio helps developers build the experiences that connect people to what matters most.
If you could wave a magic wand and add one dream feature to the Android universe, what would it be and why?
Anyone who knows me knows that I am recently super obsessed with tennis. I would love to see more coaching wearables (e.g. Pixel Watch, Pixel Racket?!). I would love real-time feedback on my serve and especially forehand stroke analysis.
Learn more about Compose Tools
Inspired by Paris' passion for empowering developers to build incredible Android apps? To learn more about how Compose Tools can streamline your app development process, check out the Compose Tools documentation and get started with the Jetpack Compose Tutorial.
Stay tuned
Keep an eye out for the next installment in our "Meet the Android Studio Team" series, where we'll shine the spotlight on another team member and delve into their unique insights.
Find Paris Hsu on LinkedIn, X, and Medium.
30 Jan 2025 9:00pm GMT
29 Jan 2025
Android Developers Blog
Production-ready generative AI on Android with Vertex AI in Firebase
Posted by Thomas Ezan - Sr. Developer Relation Engineer (@lethargicpanda)
Gemini can help you build and launch new user features that will boost engagement and create personalized experiences for your users.
The Vertex AI in Firebase SDK lets you access Google's Gemini Cloud models (like Gemini 1.5 Flash and Gemini 1.5 Pro) and add GenAI capabilities to your Android app. It became generally available last October, which means it's now ready for production, and it is already used by many apps on Google Play.
Here are tips for a successful deployment to production.
Implement App Check to prevent API abuse
When using the Vertex AI in Firebase API, it is crucial to implement robust security measures to prevent unauthorized access and misuse.
Firebase App Check helps protect backend resources (like Vertex AI in Firebase, Cloud Functions for Firebase, or even your own custom backend) from abuse. It does this by attesting that incoming traffic is coming from your authentic app running on an authentic and untampered Android device.
To get started, add Firebase to your Android project and enable the Play Integrity API for your app in the Google Play console. Back in the Firebase console, go to the App Check section of your Firebase project to register your app by providing its SHA-256 fingerprint.
Then, update your Android project's Gradle dependencies with the App Check library for Android:
dependencies {
    // BoM for the Firebase platform
    implementation(platform("com.google.firebase:firebase-bom:33.7.0"))

    // Dependency for App Check
    implementation("com.google.firebase:firebase-appcheck-playintegrity")
}
Finally, in your Kotlin code, initialize App Check before using any other Firebase SDK:
Firebase.initialize(context)
Firebase.appCheck.installAppCheckProviderFactory(
    PlayIntegrityAppCheckProviderFactory.getInstance(),
)
To enhance the security of your generative AI feature, you should implement and enforce App Check before releasing your app to production. Additionally, if your app utilizes other Firebase services like Firebase Authentication, Firestore, or Cloud Functions, App Check provides an extra layer of protection for those resources as well.
Once App Check is enforced, you'll be able to monitor your app's requests in the Firebase console.
You can learn more about App Check on Android in the Firebase documentation.
Use Remote Config for server-controlled configuration
The generative AI landscape evolves quickly. Every few months, new Gemini model iterations become available and some models are removed. See the Vertex AI in Firebase Gemini models page for details.
Because of this, instead of hardcoding the model name in your app, we recommend using a server-controlled variable using Firebase Remote Config. This allows you to dynamically update the model your app uses without having to deploy a new version of your app or require your users to pick up a new version.
You define parameters that you want to control (like model name) using the Firebase console. Then, you add these parameters into your app, along with default "fallback" values for each parameter. Back in the Firebase console, you can change the value of these parameters at any time. Your app will automatically fetch the new value.
Here's how to implement Remote Config in your app:
// Initialize the remote configuration by defining the refresh time
val remoteConfig: FirebaseRemoteConfig = Firebase.remoteConfig
val configSettings = remoteConfigSettings {
    minimumFetchIntervalInSeconds = 3600
}
remoteConfig.setConfigSettingsAsync(configSettings)

// Set default values defined in your app resources
remoteConfig.setDefaultsAsync(R.xml.remote_config_defaults)

// Load the model name
val modelName = remoteConfig.getString("model_name")
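To tie the pieces together, here is a minimal sketch, assuming the Vertex AI in Firebase Kotlin SDK's Firebase.vertexAI.generativeModel entry point, that fetches the server-controlled parameter before creating the model (the "model_name" key and the callback shape are illustrative):

import com.google.firebase.Firebase
import com.google.firebase.remoteconfig.remoteConfig
import com.google.firebase.vertexai.GenerativeModel
import com.google.firebase.vertexai.vertexAI

// Fetch the latest server-controlled value, then create the model with it.
fun createModelFromRemoteConfig(onReady: (GenerativeModel) -> Unit) {
    val remoteConfig = Firebase.remoteConfig
    remoteConfig.fetchAndActivate().addOnCompleteListener {
        // Falls back to the default from remote_config_defaults if the fetch fails.
        val modelName = remoteConfig.getString("model_name")
        onReady(Firebase.vertexAI.generativeModel(modelName))
    }
}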
Read more about using Remote Config with Vertex AI in Firebase.
Gather user feedback to evaluate impact
As you roll out your AI-enabled feature to production, it's critical to build feedback mechanisms into your product and allow users to easily signal whether the AI output was helpful, accurate, or relevant. For example, you can incorporate interactive elements such as thumb-up and thumb-down buttons and detailed feedback forms within the user interface. The Material Icons in Compose package provides ready-to-use icons to help you implement this.
You can easily track user interaction with these elements as custom analytics events by using the Google Analytics logEvent() function:
Row {
    Button(
        onClick = {
            firebaseAnalytics.logEvent("model_response_feedback") {
                param("feedback", "thumb_up")
            }
        }
    ) {
        Icon(Icons.Default.ThumbUp, contentDescription = "Thumb up")
    }
    Button(
        onClick = {
            firebaseAnalytics.logEvent("model_response_feedback") {
                param("feedback", "thumb_down")
            }
        }
    ) {
        Icon(Icons.Default.ThumbDown, contentDescription = "Thumb down")
    }
}
Learn more about Google Analytics and its event logging capabilities in the Firebase documentation.
User privacy and responsible AI
When you use Vertex AI in Firebase for inference, you have the guarantee that the data sent to Google won't be used by Google to train AI models (see Vertex AI documentation for details).
It's also important to be transparent with your users when they're engaging with generative AI technology. You should highlight the possibility of unexpected model behavior.
Finally, users should have control within your app over how their activity related to AI model interactions is stored and deleted.
You can learn more about how Google is approaching Generative AI responsibly in the Google Cloud documentation.
29 Jan 2025 5:00pm GMT
28 Jan 2025
Android Developers Blog
Helping users find trusted apps on Google Play
Posted by JJ Zou - Product Manager, and Scott Lin - Product Manager
At Google Play, we're committed to empowering you with the tools and resources you need to build successful and secure apps that users can rely on. That's why we're introducing a new way to recognize VPN apps that go above and beyond to protect their users: a "Verified" badge for consumer-facing VPN apps.
This new badge is designed to highlight apps that prioritize user privacy and safety, help users make more informed choices about the VPN apps they use, and build confidence in the apps they ultimately download. This badge complements existing features such as the Google Play Store banner for VPNs and Data Safety section declaration in the Play Store.
Build user trust with more transparency
Earning the VPN badge isn't just about checking a box: it's proof that your VPN app invests in app safety. This badge signifies that your app has gone above and beyond, adhering to the Play safety and security guidelines and successfully completing a Mobile Application Security Assessment (MASA) Level 2 validation.
The VPN badge helps your app stand out in a crowded marketplace. Once awarded, the badge is prominently displayed on your app's details page and in search results. Additionally, we have built new surfaces to showcase verified VPN applications.
Demonstrating commitment to security and safety
We're excited to share insights from some of our partners who have already earned the VPN badge and are leading the way in building a safe and trusted Google Play ecosystem. Learn how partners like NordVPN, hide.me, and Aloha are using the badge and implementing best practices for user security:
NordVPN
"We're excited that the new 'Verified' badge will help users easily identify VPNs that meet high standards for security and privacy. In a market where trust is key, this badge not only provides reassurance to customers, but also highlights the integrity of developers committed to delivering secure and reliable products."
hide.me
"Privacy and user safety are fundamental to our VPN's architecture. The MASA program has been valuable in validating our security practices and maintaining high standards. This accreditation provides independent verification of our commitment to protecting user privacy."
Aloha Browser
"The certification process is well-organized and accessible to any company. If your product is developed with security as a core focus, passing the required audits should not pose any difficulty. We regularly conduct third-party audits and have been active participants in the MASA program since its inception. Additionally, it fosters discipline in your development practices, knowing that regular re-certification is required. Ultimately, it's the end user who benefits the most-a secure and satisfied user is the ultimate goal for every app developer."
Getting your App Badge-Ready
To take advantage of this opportunity to enhance your app's profile and attract more users, learn more about the specific criteria and start the validation process today.
To be considered for the "Verified" badge, your VPN app needs to:
- Complete a Mobile Application Security Assessment (MASA) Level 2 validation
- Have an Organization developer account type
- Meet target API level requirements for Google Play apps
- Have at least 10,000 installs and 250 reviews
- Be published on Google Play for at least 90 days
- Submit a Data Safety section declaration, opting into:
- Independent security review, under 'Additional badges'
- Encryption in transit
Note: This list is not exhaustive and doesn't fully represent all the criteria used to display the badge. While other factors contribute to the evaluation, fulfilling these requirements significantly increases your chances of seeing your VPN app "Verified."
Join us in our mission to create a safer and more transparent Google Play ecosystem. We're here to support you with the tools and resources you need to build trusted apps.
28 Jan 2025 6:00pm GMT
24 Jan 2025
Android Developers Blog
Android Studio’s 10 year anniversary
Posted by Tor Norbye - Engineering Director, Jamal Eason - Director of Product Management, and Xavier Ducrohet - Tech Lead | Android Studio
Android Studio provides you with an integrated development environment (IDE) to develop, test, debug, and package Android apps that can reach billions of users across a diverse set of Android devices. Last month we reached a big milestone for the product: 10 years since the Android Studio 1.0 release reached the stable channel. You can hear a bit more about its history in the most recent episode of Android Developers Backstage, or watch some of the team's favorite moments: 🎉
When we set out to develop Android Studio we started with these three principles:
First, we wanted to build and release a complete IDE, not just a plugin. Before Android Studio, users had to go download a JDK, then download Eclipse, then configure it with an update center to point to Android, install the Eclipse plugin for Android, and then configure that plugin to point to an Android SDK install. Not only did we want everything to work out-of-the-box, but we also wanted to be able to configure and improve everything: from having an integrated dependency management system to offering code inspections that were relevant to Android app developers to having a single place to report bugs.
Second, we wanted to build it on top of an actively maintained, open-sourced, and best-of-breed Java programming language IDE. Not too long before releasing Android Studio, we had all used IntelliJ and felt it was superior from a code editing perspective.
And third, we wanted to not only provide a build system that was better suited for Android app development, but to also enable this build system to work consistently both from the command line and from inside the IDE. This was important because in the previous toolchain, we found that there were discrepancies in behavior and capability between the in-IDE builds with Eclipse and CI builds with Ant.
This led to the release of Android Studio, including these highlights:
Here are some nostalgic screenshots from that first version of Android Studio:
Android Studio has come a long way since those early days, but our mission of empowering Android developers with excellent tools continues to be our focus.
Let's hear from some team members across Android, JetBrains, and Gradle as they reflect on this milestone and how far the ecosystem has come since then.
Android Studio team
"Inside the Android team, engineers who didn't work on apps had the choice between using Eclipse and using IntelliJ, and most of them chose IntelliJ. We knew that it was the gold standard for Java development (and still is, all these years later.) So we asked ourselves: if this is what developers prefer when given a choice, wouldn't this be for our users as well?
And the warm reception when we unveiled the alpha at I/O in 2013 made it clear that it was the right choice."
- Tor Norbye, Engineering Director of Android Studio at Google
"We had a vision of creating a truly Integrated Development Environment for Android app development instead of a collection of related tools. In our previous working model, we had contributions of Android tools from a range of frameworks and UX flows that did not 100% work well end-to-end. The move to the open-sourced JetBrains IntelliJ platform enabled the Google team to tie tools together in a thoughtful way with Android Studio, plus it allowed others to contribute in a more seamless way. Lastly, looking back at the last 10 years, I'm proud of the partnership with Jetbrains and Gradle, plus the community of contributors to bring the best suite of tools to Android app developers."
- Jamal Eason, Director of Product Management of Android Studio at Google
JetBrains
"Google choosing IntelliJ as the platform to build Android Studio was a very exciting moment for us at JetBrains. It allowed us to strengthen and build on the platform even further, and paved the way for further collaboration in other projects such as Kotlin."
- Hadi Hariri, VP of Program Management at JetBrains
Gradle
"Android Studio's 10th anniversary marks a decade of incredible progress for Android developers. We are proud that Gradle Build Tool has continued to be a foundational part of the Android toolchain, enabling millions of Android developers to build their apps faster, more elegantly, and at scale."
- Hans Dockter, creator of Gradle Build Tool and CEO/Founder of Gradle Inc.
"Our long-standing strategic partnership with Google and our mutual commitment to improving the developer experience continues to impact millions of developers. We look forward to continuing that journey for many years to come."
- Piotr Jagielski, VP of Engineering, Gradle Build Tool
Last but not least, we want to thank you for your feedback and support over the last decade. Android Studio wouldn't be where it is today without the active community of developers who are using it to build Android apps for their communities and the world and providing input on how we can make it better each day.
As we head into this new year, we'll be bringing Gemini into more aspects of Android Studio to help you across the development lifecycle to build quality apps faster. We'll strive to make it easier and more seamless to build, test, and deploy your apps with Jetpack Compose across the range of form factors. We are proud of what we launch, but we always have room to improve in the evolving mobile ecosystem. Therefore, quality and stability of the IDE is our top priority so that you can be as productive as possible.
We look forward to continuing to empower you with great tools and improvements as we take Android Studio forward into the next decade. 🚀 We also welcome you to be a part of our developer community on LinkedIn, Medium, YouTube, or X.
24 Jan 2025 6:00pm GMT
23 Jan 2025
Android Developers Blog
The First Beta of Android 16
Posted by Matthew McCullough - VP of Product Management, Android Developer
The first beta of Android 16 is now available, which means it's time to open the experience up to both developers and early adopters. You can now enroll any supported Pixel device here to get this and future Android Beta updates over-the-air.
This build includes support for the future of app adaptivity, Live Updates, the Advanced Professional Video format, and more. We're looking forward to hearing what you think, and thank you in advance for your continued help in making Android a platform that works for everyone.
Android adaptive apps
Users expect apps to work seamlessly on all their devices, regardless of display size and form factor. To that end, Android 16 is phasing out the ability for apps to restrict screen orientation and resizability on large screens. This is similar to features OEMs have added over the last several years to large screen devices to allow users to run apps at any window size and aspect ratio.
On screens larger than 600dp wide, apps that target API level 36 will have app windows that resize; you should check your apps to ensure your existing UIs scale seamlessly, working well across portrait and landscape aspect ratios. We're providing frameworks, tooling, and libraries to help.
Key changes:
- Manifest attributes and APIs that restrict orientation and resizing will be ignored for apps - but not games - on large screens.
Timeline:
- Android 16 (2025): Changes apply to large screens (600dp in width) for apps targeting API level 36 (developers can opt-out)
- Android release in 2026: Changes apply to large screens for apps targeting API level 37 (no opt-out)
It's a great time to make your app adaptive! You can test these overrides without targeting API level 36 by using the app compatibility framework and enabling the UNIVERSAL_RESIZABLE_BY_DEFAULT flag. Learn more about changes to orientation and resizability APIs in Android 16.
Live Updates
Live Updates are a new class of notifications that help users monitor and quickly access important ongoing activities.
The new ProgressStyle notification template provides a consistent user experience for Live Updates, helping you build for these progress-centric user journeys: rideshare, delivery, and navigation. It includes support for custom icons for the start, end, and current progress tracking, segments and points, user journey states, milestones, and more.
ProgressStyle notifications are suggested only for ride sharing, food delivery, and navigation use cases.
@Override
protected Notification getNotification() {
    return new Notification.Builder(mContext, CHANNEL_ID)
        .setSmallIcon(R.drawable.ic_app_icon)
        .setContentTitle("Ride requested")
        .setContentText("Looking for nearby drivers")
        .setStyle(
            new Notification.ProgressStyle()
                .addProgressSegment(
                    new Notification.ProgressStyle.Segment(100)
                        .setColor(COLOR_ORANGE))
                .setProgressIndeterminate(true))
        .build();
}
Camera and media updates
Android 16 advances support for the playback, creation, and editing of high-quality media, a critical use case for social and productivity apps.
Advanced Professional Video
Android 16 introduces support for the Advanced Professional Video (APV) codec which is designed to be used for professional level high quality video recording and post production.
The APV codec standard has the following features:
- Perceptually lossless video quality (close to raw video quality)
- Low complexity and high throughput intra-frame-only coding (without pixel domain prediction) to better support editing workflows
- Support for high bit-rate range up to a few Gbps for 2K, 4K and 8K resolution content, enabled by a lightweight entropy coding scheme
- Frame tiling for immersive content and for enabling parallel encoding and decoding
- Support for various chroma sampling formats and bit-depths
- Support for multiple decoding and re-encoding without severe visual quality degradation
- Support for multi-view video and auxiliary video such as depth, alpha, and preview
- Support for HDR10/10+ and user-defined metadata
A reference implementation of APV is provided through the OpenAPV project. Android 16 will implement support for the APV 422-10 Profile that provides YUV 422 color sampling along with 10-bit encoding and for target bitrates of up to 2Gbps.
Camera night mode scene detection
To help your app know when to switch to and from a night mode camera session, Android 16 adds EXTENSION_NIGHT_MODE_INDICATOR. If supported, it's available in the CaptureResult within Camera2.
This is the API we briefly mentioned as coming soon in the "How Instagram enabled users to take stunning low light photos" blogpost. That post is a practical guide on how to implement night mode together with a case study that links higher-quality, in-app, night mode photos with an increase in the number of photos shared from the in-app camera.
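As a rough illustration only, the sketch below reads the indicator from capture results in a standard Camera2 capture callback; the text above only tells us the key is exposed through CaptureResult, so the exact key location and the meaning of its values should be checked against the final Android 16 API reference:

import android.hardware.camera2.CameraCaptureSession
import android.hardware.camera2.CaptureRequest
import android.hardware.camera2.CaptureResult
import android.hardware.camera2.TotalCaptureResult
import android.util.Log

// Sketch: inspect the night mode indicator on each completed capture and
// decide whether to surface a "switch to night mode" hint to the user.
val captureCallback = object : CameraCaptureSession.CaptureCallback() {
    override fun onCaptureCompleted(
        session: CameraCaptureSession,
        request: CaptureRequest,
        result: TotalCaptureResult
    ) {
        // Null on devices that don't report the indicator.
        val indicator = result.get(CaptureResult.EXTENSION_NIGHT_MODE_INDICATOR)
        if (indicator != null) {
            Log.d("NightMode", "Night mode indicator: $indicator")
            // e.g. prompt the user to switch to a night mode extension session.
        }
    }
}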
Vertical Text
Android 16 adds low-level support for rendering and measuring text vertically to provide foundational vertical writing support for library developers. This is particularly useful for languages like Japanese that commonly use vertical writing systems. A new flag, VERTICAL_TEXT_FLAG, has been added to the Paint class. When this flag is set using Paint.setFlags, Paint's text measurement APIs will report vertical advances instead of horizontal advances, and Canvas will draw text vertically.
Note: Current high level text APIs, such as Text in Jetpack Compose, TextView, Layout classes and their subclasses do not support vertical writing systems, and do not support using the VERTICAL_TEXT_FLAG.
val text = "「春は、曙。」" Box(Modifier .padding(innerPadding) .background(Color.White) .fillMaxSize() .drawWithContent { drawIntoCanvas { canvas -> val paint = Paint().apply { textSize = 64.sp.toPx() } // Draw text vertically paint.flags = paint.flags or VERTICAL_TEXT_FLAG val height = paint.measureText(text) canvas.nativeCanvas.drawText( text, 0, text.length, size.width / 2, (size.height - height) / 2, paint ) } }) {}
Accessibility
Android 16 adds new accessibility APIs to help you bring your app to every user.
Supplemental descriptions
When an accessibility service describes a ViewGroup, it combines content labels from its child views. If you provide a contentDescription for the ViewGroup, accessibility services assume you are also overriding the content of non-focusable child views. This can be problematic if you want to label things like a drop down (e.g. "Font Family") while preserving the current selection for accessibility (e.g. "Roboto"). Android 16 adds setSupplementalDescription so you can provide text that provides information about a ViewGroup without overriding information from its children.
Required form fields
Android 16 adds setFieldRequired to AccessibilityNodeInfo so apps can tell an accessibility service that input to a form field is required. This is an important scenario for users filling out many types of forms, even things as simple as a required terms and conditions checkbox, helping users to consistently identify and quickly navigate between required fields.
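Here is a minimal sketch of how these two additions could be wired up through an accessibility delegate. Only setFieldRequired is explicitly documented above as living on AccessibilityNodeInfo; placing setSupplementalDescription there as well is an assumption, and the view names are placeholders:

import android.view.View
import android.view.ViewGroup
import android.view.accessibility.AccessibilityNodeInfo

fun configureAccessibility(fontPicker: ViewGroup, termsCheckbox: View) {
    fontPicker.accessibilityDelegate = object : View.AccessibilityDelegate() {
        override fun onInitializeAccessibilityNodeInfo(host: View, info: AccessibilityNodeInfo) {
            super.onInitializeAccessibilityNodeInfo(host, info)
            // Adds context ("Font Family") without overriding the children's content,
            // so the current selection (e.g. "Roboto") is still announced.
            info.setSupplementalDescription("Font Family")
        }
    }
    termsCheckbox.accessibilityDelegate = object : View.AccessibilityDelegate() {
        override fun onInitializeAccessibilityNodeInfo(host: View, info: AccessibilityNodeInfo) {
            super.onInitializeAccessibilityNodeInfo(host, info)
            // Marks the field as required so services can announce it and navigate to it.
            info.setFieldRequired(true)
        }
    }
}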
Generic ranging APIs
Android 16 includes the new RangingManager, which provides ways to determine the distance and angle on supported hardware between the local device and a remote device. RangingManager supports the usage of a variety of ranging technologies such as BLE channel sounding, BLE RSSI-based ranging, Ultra-Wideband, and WiFi round trip time.
Behavior changes
With every Android release, we seek to make the platform more efficient and robust, balancing the needs of your apps against things like system performance and battery life. This can result in behavior changes that impact compatibility.
ART internal changes
Code that leverages internal structures of the Android Runtime (ART) may not work correctly on devices running Android 16 along with earlier Android versions that update the ART module through Google Play system updates. These structures are changing in ways that help improve the Android Runtime's (ART's) performance.
Impacted apps will need to be updated. Relying on internal structures can always lead to compatibility problems, but it's particularly important to avoid relying on code (or libraries containing code) that leverages internal ART structures, since ART changes aren't tied to the platform version the device is running on; they go out to over a billion devices through Google Play system updates.
For more information, see the Android 16 changes affecting all apps and the restrictions on non-SDK interfaces.
Migration or opt-out required for predictive back
For apps targeting Android 16 or higher and running on an Android 16 or higher device, the predictive back system animations (back-to-home, cross-task, and cross-activity) are enabled by default. Additionally, the deprecated onBackPressed is not called and KeyEvent.KEYCODE_BACK is no longer dispatched.
If your app intercepts the back event and you haven't migrated to predictive back yet, update your app to use supported back navigation APIs or temporarily opt out by setting the android:enableOnBackInvokedCallback attribute to false in the <application> or <activity> tag of your app's AndroidManifest.xml file.
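For apps still overriding onBackPressed, one supported migration path is the AndroidX OnBackPressedDispatcher, sketched below under the assumption of a standard ComponentActivity; the confirmation logic is a placeholder for your app's own navigation:

import android.os.Bundle
import androidx.activity.ComponentActivity
import androidx.activity.OnBackPressedCallback

class CheckoutActivity : ComponentActivity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        // Registered callbacks work with predictive back; onBackPressed overrides do not.
        onBackPressedDispatcher.addCallback(this, object : OnBackPressedCallback(true) {
            override fun handleOnBackPressed() {
                // Placeholder: finish in-app work (e.g. confirm discarding a draft),
                // then disable this callback and let the system handle back.
                isEnabled = false
                onBackPressedDispatcher.onBackPressed()
            }
        })
    }
}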
Predictive back support for 3-button navigation
Android 16 brings predictive back support to 3-button navigation for apps that have properly migrated to predictive back. Long-pressing the back button initiates a predictive back animation, giving users a preview of where the back button takes them.
This behavior applies across all areas of the system that support predictive back animations, including the system animations (back-to-home, cross-task, and cross-activity).
Fixed rate work scheduling optimization
Prior to targeting Android 16, when scheduleAtFixedRate missed a task execution due to being outside a valid process lifecycle, all missed executions would immediately execute when the app returned to a valid lifecycle.
When targeting Android 16, at most one missed execution of scheduleAtFixedRate will be immediately executed when the app returns to a valid lifecycle. This behavior change is expected to improve app performance. Please test the behavior to ensure your application is not impacted. You can also test by using the app compatibility framework and enabling the STPE_SKIP_MULTIPLE_MISSED_PERIODIC_TASKS compat flag.
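For reference, this is the kind of fixed-rate scheduling the change applies to, sketched here with a plain ScheduledExecutorService; the 15-minute period and the work itself are illustrative:

import java.util.concurrent.Executors
import java.util.concurrent.TimeUnit

val scheduler = Executors.newSingleThreadScheduledExecutor()

fun startPeriodicRefresh() {
    // If the app spends several periods outside a valid lifecycle, apps targeting
    // Android 16 see at most one missed run replayed on return, not a burst of runs.
    scheduler.scheduleAtFixedRate(
        { println("Refreshing cached data") }, // illustrative periodic work
        /* initialDelay = */ 0L,
        /* period = */ 15L,
        TimeUnit.MINUTES
    )
}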
Ordered broadcast priority scope no longer global
In Android 16, broadcast delivery order using the android:priority attribute or IntentFilter#setPriority() across different processes will not be guaranteed. Broadcast priorities for ordered broadcasts will only be respected within the same application process rather than across all system processes.
Additionally, broadcast priorities will be automatically confined to the range (SYSTEM_LOW_PRIORITY + 1, SYSTEM_HIGH_PRIORITY - 1).
Your application may be impacted if it does either of the following:
1. Your application has declared multiple processes that have set broadcast receiver priorities for the same intent.
2. Your application process interacts with other processes and has expectations around receiving a broadcast intent in a certain order.
If the processes need to coordinate with each other, they should communicate using other coordination channels.
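As a concrete example of the affected pattern, the sketch below registers a receiver with an explicit priority; on Android 16 that priority only orders delivery relative to other receivers in the same process. The action string is hypothetical:

import android.content.BroadcastReceiver
import android.content.Context
import android.content.IntentFilter

const val ACTION_SYNC_EVENT = "com.example.ACTION_SYNC_EVENT" // hypothetical action

fun registerPrioritizedReceiver(context: Context, receiver: BroadcastReceiver) {
    val filter = IntentFilter(ACTION_SYNC_EVENT).apply {
        priority = 10 // now only meaningful within this app's own process
    }
    context.registerReceiver(receiver, filter, Context.RECEIVER_NOT_EXPORTED)
}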
Gemini Extensions
Samsung just launched new Gemini Extensions on the S25 series, demonstrating new ways Android apps can integrate with the power of Gemini. We're working to make this functionality available on even more form factors.
Two Android API releases in 2025
This preview is for the next major release of Android with a planned launch in Q2 of 2025 and we plan to have another release with new developer APIs in Q4. The Q2 major release will be the only release in 2025 to include planned behavior changes that could affect apps. The Q4 minor release will pick up feature updates, optimizations, and bug fixes; it will not include any app-impacting behavior changes.
We'll continue to have quarterly Android releases. The Q1 and Q3 updates, which will land in-between the Q2 and Q4 API releases, will provide incremental updates to ensure continuous quality. We're putting additional energy into working with our device partners to bring the Q2 release to as many devices as possible.
There's no change to the target API level requirements and the associated dates for apps in Google Play; our plans are for one annual requirement each year, tied to the major API level.
How to get ready
In addition to performing compatibility testing on this next major release, make sure that you're compiling your apps against the new SDK, and use the compatibility framework to enable targetSdkVersion-gated behavior changes as they become available for early testing.
App compatibility
The Android 16 Preview program runs from November 2024 until the final public release in Q2 of 2025. At key development milestones, we'll deliver updates for your development and testing environments. Each update includes SDK tools, system images, emulators, API reference, and API diffs. We'll highlight critical APIs as they are ready to test in the preview program in blogs and on the Android 16 developer website.
We're targeting March of 2025 for our Platform Stability milestone. At this milestone, we'll deliver final SDK/NDK APIs and also final internal APIs and app-facing system behaviors. From that time you'll have several months before the final release to complete your testing. The release timeline details are here.
Get started with Android 16
Now that we've entered the beta phase, you can enroll any supported Pixel device to get this and future Android Beta updates over-the-air. If you don't have a Pixel device, you can use the 64-bit system images with the Android Emulator in Android Studio.
If you are currently on Android 16 Developer Preview 2 or are already in the Android Beta program, you will be offered an over-the-air update to Beta 1.
If you are in Android 25Q1 Beta and would like to take the final stable release of 25Q1 and exit Beta, you need to ignore the over-the-air update to 25Q2 Beta 1 and wait for the release of 25Q1.
We're looking for your feedback so please report issues and submit feature requests on the feedback page. The earlier we get your feedback, the more we can include in our work on the final release.
For the best development experience with Android 16, we recommend that you use the latest preview of Android Studio (Meerkat). Once you're set up, here are some of the things you should do:
- Compile against the new SDK, test in CI environments, and report any issues in our tracker on the feedback page.
- Test your current app for compatibility, learn whether your app is affected by changes in Android 16, and install your app onto a device or emulator running Android 16 and extensively test it.
We'll update the preview/beta system images and SDK regularly throughout the Android 16 release cycle. Once you've installed a beta build, you'll automatically get future updates over-the-air for all later previews and Betas.
For complete information, visit the Android 16 developer site.
23 Jan 2025 7:30pm GMT
The future is adaptive: Changes to orientation and resizability APIs in Android 16
Posted by Maru Ahues Bouza - Director, Product Management
With 3+ billion Android devices in use globally, the Android ecosystem is more vibrant than ever. Android mobile apps run on a diverse range of devices, from phones and foldables to tablets, Chromebooks, cars, and most recently XR. Users buy into an entire device ecosystem and expect their apps to work across all devices. To thrive in this multi-device environment, your apps need to adapt seamlessly to different screen sizes and form factors.
Many Android apps rely on user interface approaches that work in a single orientation and/or restrict resizability. However, users want apps to make full use of their large screens, so Android device manufacturers added well-received features that override these app restrictions.
With this in mind, Android 16 is removing the ability for apps to restrict orientation and resizability at the platform level, and shifting to a consistent model of adaptive apps that seamlessly adjust to different screen sizes and orientations. This change will reduce fragmentation with behavior that better meets user expectations, and improves accessibility by respecting the user's preferred orientation. We're building tools, libraries, and platform APIs to help you do this to provide a consistently excellent user experience across the entire Android ecosystem.
What's changing?
Starting with Android 16, we're phasing out manifest attributes and runtime APIs used to restrict an app's orientation and resizability, enabling better user experiences for many apps across devices.
These changes will initially apply when the app is running on a large screen, where "large screen" means that the smaller dimension of the display is greater than or equal to 600dp. This includes:
- Inner displays of large screen foldables
- Tablets, including desktop windowing
- Desktop environments, including Chromebooks
The following manifest attributes and APIs will be ignored for apps targeting Android 16 (SDK 36) on large screens:
Manifest attributes/API | Ignored values
screenOrientation | portrait, reversePortrait, sensorPortrait, userPortrait, landscape, reverseLandscape, sensorLandscape, userLandscape
setRequestedOrientation() | portrait, reversePortrait, sensorPortrait, userPortrait, landscape, reverseLandscape, sensorLandscape, userLandscape
resizeableActivity | all
minAspectRatio | all
maxAspectRatio | all
There are some exceptions to these changes for controlling orientation, aspect ratio, and resizability:
- As mentioned before, these changes won't apply for screens that are smaller than sw600dp (e.g. most phones, flippables, outer displays on large screen foldables)
- Games will be excluded from these changes, based on the android:appCategory flag
Also, users have control. They can explicitly opt in to using the app's default behavior in the aspect ratio settings.
Get ready for this change, by making your app adaptive
Apps will need to support landscape and portrait layouts for window sizes in the full range of aspect ratios that users can choose to use apps in, as there will no longer be a way to restrict the aspect ratio and orientation to portrait or to landscape.
To test if your app will be impacted by these changes, use the Android 16 Beta 1 developer preview with the Pixel Tablet and Pixel Fold series emulators in Android Studio, and either set targetSdkPreview = "Baklava" or use the app compatibility framework by enabling the UNIVERSAL_RESIZABLE_BY_DEFAULT flag.
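If you go the targetSdkPreview route, the module-level Gradle change is small; here is a minimal sketch assuming the Android Gradle plugin's preview-SDK properties in a Kotlin DSL build file:

// build.gradle.kts (app module)
android {
    compileSdkPreview = "Baklava"

    defaultConfig {
        // Opt this build into Android 16 behavior changes for testing.
        targetSdkPreview = "Baklava"
    }
}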
For existing apps that restrict orientation and aspect ratio, these changes may result in problems like overlapping layouts. To solve these issues and meet user expectations, our vision is that apps are built to be adaptive, to provide an optimal experience whether someone is using the app on a phone, foldable, tablet, Chromebook, XR or in a car.
Resolving common problems
- Avoid stretched UI components: If layouts were designed and built with the assumption of phone screens, then app functionality may break for other aspect ratios. For example, if a layout was built assuming a portrait aspect ratio, then UI elements that fill the max width of the window will appear stretched in landscape-oriented windows. If layouts aren't built to scroll, then users may not be able to click on buttons or other UI elements that are offscreen, resulting in confusing or broken behavior. Add a maximum width to components to avoid stretching, and add scrolling to ensure all content is reachable.
- Ensure camera compatibility in both orientations: Camera viewfinder previews might assume a specific aspect ratio and orientation relative to the camera sensor, resulting in stretching or flipped previews when those assumptions are broken. Ensure viewfinders rotate properly and account for the UI aspect ratio differing from the sensor aspect ratio.
- Preserve state across window size changes: Removing orientation and aspect ratio restrictions also means that the window sizes of apps will change more frequently in response to how the user prefers to use an app, such as by rotating, folding, or resizing an app in multi-window or free-form windowing modes. Orientation changes and resizing will result in Activity recreation by default. To ensure a good user experience, it is critical that app state is preserved through these configuration changes so that users don't lose their place in the app when changing posture or changing windowing modes.
To account for different window sizes and aspect ratios, use window size classes to drive layout behavior in a way that doesn't require device-specific customizations. Apps should also be built with the assumption that window sizes will frequently change. It's not necessary to build duplicate orientation-specific layouts - instead, ensure your existing UIs can re-layout well no matter what the window size is. If you have a landscape- or portrait-specific layout, those layouts will still be used.
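For instance, a minimal sketch using the Material 3 window size class APIs to branch layouts; the two layout composables are placeholders for your own UI:

import androidx.activity.ComponentActivity
import androidx.compose.material3.windowsizeclass.ExperimentalMaterial3WindowSizeClassApi
import androidx.compose.material3.windowsizeclass.WindowWidthSizeClass
import androidx.compose.material3.windowsizeclass.calculateWindowSizeClass
import androidx.compose.runtime.Composable

@OptIn(ExperimentalMaterial3WindowSizeClassApi::class)
@Composable
fun AdaptiveScreen(activity: ComponentActivity) {
    // Recomputed whenever the window is resized, rotated, folded, and so on.
    val sizeClass = calculateWindowSizeClass(activity)
    when (sizeClass.widthSizeClass) {
        WindowWidthSizeClass.Expanded -> ListDetailLayout() // two panes on wide windows
        else -> SinglePaneLayout()                          // single pane elsewhere
    }
}

// Placeholder composables standing in for your real layouts.
@Composable fun ListDetailLayout() { /* ... */ }
@Composable fun SinglePaneLayout() { /* ... */ }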
Optimizing for window sizes by building adaptive
If you're already building adaptive layouts and supporting all orientations, you're set up for success: your app will be prepared for each of the device types and windowing modes your users want to use your app in, and these changes should have minimal impact.
We've also got a range of testing resources to help you guarantee reliability. You can automate testing with tools like the Espresso testing framework and Jetpack Compose testing APIs.
FlipaClip is a great example of why building for multiple form-factors matters: they saw 54% growth in tablet users in the four months after they optimized their app to be adaptive.
Timeline
We understand that the changes are significant for apps that have traditionally only supported portrait orientation. UI issues like buttons going off screen, overlapping content, or screens with camera viewfinders may need adjustments.
To help you plan ahead and make the necessary adjustments, here's the planned timeline outlining when these changes will take effect:
- Android 16 (2025): Changes described above will be the baseline experience for large screen devices (smallest screen width > 600dp) for apps that target API level 36, with the option for developers to opt-out.
- Android release in 2026: Changes described above will be the baseline experience for large screen devices (smallest screen width >600dp) for apps that target API level 37. Developers will not have an option to opt-out.
Target API level | Applicable devices | Developer opt-out allowed
36 (Android 16) | Large screen devices (smallest screen width >600dp) | Yes
37 (Anticipated) | Large screen devices (smallest screen width >600dp) | No
The deadlines for targeting a specific API level are app store specific. For Google Play, the plan is that targeting API 36 will be required in August 2026 and targeting API 37 will be required in August 2027.
Preparing for Android 16
Refer to the Android 16 changes page for all changes impacting apps in Android 16, as well as additional resources for updating your apps if you are impacted. To test your app, download the Android 16 Beta 1 developer preview and update to targetSdkPreview = "Baklava" or use the app compatibility framework to enable specific changes.
We're committed to helping developers embrace this new era of adaptive apps and unlock the full potential of their apps across the diverse Android ecosystem. Check out the do's and don'ts for designing and building across multiple window sizes and form factors, as well as how to test across the variety of devices that your app will be used in.
Stay tuned for more updates and resources as we approach the release of Android 16!
23 Jan 2025 5:00pm GMT
22 Jan 2025
Android Developers Blog
Build kids app experiences for Wear OS
Posted by John Zoeller - Developer Relations Engineer, and Caroline Vander Wilt - Group Product Manager
New Wear OS features enable 'standalone' watches for kids, unlocking new possibilities for Wear OS app developers
In collaboration with Samsung, Wear OS is introducing Galaxy Watch for Kids, a new kids experience enabling kids to explore while staying connected with their families from their smartwatch, no phone necessary. This launch unlocks new opportunities for Wear OS developers to reach younger audiences.
Galaxy Watch for Kids is rolling out to Galaxy Watch7 LTE models, with features including:
- No phone ownership required: This experience enables the watch and its associated apps to operate on a fully standalone basis using LTE and, when available, Wi-Fi connectivity. This includes calling, texting, games, and more.
- Selection of kid-friendly apps: From gaming to health, kids can browse and request installs of Teacher Approved apps and watch faces on Google Play. In addition to approving and blocking apps, parents can also monitor app usage from Google Family Link.
- Stay in touch with parent-managed contacts: Parents can ensure safer communications by limiting text and calling to approved contacts.
- Location sharing: Offers peace of mind with location sharing and geofencing notifications when kids leave or arrive at designated areas.
- School time: Limits watch functionality during scheduled hours of the day, so kids can focus while in school or studying.
Building kids experiences with standalone functionality enables you to reach both standalone and tethered watches for kids. Apps like Math Tango have already created great Wear OS experiences for kids. Check out the video below to learn how they built a rich and engaging Wear OS app.
Our new kids-focused design and content principles and developer guidance are also available today. Check out some of the highlights in the next section.
New principles and guidelines for development
We've created new design principles and guidelines to help developers take advantage of this opportunity to build and improve apps and watch faces for kids.
Design principle: Active and fun
Build engaging healthy experiences for children by including activity-based features.
A great example of this is the Odd Squad Time Unit app from PBS KIDS that encourages children to get up and be physically active. By using the on-device sensors and power-efficient platform APIs, the app is able to provide a fun experience all day and still maintain battery life of the watch from wakeup to bed time.
Note that while experiences should be catered to kids, they must also follow the Wear OS quality requirements related to the visual experience of your app, especially when crafting touch targets and font sizes.
Content principle: Thoughtfully crafted
Consider adjusting your content to make it not only appropriate, but also consumable and intuitive for younger kids (including those as young as 6). This includes both audio and visual app components.
Tinkercast's Two Whats?! And a Wow! app uses age-appropriate vocabulary and fun characters to aid in their teaching. It's a great example of how a developer should account for reading comprehension.
Development guidelines
New Wear OS kids apps must adhere to the Wear OS app quality guidelines, the guidelines for standalone apps, and the new Kids development guide.
Minimize impact on device battery
Minimize events that affect battery life over the course of one session. Kids use watches that provide important safety features for their parents or guardians, which depend on the device having enough battery life. Below are best practices for reducing battery impact.
✅ DO design for offline use cases so that kids can play without incurring network-related battery costs
✅ DO minimize tasks that require an internet or GPS connection
✅ DO use power efficient APIs for all day activity tracking as well as tracking exercises (see the sketch after this list)
🚫 DO NOT use direct sensor tracking as this will significantly reduce the battery life
🚫 DO NOT include long-running animations
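Here is the sketch referenced above: a rough illustration of the power-efficient route, assuming the androidx.health.services passive monitoring client is available on the device, so all-day step counting is delegated to the platform rather than polling sensors directly:

import android.content.Context
import androidx.health.services.client.HealthServices
import androidx.health.services.client.PassiveListenerCallback
import androidx.health.services.client.data.DataPointContainer
import androidx.health.services.client.data.DataType
import androidx.health.services.client.data.PassiveListenerConfig

fun startPassiveStepTracking(context: Context) {
    val passiveClient = HealthServices.getClient(context).passiveMonitoringClient
    val config = PassiveListenerConfig.builder()
        .setDataTypes(setOf(DataType.STEPS_DAILY))
        .build()
    passiveClient.setPassiveListenerCallback(config, object : PassiveListenerCallback {
        override fun onNewDataPointsReceived(dataPoints: DataPointContainer) {
            // Updates arrive in batches from the platform - no continuous sensor wakeups.
        }
    })
}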
Choose a development environment
To develop kid-friendly apps and games you can use Compose for Wear OS, our recommended approach for building UI for Wear OS, as well as Unity for Android.
We recommend Unity for developing games on Wear OS if you're familiar and comfortable with its workflows and capabilities. However, for games with only a few animations, Compose Animation should be sufficient and is better supported within the Android environment.
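As a small illustration of what Compose Animation covers without a game engine, the sketch below pulses an image using an infinite transition; R.drawable.mascot is a placeholder resource in your own app:

import androidx.compose.animation.core.RepeatMode
import androidx.compose.animation.core.animateFloat
import androidx.compose.animation.core.infiniteRepeatable
import androidx.compose.animation.core.rememberInfiniteTransition
import androidx.compose.animation.core.tween
import androidx.compose.foundation.Image
import androidx.compose.runtime.Composable
import androidx.compose.runtime.getValue
import androidx.compose.ui.Modifier
import androidx.compose.ui.draw.scale
import androidx.compose.ui.res.painterResource

@Composable
fun BouncingMascot() {
    val transition = rememberInfiniteTransition(label = "bounce")
    val scale by transition.animateFloat(
        initialValue = 0.9f,
        targetValue = 1.1f,
        animationSpec = infiniteRepeatable(tween(600), RepeatMode.Reverse),
        label = "scale"
    )
    Image(
        painter = painterResource(R.drawable.mascot), // placeholder drawable
        contentDescription = "Bouncing mascot",
        modifier = Modifier.scale(scale)
    )
}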
Be sure to consider that some Wear OS quality requirements may require custom Unity implementations, such as support for Rotary Input.
Originator's MathTango showcases the flexibility and richness of developing with Unity:
Creating Watch Faces
Developing watch faces for kids requires the use of Watch Face Format. Watch faces should adhere to our content and design principles mentioned above, as well as our quality standards, including our ambient mode requirement.
The following examples demonstrate our Content Principle: Appealing. The content is relevant, engaging, and fun for kids, sparking their interest and imagination.
The Crayola Pets Watch Face comes with a great variety of customization options, and demonstrates an informative and pleasant watch face:
The Marvel Watch Faces (Captain America shown) provide a fun and useful step tracking feature:
Kids experience publishing requirements
Developers looking to get started on a new kids experience will need to keep a few things in mind when publishing on the Play Store.
- Age and Content Rating: Kids apps should be configured in the Play Store to meet the age and content requirements appropriate to their functionality
- Standalone Functionality: Apps must have 'standalone' defined in their manifest and meet all associated requirements, which will apply when the watch is set up with a child account
- Using Watch Face Format: Only watch faces which are built with Watch Face Format will be made available for kids
Expand your reach with Wear OS
Get ready to reach a new generation of Wear OS users! We've created all-new guidelines to help you build engaging experiences for kids. Here's a quick recap:
- Continue to use the baseline set of Wear OS development resources, including Get started with Wear OS and Wear OS app quality
- Focus on enrichment and age-tailoring
- Make sure it works with Standalone, and keep an eye on the battery
With the Wear for Kids experience, developers can reach an entirely new audience of users and be part of the next generation of learning and enrichment on Wear OS.
Check out all of the new experiences on the Play Store!
22 Jan 2025 4:00pm GMT
10 Jan 2025
Android Developers Blog
Apps adopt Transformer to support more reliable and performant media editing use cases
Posted by Caren Chang - Developer Relations Engineer
The Jetpack Media3 library enables Android apps to build high quality media apps. As part of the Media3 library, the Transformer module aims to provide easy to use, reliable, and performant APIs for transcoding and editing media.
For example, apps can use Transformer to apply editing operations such as trimming a long media file or applying effects to video tracks. Transformer can also be used to convert media files from one format to another, such as adjusting the resolution or encoding of the media file.
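As a minimal sketch, assuming the current Media3 Transformer API surface, trimming a clip to its first ten seconds and exporting it might look like this (the URI, output path, and clip duration are illustrative):

import android.content.Context
import androidx.media3.common.MediaItem
import androidx.media3.transformer.Composition
import androidx.media3.transformer.EditedMediaItem
import androidx.media3.transformer.ExportException
import androidx.media3.transformer.ExportResult
import androidx.media3.transformer.Transformer

fun trimClip(context: Context, inputUri: String, outputPath: String) {
    // Keep only the first 10 seconds of the source media.
    val mediaItem = MediaItem.Builder()
        .setUri(inputUri)
        .setClippingConfiguration(
            MediaItem.ClippingConfiguration.Builder()
                .setStartPositionMs(0)
                .setEndPositionMs(10_000)
                .build()
        )
        .build()
    val editedItem = EditedMediaItem.Builder(mediaItem).build()

    val transformer = Transformer.Builder(context)
        .addListener(object : Transformer.Listener {
            override fun onCompleted(composition: Composition, exportResult: ExportResult) {
                // Export finished; outputPath now holds the trimmed file.
            }
            override fun onError(
                composition: Composition,
                exportResult: ExportResult,
                exportException: ExportException
            ) {
                // Handle failure (e.g. unsupported format on this device).
            }
        })
        .build()
    transformer.start(editedItem, outputPath)
}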
Developing Transformer APIs
As part of the process to introduce new APIs, our engineering team works closely with Google apps such as Google Photos to test and experiment with the new APIs. Experimental flags are first introduced to enable performance improvements. Once the results are successful and conclusive, these experimental features are then built into the default API implementations or promoted to public APIs for all apps to use. This approach allows Transformer APIs to be tested on a wide variety of devices.
Transformer Adoption in apps
Apps that have been using Transformer in production observed in-app performance improvements, less code to maintain, and better developer experience. Let's take a closer look at how Transformer has helped apps for their media-editing use cases.
One of users' favorite features in Google Photos is memory sharing, where snippets of your life story that are curated and presented as Google Photos memories can now be shared as videos to social media and chat apps. However, the process of combining media items to create a video on device is resource intensive and subject to significant latency, especially on low-end devices. To reduce this latency and enable the feature on a wider range of devices, Photos adopted Transformer in their media creation pipeline. Along with other improvements made, the team found that Transformer played a part in reducing the median user latency for creating memory videos by 41% on high-end devices and 27% on mid-range devices.
The Photos app also enables users to perform media edits such as trimming or rotating a video. By adopting Transformer APIs for rotating videos, median save latency was reduced by 79% for applicable videos. The app also adopted Transformer's API for optimizing video trimming, and observed video save latency decrease by 64%.
1 Second Everyday is a personal video journal that helps you create captivating montages and timelapses. One of the app's main user journeys is sequentially combining short videos to create a meaningful movie. After adopting Transformer for this use case, the app observed that video encoding performance was up to 5x faster, allowing them to explore enabling 4k and HDR support. The Transformer adoption also helped decrease relevant code by 30%, making it easier for the developers to maintain the code base.
BandLab is the next-generation music creation platform used by millions around the world to make and share their music. The app originally used the MediaCodec APIs for its video creation use cases, but found that the low-level implementation resulted in native crashes that were difficult to debug. After researching Transformer further, the team decided to migrate from MediaCodec to Transformer. Overall, the migration took the team only 12 working days, and it resulted in a simpler codebase and a more maintainable pipeline for their media creation use cases. In addition, the app observed that the native crashes it had previously seen no longer occurred.
What's next for Transformer?
We're excited to see Transformer's adoption in the developer community, and will continue adding new features to support more media-editing use cases for the Android ecosystem including:
- Better support for previewing media edits
- Improving the performance and developer experience for video frame extraction
- Easier integration with AI effects
- and much more
Keep an eye on what we're working on in the Media3 GitHub, and file feature requests to help shape the future of Transformer!
10 Jan 2025 5:00pm GMT
09 Jan 2025
Android Developers Blog
Android Studio Ladybug Feature Drop is Stable!
Posted by Steven Jenkins - Product Manager, Android Studio
Today, we are thrilled to announce the stable release of Android Studio Ladybug 🐞 Feature Drop (2024.2.2)!
Accelerate your productivity with Gemini in Android Studio, Animation Preview support for Wear Tiles, App Links Assistant and much more. All of these new features are designed to help you build high-quality Android apps faster.
Read on to learn more about all the updates, quality improvements, and new features across your key workflows in Android Studio Ladybug Feature Drop, and download the latest stable version today to try them out!
Gemini in Android Studio
Gemini Code Transforms
Gemini Code Transforms can help you modify, optimize, or add code to your app with AI assistance. Simply right-click in your code editor and select "Gemini > Generate code" or highlight code and select "Gemini > Transform selected code." You can also use the keyboard shortcut Ctrl+\ (⌘+\ on macOS) to bring up the Gemini prompt. Describe the changes you want to make to your code, and Gemini will suggest a code diff, allowing you to easily review and accept only the suggestions you want.
With Gemini Code Transforms, you can simplify complex code, perform specific code transformations, or even generate new functions. You can also refine the suggested code to iterate on the code suggestions with Gemini. It's an AI coding assistant right in your editor, helping you write better code more efficiently.
Rename
Gemini in Android Studio enhances your workflow with intelligent assistance for common tasks. When renaming a single variable, class, or method from the code editor, the "Refactor > Rename" action uses Gemini to suggest contextually appropriate names, making it smoother and more efficient to refactor names as you're coding in the editor.
Rethink
For larger renaming refactors, Gemini can "Rethink variable names" across your whole file. This feature analyzes your code and suggests more intuitive and descriptive names for variables and methods, improving readability and maintainability.
Commit Message
Gemini now assists with commit messages. When committing changes to version control, it analyzes your code modifications and suggests a detailed commit message.
Generate Documentation
Gemini in Android Studio makes documenting your code easier than ever. To generate clear and concise documentation, select a code snippet, right-click in the editor and choose "Gemini > Document Function" (or "Document Class" or "Document Property", depending on the context). Gemini will generate a draft that you can then refine and perfect before accepting the changes. This streamlined process helps you create informative documentation quickly and efficiently.
Debug
Animation Preview support for Wear OS Tiles
Animation Preview support for Wear OS Tiles helps you visualize and debug tile animations with ease. It provides a real-time view of your animations, allowing you to preview them, control playback with options like play, pause, and speed adjustment, and inspect key properties such as initial/end states and animation curves. You can even dynamically modify animation code and instantly observe the results within the inspector, streamlining the debugging and refinement process.
Wear Health Services
The Wear Health Services feature in Android Studio simplifies the process of testing health and fitness apps by enabling Wear Health Services within the emulator. You can now easily customize various parameters for a given exercise such as heart rate, distance, and speed without needing a physical device or performing the activity itself. This streamlines the development and testing workflow, allowing for faster iteration and more efficient debugging of health-related features.
Optimize
App Links Assistant
App Links Assistant simplifies the process of implementing app links by generating the valid JSON syntax needed to resolve broken deep links for your app. You can review the JSON file and then upload it to your website, resolving issues quickly. This eliminates the manual creation of the JSON file, saving you time and effort. The tool also allows you to compare existing JSON files with newly generated ones to easily identify any discrepancies.
Google Play SDK Insights Integration
Android Studio now provides enhanced lint warnings for public SDKs from the Google Play SDK Index and the Google Play SDK Console, helping you identify and address potential issues. These warnings alert you if an SDK is outdated, violates Google Play policies, or has known security vulnerabilities. Furthermore, Android Studio provides helpful quick fixes and recommended version ranges whenever possible, making it easier to update your dependencies and keeping your app more secure and compliant.
Quality improvements
Beyond new features, we also continued to improve the overall quality and stability of Android Studio. In fact, the Android Studio team addressed over 770 bugs during the Ladybug Feature Drop development cycle.
IntelliJ platform update
Android Studio Ladybug Feature Drop (2024.2.2) includes the IntelliJ 2024.2 platform release, which has many new features such as more intuitive full line code completion suggestions, a preview in the Search Everywhere dialog and improved log management for the Java** and Kotlin programming languages.
See the full IntelliJ 2024.2 release notes.
Summary
To recap, Android Studio Ladybug Feature Drop includes the following enhancements and features:
Gemini in Android Studio
- Gemini Code Transforms
- Rename
- Rethink
- Commit Message
- Generate Documentation
Debug
- Animation Preview support for Wear OS Tiles
- Wear Health Services
Optimize
- App Links Assistant
- Google Play SDK Insights Integration
Quality Improvements
- 770+ bugs addressed
IntelliJ Platform Update
- More intuitive full line code completion suggestions
- Preview in the Search Everywhere dialog
- Improved log management for Java and Kotlin programming languages
Getting Started
Ready for next-level Android development? Download Android Studio Ladybug Feature Drop and unlock these cutting-edge features today. As always, your feedback is important to us - check known issues, report bugs, suggest improvements, and be part of our vibrant community on LinkedIn, Medium, YouTube, or X. Let's build the future of Android apps together!
**Java is a trademark or registered trademark of Oracle and/or its affiliates.
09 Jan 2025 7:00pm GMT
08 Jan 2025
Android Developers Blog
Performance Class helps Google Maps deliver premium experiences
Posted by Nevin Mital - Developer Relations Engineer, Android Media
The Android ecosystem features a diverse range of devices, and it can be difficult to build experiences that take advantage of new or premium hardware features while still working well for users on all devices. With Android 12, we introduced the Media Performance Class (MPC) standard to help developers better understand a device's capabilities and identify high-performing devices. For a refresher on what MPC is, please see our last blog post, Using performance class to optimize your user experience, or check out the Performance Class documentation.
Earlier this year, we published the first stable release of the Jetpack Core Performance library as the recommended solution for more reliably obtaining a device's MPC level. In particular, this library introduces the PlayServicesDevicePerformance class, an API that queries Google Play Services to get the most up-to-date MPC level for the current device and build. I'll get into the technical details further down, but let's start by taking a look at how Google Maps was able to tailor a feature launch to best fit each device with MPC.
Performance Class unblocks premium experience launch for Google Maps
Google Maps recently took advantage of the expanded device coverage enabled by the Play Services module to unblock a feature launch. Google Maps wanted to update their UI by increasing the transparency of some layers, which meant rendering more of the map, and they found they had to stop the rollout due to latency increases on many devices, especially towards the low end. To resolve this, the Maps team started by slicing an existing key metric, "seconds to UI item visibility", by MPC level, which revealed that while all devices had a small increase in this latency, devices without an MPC level had the largest increase.
With these results in hand, Google Maps started their rollout again, but this time only launching the feature on devices that report an MPC level. As devices continue to get updated and meet the bar for MPC, the updated Google Maps UI will be available to them as well.
The new Play Services module
MPC level requirements are defined in the Android Compatibility Definition Document (CDD), then devices and Android builds are validated against these requirements by the Android Compatibility Test Suite (CTS). The Play Services module of the Jetpack Core Performance library leverages these test results to continually update a device's reported MPC level without any additional effort on your end. This also means that you'll immediately have access to the MPC level for new device launches without needing to acquire and test each device yourself, since it already passed CTS. If the MPC level is not available from Google Play Services, the library will fall back to the MPC level declared by the OEM as a build constant.
As of writing, more than 190M in-market devices covering over 500 models across 40+ brands report an MPC level. This coverage will continue to grow over time, as older devices update to newer builds, from Android 11 and up.
Using the Core Performance library
To use Jetpack Core Performance, start by adding a dependency for the relevant modules in your Gradle configuration, and create an instance of DevicePerformance. Initializing a DevicePerformance should only happen once in your app, as early as possible - for example, in the onCreate() lifecycle event of your Application. In this example, we'll use the Google Play services implementation of DevicePerformance.
// Implementation of Jetpack Core library.
implementation("androidx.core:core-ktx:1.12.0")
// Enable APIs to query for device-reported performance class.
implementation("androidx.core:core-performance:1.0.0")
// Enable APIs to query Google Play Services for performance class.
implementation("androidx.core:core-performance-play-services:1.0.0")
import androidx.core.performance.play.services.PlayServicesDevicePerformance

class MyApplication : Application() {
    lateinit var devicePerformance: DevicePerformance

    override fun onCreate() {
        super.onCreate()
        // Use a class derived from the DevicePerformance interface
        devicePerformance = PlayServicesDevicePerformance(applicationContext)
    }
}
Then, later in your app when you want to retrieve the device's MPC level, you can call getMediaPerformanceClass():
class MyActivity : Activity() {
    private lateinit var devicePerformance: DevicePerformance

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        // Note: Good app architecture is to use a dependency framework. See
        // https://developer.android.com/training/dependency-injection for more
        // information.
        devicePerformance = (application as MyApplication).devicePerformance
    }

    override fun onResume() {
        super.onResume()
        when {
            devicePerformance.mediaPerformanceClass >= Build.VERSION_CODES.UPSIDE_DOWN_CAKE -> {
                // MPC level 34 and later.
                // Provide the most premium experience for the highest performing devices.
            }
            devicePerformance.mediaPerformanceClass == Build.VERSION_CODES.TIRAMISU -> {
                // MPC level 33.
                // Provide a high quality experience.
            }
            else -> {
                // MPC level 31, 30, or undefined.
                // Remove extras to keep experience functional.
            }
        }
    }
}
Strategies for using Performance Class
MPC is intended to identify high-end devices, so you can expect to see MPC levels for the top devices from each year, which are the devices you'll likely want to support for the longest time. For example, the Pixel 9 Pro released with Android 14 and reports an MPC level of 34, the latest level definition when it launched.
You should use MPC as a complement to any existing Device Clustering solutions you already use, such as querying a device's static specs or manually blocklisting problematic devices. An area where MPC can be a particularly helpful tool is for new device launches. New devices should be included at launch, so you can use MPC to gauge new devices' capabilities right from the start, without needing to acquire the hardware yourself or manually test each device.
A great first step to get involved is to include MPC levels in your telemetry. This can help you identify patterns in error reports or generally get a better sense of the devices your user base uses if you segment key metrics by MPC level. From there, you might consider using MPC as a dimension in your experimentation pipeline, for example by setting up A/B testing groups based on MPC level, or by starting a feature rollout with the highest MPC level and working your way down. As discussed previously, this is the approach that Google Maps took.
You could further use MPC to tune a user-facing feature, for example by adjusting the number of concurrent video playbacks your app attempts based on the MPC level's concurrent codec guarantees. However, make sure to still query a device's runtime capabilities when using this approach, as they may differ depending on the environment and state the device is in.
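To make that concrete, here is a minimal sketch (not from the post) that derives a starting concurrency budget from the MPC level reported by the DevicePerformance object shown earlier; the specific counts are illustrative assumptions, and actual codec capacity should still be verified at runtime.

import android.os.Build
import androidx.core.performance.DevicePerformance

// Illustrative starting points only; always confirm real codec capacity at runtime.
fun suggestedConcurrentPlaybacks(devicePerformance: DevicePerformance): Int =
    when {
        // MPC level 34 and later.
        devicePerformance.mediaPerformanceClass >= Build.VERSION_CODES.UPSIDE_DOWN_CAKE -> 4
        // MPC level 33.
        devicePerformance.mediaPerformanceClass >= Build.VERSION_CODES.TIRAMISU -> 3
        // MPC level 31, 30, or undefined.
        else -> 2
    }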
Get in touch!
If MPC sounds like it could be useful for your app, please give it a try! You can get started by taking a look at our sample code or documentation. We welcome you to share any questions or feedback you have in this short form.
This blog post is a part of Camera and Media Spotlight Week. We're providing resources - blog posts, videos, sample code, and more - all designed to help you uplevel the media experiences in your app.
To learn more about what Spotlight Week has to offer and how it can benefit you, be sure to read our overview blog post.
08 Jan 2025 5:00pm GMT
07 Jan 2025
Android Developers Blog
Spotlight Week: Android Camera and Media
Posted by Caren Chang- Android Developer Relations Engineer
Android offers Camera and Media APIs to help you build apps that can capture, edit, share, and play media. To help you make Android camera and media experiences even more delightful for your users, this week we're kicking off the Camera and Media Spotlight Week.
This Spotlight Week will provide resources - blog posts, videos, sample code, and more - all designed to help you uplevel the media experiences in your app. Check out highlights from the latest releases in Camera and Media APIs, including better Jetpack Compose support in CameraX, motion photo support in Media3 Transformer, simpler ExoPlayer setup, and much more! We'll also bring in developers from the community to talk about their experiences building Android camera and media apps.
Here's what we're covering during Camera and Media Spotlight week:
What's new in camera and media
Tuesday, January 7
Check out what's new in the latest CameraX and Media3 releases, including how to get started with building Camera apps with Compose.
Creating delightful and premium experiences
Wednesday, January 8
Building delightful and premium experiences for your users is what can help your app really stand out. Learn about different ways to achieve this, such as utilizing the Media Performance Class or enabling HDR video capture in your app. Learn from other developers, too: see how Google Drive enabled Ultra HDR images in its Android app, and how Instagram improved its in-app image capture experience by implementing Night Mode.
Adaptive for camera and media, for large screens and now XR!
Thursday, January 9
Thinking adaptive is important, so your app works just as well on phones as it does on large screens, like foldables, tablets, ChromeOS devices, cars, and the new Android XR platform! On Thursday, we'll be diving into the media experience on large screen devices, and how you can build in a smooth tabletop mode for your camera applications. Prepare your apps for XR devices by considering Spatial Audio and Video.
Media creation
Friday, January 10
Capturing, editing, and processing media content are fundamental features of the Android ecosystem. Learn about how Media3's Transformer module can help your app's media processing use cases, and see case studies of apps that are using Transformer in production. Listen in to how the 1 Second Everyday Android app approaches media use cases, and check out a new API that allows apps to capture concurrent camera streams. You can also learn how Android Google Developer Expert Tom Colvin experimented with building an AI-powered camera app.
These are just some of the things to think about when building camera and media experiences in your app. Keep checking this blog post for updates; we'll be adding links and more throughout the week.
07 Jan 2025 5:30pm GMT
Media3 1.5.0 — what’s new?
Posted by Kristina Simakova - Engineering Manager
This article is cross-published on Medium
Media3 1.5.0 is now available!
Transformer now supports motion photos and faster image encoding. We've also simplified the setup for DefaultPreloadManager and ExoPlayer, making it easier to use. But that's not all! We've included a new IAMF decoder, a Kotlin listener extension, and easier Player optimization through delegation.
To learn more about all new APIs and bug fixes, check out the full release notes.
Transformer improvements
Motion photo support
Transformer now supports exporting motion photos. The motion photo's image is exported if the corresponding MediaItem's image duration is set (see MediaItem.Builder().setImageDurationMs()); otherwise, the motion photo's video is exported. Note that the EditedMediaItem's duration should not be set in either case, as it will automatically be set to the corresponding MediaItem's image duration.
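As a small illustration (motionPhotoUri is a placeholder for a URI you supply, and the 2-second duration is an arbitrary choice), setting the image duration on the MediaItem is what selects the image for export:

// Setting an image duration exports the motion photo's image;
// leaving it unset exports the motion photo's video instead.
val motionPhotoItem = MediaItem.Builder()
    .setUri(motionPhotoUri)
    .setImageDurationMs(2_000)
    .build()

// Don't set a duration on the EditedMediaItem; it is derived automatically.
val editedMotionPhoto = EditedMediaItem.Builder(motionPhotoItem).build()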
Faster image encoding
This release accelerates image-to-video encoding, thanks to optimizations in DefaultVideoFrameProcessor.queueInputBitmap(). DefaultVideoFrameProcessor now treats the Bitmap given to queueInputBitmap() as immutable. The GL pipeline will resample and color-convert the input Bitmap only once. As a result, Transformer operations that take large (e.g. 12 megapixels) images as input execute faster.
AudioEncoderSettings
Similar to VideoEncoderSettings, Transformer now supports AudioEncoderSettings which can be used to set the desired encoding profile and bitrate.
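Here is a hedged sketch of how that might look, assuming the settings are routed through DefaultEncoderFactory in the same way VideoEncoderSettings are supplied; the 128 kbps bitrate is an illustrative value, not a recommendation.

// Request a specific audio bitrate for Transformer output.
val audioEncoderSettings = AudioEncoderSettings.Builder()
    .setBitrate(128_000)
    .build()

val encoderFactory = DefaultEncoderFactory.Builder(context)
    .setRequestedAudioEncoderSettings(audioEncoderSettings)
    .build()

val transformer = Transformer.Builder(context)
    .setEncoderFactory(encoderFactory)
    .build()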
Edit list support
Transformer now shifts the first video frame to start from 0. This fixes A/V sync issues in some files where an edit list is present.
Unsupported track type logging
This release includes improved logging for unsupported track types, providing more detailed information for troubleshooting and debugging.
Media3 muxer
In a previous release we added a new muxer library which can be used to create MP4 container files. The Media3 muxer offers support for a wide range of audio and video codecs, enabling seamless handling of diverse media formats. This new library also brings advanced features including:
- B-frame support
- Fragmented MP4 output
- Edit list support
The muxer library can be included as a gradle dependency:
implementation ("androidx.media3:media3-muxer:1.5.0")
Media3 muxer with Transformer
To use the media3 muxer with Transformer, set an InAppMuxer.Factory (which internally wraps media3 muxer) as the muxer factory when creating a Transformer:
val transformer = Transformer.Builder(context)
.setMuxerFactory(InAppMuxer.Factory.Builder().build())
.build()
Simpler setup for DefaultPreloadManager and ExoPlayer
With Media3 1.5.0, we added DefaultPreloadManager.Builder, which makes it much easier to build the preload components and the player. Previously we asked you to instantiate several required components (RenderersFactory, TrackSelectorFactory, LoadControl, BandwidthMeter and preload / playback Looper) first, and to be very careful to correctly share those components when injecting them into the DefaultPreloadManager constructor and the ExoPlayer.Builder. With the new DefaultPreloadManager.Builder this becomes a lot simpler:
- Build DefaultPreloadManager and ExoPlayer instances with all default components.
val preloadManagerBuilder = DefaultPreloadManager.Builder()
val preloadManager = preloadManagerBuilder.build()
val player = preloadManagerBuilder.buildExoPlayer()
- Build DefaultPreloadManager and ExoPlayer instances with custom shared components.
val preloadManagerBuilder =
    DefaultPreloadManager.Builder().setRenderersFactory(customRenderersFactory)
// The resulting preloadManager uses customRenderersFactory
val preloadManager = preloadManagerBuilder.build()
// The resulting player uses customRenderersFactory
val player = preloadManagerBuilder.buildExoPlayer()
- Build DefaultPreloadManager and ExoPlayer instances, while setting custom playback-only configurations on the ExoPlayer.
val preloadManagerBuilder = DefaultPreloadManager.Builder()
val preloadManager = preloadManagerBuilder.build()
// Tune the playback-only configurations
val playerBuilder = ExoPlayer.Builder().setFooEnabled()
// The resulting player will have playback feature "Foo" enabled
val player = preloadManagerBuilder.buildExoPlayer(playerBuilder)
Preloading the next playlist item
We've added the ability to preload the next item in the playlist of ExoPlayer. By default, playlist preloading is disabled but can be enabled by setting the duration which should be preloaded to memory:
player.preloadConfiguration = PreloadConfiguration(/* targetPreloadDurationUs= */ 5_000_000L)
With the PreloadConfiguration above, the player tries to preload five seconds of media for the next item in the playlist. Preloading is only started when no media is being loaded that is required for the ongoing playback. This way preloading doesn't compete for bandwidth with the primary playback.
When enabled, preloading can help minimize join latency when a user skips to the next item before the playback buffer reaches the next item. The first period of the next window is prepared and video, audio and text samples are preloaded into its sample queues. The preloaded period is later queued into the player with preloaded samples immediately available and ready to be fed to the codec for rendering.
Once opted in, playlist preloading can be turned off again by using PreloadConfiguration.DEFAULT:
player.preloadConfiguration = PreloadConfiguration.DEFAULT
New IAMF decoder and Kotlin listener extension
The 1.5.0 release includes a new media3-decoder-iamf module, which allows playback of IAMF immersive audio tracks in MP4 files. Apps wanting to try this out will need to build the libiamf decoder locally. See the media3 README for full instructions.
implementation ("androidx.media3:media3-decoder-iamf:1.5.0")
This release also includes a new media3-common-ktx module, a home for Kotlin-specific functionality. The first version of this module contains a suspend function that lets the caller listen to Player.Listener.onEvents. This is a building block that's used by the upcoming media3-ui-compose module (launching with media3 1.6.0) to power a Jetpack Compose playback UI.
implementation ("androidx.media3:media3-common-ktx:1.5.0")
Easier Player customization via delegation
Media3 has provided a ForwardingPlayer implementation since version 1.0.0, and we have previously suggested that apps should use it when they want to customize the way certain Player operations work, by using the decorator pattern. One very common use case is to allow or disallow certain player commands (in order to show/hide certain buttons in a UI). Unfortunately, doing this correctly with ForwardingPlayer is surprisingly hard and error-prone, because you have to consistently override multiple methods and handle the listener as well. The example code demonstrating just how fiddly this is would be too long for this blog post, so we've put it in a gist instead.
In order to make these sorts of customizations easier, 1.5.0 includes a new ForwardingSimpleBasePlayer, which builds on the consistency guarantees provided by SimpleBasePlayer to make it easier to create consistent Player implementations following the decorator pattern. The same command-modifying Player is now much simpler to implement:
class PlayerWithoutSeekToNext(player: Player) : ForwardingSimpleBasePlayer(player) {
    override fun getState(): State {
        val state = super.getState()
        return state
            .buildUpon()
            .setAvailableCommands(
                state.availableCommands.buildUpon().remove(COMMAND_SEEK_TO_NEXT).build()
            )
            .build()
    }

    // We don't need to override handleSeek, because it is guaranteed not to be called for
    // COMMAND_SEEK_TO_NEXT since we've marked that command unavailable.
}
MediaSession: Command button for media items
Command buttons for media items allow a session app to declare commands supported by certain media items that then can be conveniently displayed and executed by a MediaController or MediaBrowser:
You'll find the detailed documentation on developer.android.com.
This is the Media3 equivalent of the legacy "custom browse actions" API, with which Media3 is fully interoperable. Unlike the legacy API, command buttons for media items do not require a MediaLibraryService but are a feature of the Media3 MediaSession instead. Hence they are available for MediaController and MediaBrowser in the same way.
If you encounter any issues, have feature requests, or want to share feedback, please let us know using the Media3 issue tracker on GitHub. We look forward to hearing from you!
This blog post is a part of Camera and Media Spotlight Week. We're providing resources - blog posts, videos, sample code, and more - all designed to help you uplevel the media experiences in your app.
To learn more about what Spotlight Week has to offer and how it can benefit you, be sure to read our overview blog post.
07 Jan 2025 5:29pm GMT
19 Dec 2024
Android Developers Blog
Celebrating Another Year of #WeArePlay
Posted by Robbie McLachlan - Developer Marketing
This year #WeArePlay took us on a journey across the globe, spotlighting 300 people behind apps and games on Google Play. From a founder whose app uses AI to assist visually impaired people to a game where nimble-fingered players slice flying fruits and use special combos to beat their own high score, we met founders transforming ideas into thriving businesses.
Let's start by taking a look back at the people featured in our global film series. From a mother and son duo preserving African languages, to a founder whose app helps kids become published authors - check out the full playlist.
We also continued our global tour around the world with:
- 153 new stories from the United States like Ashley's Get Mom Strong, which gives access to rehabilitation and fitness plans to help moms heal and get strong after childbirth
- 49 new stories from Japan like Toshiya's Mirairo ID, an app that empowers the disabled community by digitizing disability certificates
- 50 new stories from Australia, including apps like Tristan's Bushfire.io, which supports communities during natural disasters
And we released global collections of 36 stories, each with a theme reflecting the diversity of the app and game community on Google Play, including:
- LGBTQ+ founders creating safe spaces and fostering representation
- Women founders breaking barriers and building impactful businesses
- Creators turning personal passions - such as fitness, mental health, or creativity - into inspiring apps
- Founders building sports apps and games that bring players, fans, and communities together
To the global community of app and game founders, thank you for sharing your inspiring journey. As we enter 2025, we look forward to discovering even more stories of the people behind games and apps businesses on Google Play.
19 Dec 2024 7:00pm GMT
18 Dec 2024
Android Developers Blog
The Second Developer Preview of Android 16
Posted by Matthew McCullough - VP of Product Management, Android Developer
The second developer preview of Android 16 is now available to test with your apps. This build includes changes designed to enhance the app experience, improve battery life, and boost performance while minimizing incompatibilities, and your feedback is critical in helping us understand the full impact of this work.
System triggered profiling
ProfilingManager was added in Android 15, giving apps the ability to request profiling data collection using Perfetto on public devices in the field. To help capture challenging trace scenarios such as startups or ANRs, ProfilingManager now includes System Triggered Profiling. Apps can use ProfilingManager#addProfilingTriggers() to register interest in receiving information about these flows. Flows covered in this release include onFullyDrawn for activity-based cold starts, and ANRs.
val anrTrigger = ProfilingTrigger.Builder(
        ProfilingTrigger.TRIGGER_TYPE_ANR
    )
    .setRateLimitingPeriodHours(1)
    .build()

val startupTrigger: ProfilingTrigger = //...

mProfilingManager.addProfilingTriggers(listOf(anrTrigger, startupTrigger))
Start component in ApplicationStartInfo
ApplicationStartInfo was added in Android 15, allowing an app to see reasons for process start, start type, start times, throttling, and other useful diagnostic data. Android 16 adds getStartComponent() to distinguish what component type triggered the start, which can be helpful for optimizing the startup flow of your app.
Richer Haptics
Android has exposed limited control over the haptic actuator since its inception.
Android 11 added support for more complex haptic effects that more advanced actuators can support through VibrationEffect.Compositions of device-defined semantic primitives.
Android 16 adds haptic APIs that let apps define the amplitude and frequency curves of a haptic effect while abstracting away differences between device capabilities.
Better job introspection
Android 16 introduces JobScheduler#getPendingJobReasons(int jobId) which can return multiple reasons why a job is pending, due to both explicit constraints set by the developer and implicit constraints set by the system.
We're also introducing JobScheduler#getPendingJobReasonsHistory(int jobId), which returns a list of the most recent constraint changes.
These APIs can help you debug why your jobs may not be executing, especially if you're seeing reduced success rates or increased latency for certain tasks. They can also help you understand whether certain jobs are not completing due to system-defined constraints rather than constraints you set explicitly.
Adaptive refresh rate
Adaptive refresh rate (ARR), introduced in Android 15, enables the display refresh rate on supported hardware to adapt to the content frame rate using discrete VSync steps. This reduces power consumption while eliminating the need for potentially jank-inducing mode-switching.
Android 16 DP2 introduces hasArrSupport() and getSuggestedFrameRate(int) while restoring getSupportedRefreshRates() to make it easier for your apps to take advantage of ARR.
RecyclerView 1.4 internally supports ARR when it is settling from a fling or smooth scroll, and we're continuing our work to add ARR support into more Jetpack libraries. This frame rate article covers many of the APIs you can use to set the frame rate so that your app can directly leverage ARR.
Job execution optimizations
Starting in Android 16, we're adjusting regular and expedited job execution runtime quota based on the following factors:
- Which app standby bucket the application is in; active standby buckets will be given a generous runtime quota.
- Jobs started while the app is visible to the user and continuing after the app becomes invisible will adhere to the job runtime quota.
- Jobs that are executing concurrently with a foreground service will adhere to the job runtime quota. If you need to perform a data transfer that may take a long time, consider using a user-initiated data transfer.
Note: To understand how to further debug and test the behavior change, read more about JobScheduler quota optimizations.
Fully deprecating JobInfo#setImportantWhileForeground
The JobInfo.Builder#setImportantWhileForeground(boolean) method indicates the importance of a job while the scheduling app is in the foreground or when temporarily exempted from background restrictions.
This method has been deprecated since Android 12 (API 31). Starting in Android 16, it will no longer function, and calls to it will be ignored.
This removal of functionality also applies to JobInfo#isImportantWhileForeground(). Starting in Android 16, this method will always return false.
Deprecated Disruptive Accessibility Announcements
Android 16 DP2 deprecates disruptive accessibility announcements, characterized by the use of announceForAccessibility or the dispatch of TYPE_ANNOUNCEMENT AccessibilityEvents. These announcements can create inconsistent experiences for users of TalkBack, Android's screen reader, and the alternatives below better serve a broader range of user needs across a variety of Android's assistive technologies.
Examples of alternatives:
- For significant UI changes like window changes, use Activity.setTitle(CharSequence) and setAccessibilityPaneTitle(java.lang.CharSequence). In Compose use Modifier.semantics { paneTitle = "paneTitle" }
- To inform the user of changes to critical UI, use setAccessibilityLiveRegion(int). In Compose use Modifier.semantics { liveRegion = LiveRegionMode.[Polite|Assertive] }. These should be used sparingly as they may generate announcements every time a View or composable is updated.
- To notify users about errors, send an AccessibilityEvent of type AccessibilityEvent#CONTENT_CHANGE_TYPE_ERROR and set AccessibilityNodeInfo#setError(CharSequence), or use TextView#setError(CharSequence).
The deprecated announceForAccessibility API includes more detail on suggested alternatives.
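To make the Compose alternatives above concrete, here is a minimal sketch (the composable name, pane title, and status string are illustrative) that replaces a disruptive announcement with a pane title and a polite live region:

import androidx.compose.foundation.layout.Column
import androidx.compose.material3.Text
import androidx.compose.runtime.Composable
import androidx.compose.ui.Modifier
import androidx.compose.ui.semantics.LiveRegionMode
import androidx.compose.ui.semantics.liveRegion
import androidx.compose.ui.semantics.paneTitle
import androidx.compose.ui.semantics.semantics

@Composable
fun SettingsPane(statusMessage: String) {
    // The pane title conveys the significant UI change when this pane appears.
    Column(modifier = Modifier.semantics { paneTitle = "Settings" }) {
        // A polite live region announces updates to this text without
        // resorting to the deprecated announceForAccessibility API.
        Text(
            text = statusMessage,
            modifier = Modifier.semantics { liveRegion = LiveRegionMode.Polite }
        )
    }
}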
Cloud search in photo picker
The photo picker provides a safe, built-in way for users to grant your app access to selected images and videos from both local and cloud storage, instead of their entire media library. Using a combination of Modular System Components through Google System Updates and Google Play services, it's supported back to Android 4.4 (API level 19). Integration requires just a few lines of code with the associated Android Jetpack library.
The developer preview includes new APIs to enable searching from the cloud media provider for the Android photo picker. Search functionality in the photo picker is coming soon.
Ranging with enhanced security
Android 16 adds support for robust security features in Wi-Fi location on supported devices with Wi-Fi 6's 802.11az, allowing apps to combine the higher accuracy, greater scalability, and dynamic scheduling of the protocol with security enhancements including AES-256-based encryption and protection against MITM attacks. This allows Wi-Fi ranging to be used more safely in proximity use cases, such as unlocking a laptop or a vehicle door. 802.11az is integrated with the Wi-Fi 6 standard, leveraging its infrastructure and capabilities for wider adoption and easier deployment.
Health Connect updates
Health Connect in the developer preview adds ACTIVITY_INTENSITY, a new datatype defined according to World Health Organization guidelines around moderate and vigorous activity. Each record requires the start time, the end time and whether the activity intensity is moderate or vigorous.
Health Connect also contains updated APIs supporting health records. This allows apps to read and write medical records in FHIR format with explicit user consent. This API is currently in an early access program. Sign up if you'd like to be part of our early access program.
Predictive back additions
Android 16 adds new APIs to help you enable predictive back system animations in gesture navigation such as the back-to-home animation. Registering the onBackInvokedCallback with the new PRIORITY_SYSTEM_NAVIGATION_OBSERVER allows your app to receive the regular onBackInvoked call whenever the system handles a back navigation without impacting the normal back navigation flow.
Android 16 additionally adds the finishAndRemoveTaskCallback() and moveTaskToBackCallback(). By registering these callbacks with the OnBackInvokedDispatcher, the system can trigger specific behaviors and play corresponding ahead-of-time animations when the back gesture is invoked.
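As a sketch of the observer registration described above (assuming the new priority constant is exposed on OnBackInvokedDispatcher, as its name suggests, and that this runs inside an Activity on Android 16):

import android.window.OnBackInvokedCallback
import android.window.OnBackInvokedDispatcher

// Observe back navigations handled by the system without
// interfering with the normal back flow.
val backObserver = OnBackInvokedCallback {
    // The system handled a back navigation; record it for analytics or state.
}

onBackInvokedDispatcher.registerOnBackInvokedCallback(
    OnBackInvokedDispatcher.PRIORITY_SYSTEM_NAVIGATION_OBSERVER, // new in Android 16
    backObserver
)

// Unregister when no longer needed:
// onBackInvokedDispatcher.unregisterOnBackInvokedCallback(backObserver)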
Two Android API releases in 2025
This preview is for the next major release of Android with a planned launch in Q2 of 2025 and we plan to have another release with new developer APIs in Q4. The Q2 major release will be the only release in 2025 to include planned behavior changes that could affect apps. The Q4 minor release will pick up feature updates, optimizations, and bug fixes; it will not include any app-impacting behavior changes.
We'll continue to have quarterly Android releases. The Q1 and Q3 updates in-between the API releases will provide incremental updates to help ensure continuous quality. We're actively working with our device partners to bring the Q2 release to as many devices as possible.
There's no change to the target API level requirements and the associated dates for apps in Google Play; our plans are for one annual requirement each year, and that will be tied to the major API level.
How to get ready
In addition to performing compatibility testing on the next major release, make sure that you're compiling your apps against the new SDK, and use the compatibility framework to enable targetSdkVersion-gated behavior changes as they become available for early testing.
App compatibility
The Android 16 Preview program runs from November 2024 until the final public release next year. At key development milestones, we'll deliver updates for your development and testing environments. Each update includes SDK tools, system images, emulators, API reference, and API diffs. We'll highlight critical APIs as they are ready to test in the preview program in blogs and on the Android 16 developer website.
We're targeting late Q1 of 2025 for our Platform Stability milestone. At this milestone, we'll deliver final SDK/NDK APIs and also final internal APIs and app-facing system behaviors. We're expecting to reach Platform Stability in March 2025, and from that time you'll have several months before the official release to do your final testing. Learn more in the release timeline details.
Get started with Android 16
You can get started today with Developer Preview 2 by flashing a system image and updating the tools. If you are currently on Developer Preview 1, you will automatically get an over-the-air update to Developer Preview 2. We're looking for your feedback so please report issues and submit feature requests on the feedback page. The earlier we get your feedback, the more we can include in the final release.
For the best development experience with Android 16, we recommend that you use the latest preview of the Android Studio Ladybug feature drop. Once you're set up, here are some of the things you should do:
- Compile against the new SDK, test in CI environments, and report any issues in our tracker on the feedback page.
- Test your current app for compatibility, learn whether your app is affected by changes in Android 16, and install your app onto a device or emulator running Android 16 and extensively test it.
We'll update the preview system images and SDK regularly throughout the Android 16 release cycle. This preview release is for developers only and not intended for daily consumer use. We're making it available by manual download. Once you've manually installed a preview build, you'll automatically get future updates over-the-air for all later previews and Betas.
If you've already installed Android 15 QPR Beta 2 and would like to flash Android 16 Developer Preview 2, you can do so without first having to wipe your device.
As we reach our Beta releases, we'll be inviting consumers to try Android 16 as well, and we'll open up enrollment for Android 16 in the Android Beta program at that time.
For complete information, visit the Android 16 developer site.
18 Dec 2024 7:00pm GMT
17 Dec 2024
Android Developers Blog
How Instagram enabled users to take stunning Low Light Photos
Posted by Donovan McMurray - Developer Relations Engineer
Instagram, the popular photo and video sharing social networking service, is constantly delighting users with a best-in-class camera experience. Recently, Instagram launched another improvement on Android with their Night Mode implementation.
As devices and their cameras become more and more capable, users expect better quality images in a wider variety of settings. Whether it's a night out with friends or the calmness right after you get your baby to fall asleep, the special moments users want to capture often don't have ideal lighting conditions.
Now, when Instagram users on Android take a photo in low light environments, they'll see a moon icon that allows them to activate Night Mode for better image quality. This feature is currently available to users with any Pixel device from the 6 series and up, a Samsung Galaxy S24 Ultra, or a Samsung Galaxy Z Flip6 or Z Fold6, with more devices to follow.
Leveraging Device-specific Camera Technologies
Android enables apps to take advantage of device-specific camera features through the Camera Extensions API. The Extensions framework currently provides functionality like Night Mode for low-light image captures, Bokeh for applying portrait-style background blur, and Face Retouch for beauty filters. All of these features are implemented by the Original Equipment Manufacturers (OEMs) in order to maximize the quality of each feature on the hardware it's running on.
Furthermore, exposing this OEM-specific functionality through the Extensions API allows developers to use a consistent implementation across all of these devices, getting the best of both worlds: implementations that are tuned to a wide range of devices with a unified API surface. According to Nilesh Patel, a Software Engineer at Instagram, "for Meta's billions of users, having to write custom code for each new device is simply not scalable. It would also add unnecessary app size when Meta users download the app. Hence our guideline is 'write once to scale to billions', favoring platform APIs."
More and more OEMs are supporting Extensions, too! There are already over 120 different devices that support the Camera Extensions, representing over 75 million monthly active users. There's never been a better time to integrate Extensions into your Android app to give your users the best possible camera experience.
Impact on Instagram
The results of adding Night Mode to Instagram have been very positive for Instagram users. Jin Cui, a Partner Engineer on Instagram, said "Night Mode has increased the number of photos captured and shared with the Instagram camera, since the quality of the photos are now visibly better in low-light scenes."
Compare the following photos to see just how big of a difference Night Mode makes. The first photo is taken in Instagram with Night Mode off, the second photo is taken in Instagram with Night Mode on, and the third photo is taken with the native camera app with the device's own low-light processing enabled.
Ensuring Quality through Image Test Suite (ITS)
The Android Camera Image Test Suite (ITS) is a framework for testing images from Android cameras. ITS tests configure the camera and capture shots to verify expected image data. These tests are functional and ensure advertised camera features work as expected. A tablet mounted on one side of the ITS box displays the test chart. The device under test is mounted on the opposite side of the ITS box.
Devices must pass the ITS tests for any feature that the device claims to support for apps to use, including the tests we have for the Night Mode Camera Extension.
The Android Camera team faced the challenge of ensuring the Night Mode Camera Extension feature functioned consistently across all devices in a scalable way. This required creating a testing environment with very low light and a wide dynamic range. This configuration was necessary to simulate real-world lighting scenarios, such as a city at night with varying levels of brightness and shadow, or the atmospheric lighting of a restaurant.
The first step to designing the test was to define the specific lighting conditions to simulate. Field testing with a light meter in various locations and lighting conditions was conducted to determine the target lux level. The goal was to ensure the camera could capture clear images in low-light conditions, which led to the establishment of 3 lux as the target lux level. The figure below shows various lighting conditions and their respective lux value.
The next step was to develop a test chart to accurately measure a wide dynamic range in a low light environment. The team developed and iterated on several test charts and arrived at the following test chart shown below. This chart arranges a grid of squares in varying shades of grey. A red outline defines the test area for cropping. This enables excluding darker external regions. The grid follows a Hilbert curve pattern to minimize abrupt light or dark transitions. The design allows for both quantitative measurements and simulation of a broad range of light conditions.
The test chart captures an image using the Night Mode Camera Extension in low light conditions. The image is used to evaluate the improvement in the shadows and midtones while ensuring the highlights aren't saturated. This evaluation involves two criteria: ensure the average luma value of the six darkest boxes is at least 85, and ensure the average luma contrast between these boxes is at least 17. The figure below shows the test capture and chart results.
By leveraging the existing ITS infrastructure, the Android Camera team was able to provide consistent, high quality Night Mode Camera Extension captures. This gives application developers the confidence to integrate and enable Night Mode captures for their users. It also allows OEMs to validate their implementations and ensure users get the best quality capture.
How to Implement Night Mode with Camera Extensions
Camera Extensions are available to apps built with Camera2 or CameraX. In this section, we'll walk through each of the features Instagram implemented. The code examples will use CameraX, but you'll find links to the Camera2 documentation at each step.
Enabling Night Mode Extension
Night Mode involves combining multiple exposures into a single still photo for better quality shots in low-light environments. So first, you'll need to check for Night Mode availability, and tell the camera system to start a Camera Extension session. With CameraX, this is done with an ExtensionsManager instead of the standard CameraManager.
private suspend fun setUpCamera() {
    // Obtain an instance of a process camera provider. The camera provider
    // provides access to the set of cameras associated with the device.
    // The camera obtained from the provider will be bound to the activity lifecycle.
    val cameraProvider = ProcessCameraProvider.getInstance(application).await()

    // Obtain an instance of the extensions manager. The extensions manager
    // enables a camera to use extension capabilities available on the device.
    val extensionsManager = ExtensionsManager.getInstanceAsync(application, cameraProvider).await()

    // Select the camera.
    val cameraSelector = CameraSelector.DEFAULT_BACK_CAMERA

    // Query if extension is available. Not all devices will support
    // extensions or might only support a subset of extensions.
    if (extensionsManager.isExtensionAvailable(cameraSelector, ExtensionMode.NIGHT)) {
        // Unbind all use cases before enabling different extension modes.
        try {
            cameraProvider.unbindAll()

            // Retrieve a night extension enabled camera selector
            val nightCameraSelector = extensionsManager.getExtensionEnabledCameraSelector(
                cameraSelector,
                ExtensionMode.NIGHT
            )

            // Bind image capture and preview use cases with the extension enabled camera
            // selector.
            val imageCapture = ImageCapture.Builder().build()
            val preview = Preview.Builder().build()

            // Connect the preview to receive the surface the camera outputs the frames
            // to. This will allow displaying the camera frames in either a TextureView
            // or SurfaceView. The SurfaceProvider can be obtained from the PreviewView.
            preview.setSurfaceProvider(surfaceProvider)

            // Returns an instance of the camera bound to the lifecycle
            // Use this camera object to control various operations with the camera
            // Example: flash, zoom, focus metering etc.
            val camera = cameraProvider.bindToLifecycle(
                lifecycleOwner,
                nightCameraSelector,
                imageCapture,
                preview
            )
        } catch (e: Exception) {
            Log.e(TAG, "Use case binding failed", e)
        }
    } else {
        // In the case where the extension isn't available, you should set up
        // CameraX normally with non-extension-enabled CameraSelector.
    }
}
To do this in Camera2, see the Create a CameraExtensionSession with the Camera2 Extensions API guide.
Implementing the Progress Bar and PostView Image
For an even more elevated user experience, you can provide feedback while the Night Mode capture is processing. In Android 14, we added callbacks for the progress and for post view, which is a temporary image capture before the Night Mode processing is complete. The below code shows how to use these callbacks in the takePicture() method. The actual implementation to update the UI is very app-dependent, so we'll leave the actual UI updating code to you.
// When setting up the ImageCapture.Builder, set postviewEnabled and
// postviewResolutionSelector in order to get a PostView bitmap in the
// onPostviewBitmapAvailable callback when takePicture() is called.
val cameraInfo = cameraProvider.getCameraInfo(cameraSelector)
val isPostviewSupported =
    ImageCapture.getImageCaptureCapabilities(cameraInfo).isPostviewSupported

val postviewResolutionSelector = ResolutionSelector.Builder()
    .setAspectRatioStrategy(AspectRatioStrategy.RATIO_16_9_FALLBACK_AUTO_STRATEGY)
    .setResolutionStrategy(
        ResolutionStrategy(
            previewSize,
            ResolutionStrategy.FALLBACK_RULE_CLOSEST_LOWER_THEN_HIGHER
        )
    )
    .build()

imageCapture = ImageCapture.Builder()
    .setTargetAspectRatio(AspectRatio.RATIO_16_9)
    .setPostviewEnabled(isPostviewSupported)
    .setPostviewResolutionSelector(postviewResolutionSelector)
    .build()

// When the Night Mode photo is being taken, define these additional callbacks
// to implement PostView and a progress indicator in your app.
imageCapture.takePicture(
    outputFileOptions,
    Dispatchers.Default.asExecutor(),
    object : ImageCapture.OnImageSavedCallback {
        override fun onPostviewBitmapAvailable(bitmap: Bitmap) {
            // Add the Bitmap to your UI as a placeholder while the final result is processed
        }

        override fun onCaptureProcessProgressed(progress: Int) {
            // Use the progress value to update your UI; values go from 0 to 100.
        }

        override fun onImageSaved(outputFileResults: ImageCapture.OutputFileResults) {
            // The final Night Mode image has been saved; update your UI with the result.
        }

        override fun onError(exception: ImageCaptureException) {
            // Handle the capture failure.
        }
    }
)
To accomplish this in Camera2, see the CameraFragment.kt file in the Camera2Extensions sample app.
Implementing the Moon Icon Indicator
Another user-focused design touch is showing the moon icon to let the user know that a Night Mode capture will happen. It's also a good idea to let the user tap the moon icon to disable Night Mode capture. There's an upcoming API in Android 16 next year to let you know when the device is in a low-light environment.
Here are the possible values for the Night Mode Indicator API:
UNKNOWN
- The camera is unable to reliably detect the lighting conditions of the current scene to determine if a photo will benefit from a Night Mode Camera Extension capture.
OFF
- The camera has detected lighting conditions that are sufficiently bright. Night Mode Camera Extension is available but may not be able to optimize the camera settings to take a higher quality photo.
ON
- The camera has detected low-light conditions. It is recommended to use Night Mode Camera Extension to optimize the camera settings to take a high-quality photo in the dark.
Next Steps
Read more about Android's camera APIs in the Camera2 guides and the CameraX guides. Once you've got the basics down, check out the Android Camera and Media Dev Center to take your camera app development to the next level. For more details on upcoming Android features, like the Night Mode Indicator API, get started with the Android 16 Preview program.
17 Dec 2024 8:15pm GMT
What's new in CameraX 1.4.0 and a sneak peek of Jetpack Compose support
Posted by Scott Nien - Software Engineer (scottnien@)
Get ready to level up your Android camera apps! CameraX 1.4.0 just dropped with a load of awesome new features and improvements. We're talking expanded HDR capabilities, preview stabilization, the versatile effect framework, and a whole lot of other cool stuff to explore. We will also explore how to seamlessly integrate CameraX with Jetpack Compose! Let's dive in and see how these enhancements can take your camera app to the next level.
HDR preview and Ultra HDR
High Dynamic Range (HDR) is a game-changer for photography, capturing a wider range of light and detail to create stunningly realistic images. With CameraX 1.3.0, we brought you HDR video recording capabilities, and now in 1.4.0, we're taking it even further! Get ready for HDR Preview and Ultra HDR. These exciting additions empower you to deliver an even richer visual experience to your users.
HDR Preview
This new feature allows you to enable HDR on Preview without needing to bind a VideoCapture use case. This is especially useful for apps that use a single preview stream for both showing preview on display and video recording with an OpenGL pipeline.
To fully enable the HDR, you need to ensure your OpenGL pipeline is capable of processing the specific dynamic range format and then check the camera capability.
See the following code snippet for an example of enabling HLG10, which is the baseline HDR standard that device makers must support on cameras with 10-bit output.
// Declare your OpenGL pipeline supported dynamic range format.
val openGLPipelineSupportedDynamicRange = setOf(
    DynamicRange.SDR,
    DynamicRange.HLG_10_BIT
)

// Check camera dynamic range capabilities.
val isHlg10Supported = cameraProvider.getCameraInfo(cameraSelector)
    .querySupportedDynamicRanges(openGLPipelineSupportedDynamicRange)
    .contains(DynamicRange.HLG_10_BIT)

val preview = Preview.Builder().apply {
    if (isHlg10Supported) {
        setDynamicRange(DynamicRange.HLG_10_BIT)
    }
}.build()
Ultra HDR
Introducing Ultra HDR, a new format in Android 14 that lets users capture stunningly realistic photos with incredible dynamic range. And the best part? CameraX 1.4.0 makes it incredibly easy to add Ultra HDR capture to your app with just a few lines of code:
val cameraSelector = CameraSelector.DEFAULT_BACK_CAMERA
val cameraInfo = cameraProvider.getCameraInfo(cameraSelector)
val isUltraHdrSupported = ImageCapture.getImageCaptureCapabilities(cameraInfo)
    .supportedOutputFormats
    .contains(ImageCapture.OUTPUT_FORMAT_JPEG_ULTRA_HDR)

val imageCapture = ImageCapture.Builder().apply {
    if (isUltraHdrSupported) {
        setOutputFormat(ImageCapture.OUTPUT_FORMAT_JPEG_ULTRA_HDR)
    }
}.build()
Jetpack Compose support
While this post focuses on 1.4.0, we're excited to announce the Jetpack Compose support in CameraX 1.5.0 alpha. We're adding support for a Composable Viewfinder built on top of AndroidExternalSurface and AndroidEmbeddedExternalSurface. The CameraXViewfinder Composable hooks up a display surface to a CameraX Preview use case, handling the complexities of rotation, scaling and Surface lifecycle so you don't need to.
// in build.gradle implementation ("androidx.camera:camera-compose:1.5.0-alpha03") class PreviewViewModel : ViewModel() { private val _surfaceRequests = MutableStateFlow<SurfaceRequest?>(null) val surfaceRequests: StateFlow<SurfaceRequest?> get() = _surfaceRequests.asStateFlow() private fun produceSurfaceRequests(previewUseCase: Preview) { // Always publish new SurfaceRequests from Preview previewUseCase.setSurfaceProvider { newSurfaceRequest -> _surfaceRequests.value = newSurfaceRequest } } // ... } @Composable fun MyCameraViewfinder( viewModel: PreviewViewModel, modifier: Modifier = Modifier ) { val currentSurfaceRequest: SurfaceRequest? by viewModel.surfaceRequests.collectAsState() currentSurfaceRequest?.let { surfaceRequest -> CameraXViewfinder( surfaceRequest = surfaceRequest, implementationMode = ImplementationMode.EXTERNAL, // Or EMBEDDED modifier = modifier ) } }
To learn more about unlocking the power of CameraX in Jetpack Compose, read Part 1 of the Getting Started with CameraX in Jetpack Compose blog series.
Kotlin-friendly APIs
CameraX is getting even more Kotlin-friendly! In 1.4.0, we've introduced two new suspend functions to streamline camera initialization and image capture.
// CameraX initialization
val cameraProvider = ProcessCameraProvider.awaitInstance()

val imageProxy = imageCapture.takePicture()
// Processing imageProxy
imageProxy.close()
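As a usage sketch only (assuming the calls above run inside a coroutine scope such as lifecycleScope, and that awaitInstance takes a Context argument), wiring these up might look like this:

// Sketch: lifecycleScope and applicationContext come from your Activity or
// Fragment; the Context parameter on awaitInstance is an assumption here.
lifecycleScope.launch {
    val cameraProvider = ProcessCameraProvider.awaitInstance(applicationContext)
    // ... bind your use cases, then capture:
    val imageProxy = imageCapture.takePicture()
    // Process the ImageProxy, then release it.
    imageProxy.close()
}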
Preview Stabilization and Mirror mode
Preview Stabilization
Preview stabilization mode was added in Android 13 to enable stabilization on all non-RAW streams, including preview and MediaCodec input surfaces. Compared to the previous video stabilization mode, which could produce an inconsistent FoV (field of view) between the preview and the recorded video, this new preview stabilization mode ensures consistency and thus provides a better user experience. For apps that record the preview stream directly for video recording, this mode is also the only way to enable stabilization.
Follow the code below to enable preview stabilization. Note that once preview stabilization is turned on, it is applied not only to the Preview but also to the VideoCapture use case if one is bound.
val isPreviewStabilizationSupported =
    Preview.getPreviewCapabilities(cameraProvider.getCameraInfo(cameraSelector))
        .isStabilizationSupported

val preview = Preview.Builder().apply {
    if (isPreviewStabilizationSupported) {
        setPreviewStabilizationEnabled(true)
    }
}.build()
MirrorMode
While CameraX 1.3.0 introduced mirror mode for VideoCapture, we've now brought this handy feature to Preview in 1.4.0. This is especially useful for devices with outer displays, allowing you to create a more natural selfie experience when using the rear camera.
To enable mirror mode, simply call the Preview.Builder.setMirrorMode API. This feature is supported on Android 13 and above.
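As a minimal sketch only (assuming MirrorMode.MIRROR_MODE_ON from androidx.camera.core is the behavior you want for the rear-camera, outer-display selfie case described above), enabling it could look like this:

import androidx.camera.core.MirrorMode
import androidx.camera.core.Preview

// Sketch: mirror the preview stream; MIRROR_MODE_ON is an assumption for the
// outer-display selfie use case.
val preview = Preview.Builder()
    .setMirrorMode(MirrorMode.MIRROR_MODE_ON)
    .build()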
Real-time Effect
CameraX 1.3.0 introduced the CameraEffect framework, giving you the power to customize your camera output with OpenGL. Now, in 1.4.0, we're taking it a step further. In addition to applying your own custom effects, you can now leverage a set of pre-built effects provided by CameraX and Media3, making it easier than ever to enhance your app's camera features.
Overlay Effect
The new camera-effects artifact aims to provide ready-to-use effect implementations, starting with the OverlayEffect. This effect lets you draw overlays on top of camera frames using the familiar Canvas API.
The following sample code shows how to detect a QR code and draw its outline once it is detected.
By default, drawing is performed in surface frame coordinates. But what if you need to use camera sensor coordinates? No problem! OverlayEffect provides the Frame#getSensorToBufferTransform function, allowing you to apply the necessary transformation matrix to your overlayCanvas.
In this example, we use CameraX's MLKit Vision APIs (MlKitAnalyzer) and specify COORDINATE_SYSTEM_SENSOR to obtain QR code corner points in sensor coordinates. This ensures accurate overlay placement regardless of device orientation or screen aspect ratio.
// in build.gradle implementation ("androidx.camera:camera-effects:1.4.1}") implementation ("androidx.camera:camera-mlkit-vision:1.4.1") var qrcodePoints: Array<Point>? = null var qrcodeTimestamp = 0L val qrcodeBoxEffect = OverlayEffect( PREVIEW /* applied on the preview only */, 5, /* hold multiple frames in the queue so we can match analysis result with preview frame */, Handler(Looper.getMainLooper()), {} ) fun initCamera() { qrcodeBoxEffect.setOnDrawListener { frame -> if(frame.timestamp != qrcodeTimestamp) { // Do not change the drawing if the frame doesn't match the analysis // result. return@setOnDrawListener true } frame.overlayCanvas.drawColor(Color.TRANSPARENT, PorterDuff.Mode.CLEAR) qrcodePoints?.let { // Using sensor coordinates to draw. frame.overlayCanvas.setMatrix(frame.sensorToBufferTransform) val path = android.graphics.Path().apply { it.forEachIndexed { index, point -> if (index == 0) { moveTo(point.x.toFloat(), point.y.toFloat()) } else { lineTo(point.x.toFloat(), point.y.toFloat()) } } lineTo(it[0].x.toFloat(), it[0].y.toFloat()) } frame.overlayCanvas.drawPath(path, paint) } true } val imageAnalysis = ImageAnalysis.Builder() .build() .apply { setAnalyzer(executor, MlKitAnalyzer( listOf(barcodeScanner!!), COORDINATE_SYSTEM_SENSOR, executor ) { result -> val barcodes = result.getValue(barcodeScanner!!) qrcodePoints = barcodes?.takeIf { it.size > 0}?.get(0)?.cornerPoints // track the timestamp of the analysis result and release the // preview frame. qrcodeTimestamp = result.timestamp qrcodeBoxEffect.drawFrameAsync(qrcodeTimestamp) } ) } val useCaseGroup = UseCaseGroup.Builder() .addUseCase(preview) .addUseCase(imageAnalysis) .addEffect(qrcodeBoxEffect) .build() cameraProvider.bindToLifecycle( lifecycleOwner, cameraSelector, usecaseGroup) }
Media3 Effect
Want to add stunning camera effects to your CameraX app? Now you can tap into the power of Media3's rich effects framework! This exciting integration allows you to apply Media3 effects to your CameraX output, including Preview, VideoCapture, and ImageCapture.
This means you can easily enhance your app with a wide range of professional-grade effects, from blurs and color filters to transitions and more. To get started, simply use the new androidx.camera.media3:media3-effect artifact.
Here's a quick example of how to apply a grayscale filter to your camera output:
// in build.gradle implementation ("androidx.camera.media3:media3-effect:1.0.0-alpha01") implementation ("androidx.media3:media3-effect:1.5.0") import androidx.camera.media3.effect.Media3Effect val media3Effect = Media3Effect( requireContext(), PREVIEW or VIDEO_CAPTURE or IMAGE_CAPTURE, mainThreadExecutor(), {} ) // use grayscale effect media3Effect.setEffects(listOf(RgbFilter.createGrayscaleFilter()) cameraController.setEffects(setOf(media3Effect)) // or using UseCaseGroup API
Screen Flash
Taking selfies in low light just got easier with CameraX 1.4.0! This release introduces a powerful new feature: screen flash. Instead of relying on a traditional LED flash, which most selfie cameras don't have, screen flash cleverly utilizes your phone's display. By momentarily turning the screen bright white, it provides a burst of illumination that helps capture clear and vibrant selfies even in challenging lighting conditions.
Integrating screen flash into your CameraX app is flexible and straightforward. You have two main options:
1. Implement the ScreenFlash interface: This gives you full control over the screen flash behavior. You can customize the color, intensity, duration, and any other aspect of the flash. This is ideal if you need a highly tailored solution; see the sketch after this list.
2. Use the built-in implementation: For a quick and easy solution, leverage the pre-built screen flash functionality in ScreenFlashView or PreviewView. This implementation handles all the heavy lifting for you.
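For option 1, here is a minimal sketch of a custom ImageCapture.ScreenFlash implementation, assuming a hypothetical full-screen white overlayView in your layout and the Activity's window for brightness control; the built-in path shown further below is usually the simpler choice.

// Sketch only: overlayView and window are placeholders for your own UI.
val customScreenFlash = object : ImageCapture.ScreenFlash {
    override fun apply(
        expirationTimeMillis: Long,
        listener: ImageCapture.ScreenFlashListener
    ) {
        // Brighten the screen: show the white overlay and max out window brightness.
        overlayView.visibility = View.VISIBLE
        val params = window.attributes
        params.screenBrightness = 1f
        window.attributes = params
        // Tell CameraX the screen is ready so the capture can proceed.
        listener.onCompleted()
    }

    override fun clear() {
        // Restore the UI once the capture has finished.
        overlayView.visibility = View.GONE
        val params = window.attributes
        params.screenBrightness = WindowManager.LayoutParams.BRIGHTNESS_OVERRIDE_NONE
        window.attributes = params
    }
}

imageCapture.screenFlash = customScreenFlash
imageCapture.flashMode = ImageCapture.FLASH_MODE_SCREEN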
If you're already using PreviewView in your app, enabling screen flash is incredibly simple. Just enable it directly on the PreviewView instance. If you need more control or aren't using PreviewView, you can use ScreenFlashView directly.
Here's a code example demonstrating how to enable screen flash:
// Case 1: PreviewView + CameraX core API
previewView.setScreenFlashWindow(activity.getWindow())
imageCapture.screenFlash = previewView.screenFlash
imageCapture.setFlashMode(ImageCapture.FLASH_MODE_SCREEN)

// Case 2: PreviewView + CameraController
previewView.setScreenFlashWindow(activity.getWindow())
cameraController.setImageCaptureFlashMode(ImageCapture.FLASH_MODE_SCREEN)

// Case 3: use ScreenFlashView
screenFlashView.setScreenFlashWindow(activity.getWindow())
imageCapture.setScreenFlash(screenFlashView.getScreenFlash())
imageCapture.setFlashMode(ImageCapture.FLASH_MODE_SCREEN)
Camera Extensions new features
Camera Extensions APIs aim to help apps access the cutting-edge capabilities previously available only in built-in camera apps. And the ecosystem is growing rapidly! In 2024, we've seen major players like Pixel, Samsung, Xiaomi, Oppo, OnePlus, Vivo, and Honor all embrace Camera Extensions, particularly for Night Mode and Bokeh Mode. CameraX 1.4.0 takes this even further by adding support for brand-new Android 15 Camera Extensions features, including:
- Postview: Provides a preview of the captured image almost instantly before the long-exposure shots are completed
- Capture Process Progress: Displays a progress indicator so users know how long capturing and processing will take, improving the experience for features like Night Mode
- Extensions Strength: Allows users to fine-tune the intensity of the applied effect (see the sketch after the sample code below)
Below is an example of the improved UX that uses postview and capture process progress features on Samsung S24 Ultra.
Interested in how this can be implemented? See the sample code below:
val extensionsCameraSelector = extensionsManager
    .getExtensionEnabledCameraSelector(DEFAULT_BACK_CAMERA, extensionMode)

val isPostviewSupported = ImageCapture.getImageCaptureCapabilities(
    cameraProvider.getCameraInfo(extensionsCameraSelector)
).isPostviewSupported

val imageCapture = ImageCapture.Builder().apply {
    setPostviewEnabled(isPostviewSupported)
}.build()

imageCapture.takePicture(outputfileOptions, executor, object : OnImageSavedCallback {
    override fun onImageSaved(outputFileResults: OutputFileResults) {
        // Final image saved.
    }

    override fun onPostviewBitmapAvailable(bitmap: Bitmap) {
        // Postview bitmap is available.
    }

    override fun onCaptureProcessProgressed(progress: Int) {
        // Capture process progress update.
    }
})
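The Extensions Strength item from the list above isn't covered by this sample. As a hedged sketch only (assuming the CameraExtensionsControl and CameraExtensionsInfo accessors that accompany these extensions features, and a camera obtained from bindToLifecycle with the extensions-enabled selector), adjusting the strength might look like this:

// Sketch: the control/info accessors and their exact shapes are assumptions.
val extensionsInfo = extensionsManager.getCameraExtensionsInfo(camera.cameraInfo)
val extensionsControl = extensionsManager.getCameraExtensionsControl(camera.cameraControl)

if (extensionsInfo.isExtensionStrengthAvailable) {
    // Strength ranges from 0 to 100; 50 applies the effect at half intensity.
    extensionsControl?.setExtensionStrength(50)
}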
Important: If your app ran into the CameraX Extensions issue on Pixel 9 series devices, please use CameraX 1.4.1 instead. This release fixes a critical issue that prevented Night Mode from working correctly with takePicture.
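Moving to 1.4.1 is just a dependency bump; here is a sketch assuming the common CameraX artifacts an extensions-enabled app typically declares:

// in build.gradle
implementation("androidx.camera:camera-core:1.4.1")
implementation("androidx.camera:camera-camera2:1.4.1")
implementation("androidx.camera:camera-lifecycle:1.4.1")
implementation("androidx.camera:camera-view:1.4.1")
implementation("androidx.camera:camera-extensions:1.4.1")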
What's Next
We hope you enjoy this new release. Our mission is to make camera development a joy, removing the friction and pain points so you can focus on innovation. With CameraX, you can easily harness the power of Android's camera capabilities and build truly amazing app experiences.
Have questions or want to connect with the CameraX team? Join the CameraX developer discussion group or file a bug report.
We can't wait to see what you create!
17 Dec 2024 8:00pm GMT