19 Apr 2025
TalkAndroid
AI stuns researchers by rewriting its own code to overcome limitations
In an unprecedented development that challenges our understanding of artificial intelligence boundaries, researchers at Sakana AI have witnessed…
19 Apr 2025 3:30pm GMT
Board Kings Free Rolls – Updated Every Day!
Run out of rolls for Board Kings? Find links for free rolls right here, updated daily!
19 Apr 2025 3:25pm GMT
Coin Tales Free Spins – Updated Every Day!
Tired of running out of Coin Tales Free Spins? We update our links daily, so you won't have that problem again!
19 Apr 2025 3:24pm GMT
Avatar World Codes – April 2025 – Updated Daily
Find all the latest Avatar World Codes right here in this article! Read on for more!
19 Apr 2025 3:22pm GMT
Coin Master Free Spins & Coins Links
Find all the latest Coin Master free spins right here! We update daily, so be sure to check back often!
19 Apr 2025 3:21pm GMT
Monopoly Go Events Schedule Today – Updated Daily
Current active events are Easter Egg Hunt Event, Tournament -24 Carrot Contest, and Special Event - Spring Treasures
19 Apr 2025 3:20pm GMT
Monopoly Go – Free Dice Links Today (Updated Daily)
If you keep on running out of dice, we have just the solution! Find all the latest Monopoly Go free dice links right here!
19 Apr 2025 3:18pm GMT
Family Island Free Energy Links (Updated Daily)
Tired of running out of energy on Family Island? We have all the latest Family Island Free Energy links right here, and we update these daily!
19 Apr 2025 3:15pm GMT
Crazy Fox Free Spins & Coins (Updated Daily)
If you need free coins and spins in Crazy Fox, look no further! We update our links daily to bring you the newest working links!
19 Apr 2025 3:12pm GMT
Match Masters Free Gifts, Coins, And Boosters (Updated Daily)
Tired of running out of boosters for Match Masters? Find new Match Masters free gifts, coins, and booster links right here! Updated Daily!
19 Apr 2025 3:10pm GMT
Solitaire Grand Harvest – Free Coins (Updated Daily)
Get Solitaire Grand Harvest free coins now, new links added daily. Only tested and working links, complete with a guide on how to redeem the links.
19 Apr 2025 3:08pm GMT
Dice Dreams Free Rolls – Updated Daily
Get the latest Dice Dreams free rolls links, updated daily! Complete with a guide on how to redeem the links.
19 Apr 2025 3:07pm GMT
Unlock hidden Netflix categories in 2025 with these secret codes
Discover Netflix's secret world beyond its standard interface with hidden category codes that unlock thousands of specific genres.…
19 Apr 2025 6:30am GMT
18 Apr 2025
TalkAndroid
Comcast Unveils 5-Year Price Guarantee for Xfinity Internet Users
In an age of constant price hikes, this could be some much needed relief.
18 Apr 2025 5:00pm GMT
King Legacy Codes
Find all the latest King Legacy codes right here!
18 Apr 2025 4:07pm GMT
The Resistance Tycoon Codes
Here you will find all the latest The Resistance Tycoon Codes! Tap the link for more!
18 Apr 2025 4:07pm GMT
09 Apr 2025
Android Developers Blog
Prioritize media privacy with Android Photo Picker and build user trust
Posted by Tatiana van Maaren - Global T&S Partnerships Lead, Privacy & Security, and Roxanna Aliabadi Walker - Product Manager
At Google Play, we're dedicated to building user trust, especially when it comes to sensitive permissions and your data. We understand that managing files and media permissions can be confusing, and users often worry about which files apps can access. Since these files often contain sensitive information like family photos or financial documents, it's crucial that users feel in control. That's why we're working to provide clearer choices, so users can confidently grant permissions without sacrificing app functionality or their privacy.
Below is a set of best practices to consider for improving user trust when apps request broad file access, ultimately leading to a more successful and sustainable app ecosystem.
Prioritize user privacy with data minimization
Building user trust starts with requesting only the permissions essential for your app's core functions. We understand that photos and videos are sensitive data, and broad access increases security risks. That's why Google Play now restricts READ_MEDIA_IMAGES and READ_MEDIA_VIDEO permissions, allowing developers to request them only when absolutely necessary, typically for apps like photo/video managers and galleries.
Leverage privacy-friendly solutions
Instead of requesting broad storage access, we encourage developers to use the Android Photo Picker, introduced in Android 13. This tool offers a privacy-centric way for users to select specific media files without granting access to their entire library. The Android Photo Picker provides an intuitive interface, including access to cloud-backed photos and videos, and allows for customization to fit your app's needs. In addition, this system picker is backported to Android 4.4, ensuring a consistent experience for all users. By eliminating runtime permissions, the Android Photo Picker simplifies the user experience and builds trust through transparency.
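As a sketch, selecting a single image with the system photo picker from an Activity might look like the following (this assumes the AndroidX Activity library; the activity and helper names are illustrative):

```kotlin
import android.net.Uri
import androidx.activity.ComponentActivity
import androidx.activity.result.PickVisualMediaRequest
import androidx.activity.result.contract.ActivityResultContracts

class ProfileActivity : ComponentActivity() {

    // Registers a photo picker launcher in single-select mode. The app
    // receives a URI grant for just the chosen item; no storage
    // permission is required.
    private val pickMedia =
        registerForActivityResult(ActivityResultContracts.PickVisualMedia()) { uri: Uri? ->
            if (uri != null) {
                showProfilePicture(uri)
            }
        }

    private fun chooseProfilePicture() {
        // Launch the picker filtered to images only.
        pickMedia.launch(
            PickVisualMediaRequest(ActivityResultContracts.PickVisualMedia.ImageOnly)
        )
    }

    private fun showProfilePicture(uri: Uri) {
        // Hypothetical: load the URI into the app's UI.
    }
}
```

Because the selection happens in the system UI, the app never sees the rest of the library, which is exactly the data-minimization property described above.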
Build trust through transparent data practices
We understand that some developers have historically used custom photo pickers for tailored user experiences. However, regardless of whether you use a custom or system picker, transparency with users is crucial. Users want to know why your app needs access to their photos and videos.
Developers should strive to provide clear and concise explanations within their apps, ideally at the point where the permission is requested. Consider the following best practices when crafting your permission request flows:
- When requesting media access, provide clear explanations within your app. Specifically, tell users which media your app needs (e.g., all photos, profile pictures, sharing videos) and explain the functionality that relies on it (e.g., 'To choose a profile picture,' 'To share videos with friends').
- Clearly outline how user data will be used and protected in your privacy policies. Explain whether data is stored locally, transmitted to a server, or shared with third parties. Reassure users that their data will be handled responsibly and securely.
Learn how Snap has embraced the Android System Picker to prioritize user privacy and streamline their media selection experience. Here's what they have to say about their implementation:

"One of our goals is to provide a seamless and intuitive communication experience while ensuring Snapchatters have control over their content. The new flow of the Android Photo Picker is the perfect balance of providing user control of the content they want to share while ensuring fast communication with friends on Snapchat."
- Marc Brown, Product Manager
Get started
Start building a more trustworthy app experience. Explore the Android Photo Picker and implement privacy-first data practices today.
Acknowledgement
Special thanks to: May Smith - Product Manager, and Anita Issagholyan - Senior Policy Specialist
09 Apr 2025 5:00pm GMT
Gemini in Android Studio for businesses: Develop with confidence, powered by AI
Posted by Sandhya Mohan - Product Manager
To empower Android developers at work, we're excited to announce a new offering of Gemini in Android Studio for businesses. This offering is specifically designed to meet the added privacy, security, and management needs of small and large organizations. We've heard that some businesses have additional needs requiring more sensitive data protection, and this offering delivers the same Gemini in Android Studio that you've grown accustomed to, now with the additional privacy enhancements that your organization might require.
Developers and admins can unlock these features and benefits by subscribing to Gemini Code Assist Standard or Enterprise editions. A Google Cloud administrator can purchase a subscription and assign licenses to developers in their organization directly from the Google Cloud console.
Your code stays secure
Our data governance policy helps ensure that customer code, customer inputs, and the recommendations generated will not be used to train any shared models. Customers control and own their data and IP. The offering also comes with security features like Private Google Access, VPC Service Controls, and Enterprise Access Controls with granular IAM permissions to help enterprises adopt AI assistance at scale without compromising on security and privacy. Using a Gemini Code Assist Standard or Enterprise license enables multiple industry certifications such as:
- SOC 1/2/3, ISO/IEC 27001 (Information Security Management)
- 27017 (Cloud Security)
- 27018 (Protection of PII)
- 27701 (Privacy Information Management)
More details are at Certifications and security for Gemini.
IP indemnification
Organizations will benefit from generative AI IP indemnification, safeguarding their organizations against third parties claiming copyright infringement related to the AI-generated code. This added layer of protection is the same indemnification policy we provide to Google Cloud customers using our generative AI APIs, and allows developers to leverage the power of AI with greater confidence and reduced risk.
Code customization
Developers with a Code Assist Enterprise license can get tailored assistance customized to their organization's codebases by connecting to their GitHub, GitLab or BitBucket repositories (including on-premise installations), giving Gemini in Android Studio awareness of the classes and methods their team is most likely to use. This allows Gemini to tailor code completion suggestions, code generations, and chat responses to their business's best practices, and save developers time they would otherwise have to spend integrating with their company's preferred frameworks.
Designed for Android development
As always, we've designed Gemini in Android Studio with the unique needs of Android developers in mind, offering tailored assistance at every stage of the software development lifecycle. From the initial phases of writing, refactoring, and documenting your code, Gemini acts as an intelligent coding companion to boost productivity. With features like:
- Build & Sync error support: Get targeted insights to help solve build and sync errors

- Gemini-powered App Quality Insights: Analyze crashes reported by Google Play Console and Firebase Crashlytics

- Get help with Logcat crashes: Simply click on "Ask Gemini" to get a contextual response on how to resolve the crash.

In Android Studio, Gemini is designed specifically for the Android ecosystem, making it an invaluable tool throughout the entire journey of creating and publishing an Android app.
Check out Gemini in Android Studio for business
This offering for businesses marks a significant step forward in empowering Android development teams with the power of AI. With this subscription-based offering, no code is stored, and crucially, your code is never used for model training. By providing generative AI indemnification and robust enterprise management tools, we're enabling organizations to innovate faster and build high-quality Android applications with confidence.
Ready to get started? Here's what you need
To get started, you'll need a Gemini Code Assist Enterprise license and Android Studio Narwhal or Android Studio for Platform found on the canary release channel. Purchase your Gemini Code Assist license or contact a Google Cloud sales team today for a personalized consultation on how you can unlock the power of AI for your organization.
Note: Gemini for businesses is also available for Android Studio Platform users.
We appreciate any feedback on things you like or features you would like to see. If you find a bug, please report the issue and also check out known issues. Remember to also follow us on X, LinkedIn, Blog, or YouTube for more Android development updates!
09 Apr 2025 12:00am GMT
07 Apr 2025
Android Developers Blog
Widgets Take Center Stage with One UI 7
Posted by André Labonté - Senior Product Manager, Android Widgets
On April 7th, Samsung will begin rolling out One UI 7 to more devices globally. Included in this bold new design is greater personalization, with an optimized widget experience and an updated set of One UI 7 widgets. This ushers in a new era where widgets are more prominent to users and integral to the daily device experience.
This update presents a prime opportunity for Android developers to enhance their app experience with a widget:
- More Visibility: Widgets put your brand and key features front and center on the user's device, so they're more likely to see it.
- Better User Engagement: By giving users quick access to important features, widgets encourage them to use your app more often.
- Increased Conversions: You can use widgets to recommend personalized content or promote premium features, which could lead to more conversions.
- Happier Users Who Stick Around: Easy access to app content and features through widgets can lead to overall better user experience, and contribute to retention.
More discoverable than ever with Google Play's Widget Discovery features!
- Dedicated Widgets Search Filter: Users can now directly search for apps with widgets using a dedicated filter on Google Play. This means your apps/games with widgets will be easily identified, helping drive targeted downloads and engagement.
- New Widget Badges on App Detail Pages: We've introduced a visual badge on your app's detail pages to clearly indicate the presence of widgets. This eliminates guesswork for users and highlights your widget offerings, encouraging them to explore and utilize this capability.
- Curated Widgets Editorial Page: We're actively educating users on the value of widgets through a new editorial page. This curated space showcases collections of excellent widgets and promotes the apps that leverage them. This provides an additional channel for your widgets to gain visibility and reach a wider audience.
Getting started with Widgets
Whether you are planning a new widget, or investing in an update to an existing widget, we have tools to help!
- Quality Tiers are a great starting point to understand what makes a great Android widget. Consider making your widget resizable to the recommended sizes, so users can customize the size just right for them.
- Canonical Layouts make designing and building widgets easier than ever. Designers, we see you - check out this new Figma Widget Design Kit.
- Jetpack Glance is the most efficient way to build a great widget for your app. Follow along with the Coding Widgets layout video or codelab, using Jetpack Glance to code adaptive layouts!
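A minimal Glance widget can be sketched as follows (this assumes the androidx.glance artifacts; the widget and receiver names are illustrative, and the receiver must also be declared in the manifest):

```kotlin
import android.content.Context
import androidx.glance.GlanceId
import androidx.glance.appwidget.GlanceAppWidget
import androidx.glance.appwidget.GlanceAppWidgetReceiver
import androidx.glance.appwidget.provideContent
import androidx.glance.text.Text

// The widget UI, declared with Glance composable building blocks.
class HelloWidget : GlanceAppWidget() {
    override suspend fun provideGlance(context: Context, id: GlanceId) {
        provideContent {
            Text(text = "Hello from Glance")
        }
    }
}

// The receiver, registered in AndroidManifest.xml, ties the widget to
// the system's AppWidget framework.
class HelloWidgetReceiver : GlanceAppWidgetReceiver() {
    override val glanceAppWidget: GlanceAppWidget = HelloWidget()
}
```

From here, the codelab linked above walks through making the layout adapt to the sizes users pick when resizing.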
Leverage widgets for increased app visibility, enhanced user engagement, and ultimately, higher conversions. By embracing widgets, you're not just optimizing for a specific OS update; you're aligning with a broader trend towards user-centric, glanceable experiences.
07 Apr 2025 7:00pm GMT
27 Mar 2025
Android Developers Blog
Media3 1.6.0 — what’s new?
Posted by Andrew Lewis - Software Engineer
This article is cross-published on Medium
Media3 1.6.0 is now available!
This release includes a host of bug fixes, performance improvements and new features. Read on to find out more, and as always please check out the full release notes for a comprehensive overview of changes in this release.
Playback, MediaSession and UI
ExoPlayer now supports HLS interstitials for ad insertion in HLS streams. To play these ads using ExoPlayer's built-in playlist support, pass an HlsInterstitialsAdsLoader.AdsMediaSourceFactory as the media source factory when creating the player. For more information see the official documentation.
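As a sketch, wiring the interstitials ads loader into player creation might look like this (the class names follow the Media3 1.6.0 API, but the exact constructor parameters may differ; `playerView` is assumed to be the app's AdViewProvider, so please consult the official documentation):

```kotlin
import android.content.Context
import androidx.media3.exoplayer.ExoPlayer
import androidx.media3.exoplayer.hls.HlsInterstitialsAdsLoader
import androidx.media3.ui.PlayerView

fun createPlayerWithHlsInterstitials(context: Context, playerView: PlayerView): ExoPlayer {
    val adsLoader = HlsInterstitialsAdsLoader()
    // Pass the ads-aware media source factory so interstitials in the
    // HLS playlist are scheduled automatically.
    val player =
        ExoPlayer.Builder(context)
            .setMediaSourceFactory(
                HlsInterstitialsAdsLoader.AdsMediaSourceFactory(adsLoader, playerView, context)
            )
            .build()
    // The loader needs a reference to the player that renders the ads.
    adsLoader.setPlayer(player)
    return player
}
```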
This release also includes experimental support for 'pre-warming' decoders. Without pre-warming, transitions from one playlist item to the next may not be seamless in some cases, for example, we may need to switch codecs, or decode some video frames to reach the start position of the new media item. With pre-warming enabled, a secondary video renderer can start decoding the new media item earlier, giving near-seamless transitions. You can try this feature out by enabling it on the DefaultRenderersFactory. We're actively working on further improvements to the way we interact with decoders, including adding a 'fast seeking mode' so stay tuned for updates in this area.
Media3 1.6.0 introduces a new media3-ui-compose module that contains functionality for building Compose UIs for playback. You can find a reference implementation in the Media3 Compose demo and learn more in Getting started with Compose-based UI. At this point we're providing a first set of foundational state classes that link to the Player, in addition to some basic composable building blocks. You can use these to build your own customized UI widgets. We plan to publish default Material-themed composables in a later release.
Some other improvements in this release include: moving system calls off the application's main thread to the background (which should reduce ANRs), a new decoder module wrapping libmpegh (for bundling object-based audio decoding in your app), and a fix for the Cast extension for apps targeting API 34+. There are also fixes across MPEG-TS and WebVTT extraction, DRM, downloading/caching, MediaSession and more.
Media extraction and frame retrieval
The new MediaExtractorCompat is a drop-in replacement for the framework MediaExtractor but implemented using Media3's extractors. If you're using the Android framework MediaExtractor, consider migrating to get consistent behavior across devices and reduce crashes.
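Because MediaExtractorCompat mirrors the framework API, migration is largely a type swap. A sketch, assuming a local file path (the package location and overload set may vary by release; error handling is omitted):

```kotlin
import android.content.Context
import androidx.media3.exoplayer.MediaExtractorCompat

fun inspectTracks(context: Context, filePath: String) {
    // Drop-in replacement for android.media.MediaExtractor, backed by
    // Media3's extractors for consistent cross-device behavior.
    val extractor = MediaExtractorCompat(context)
    extractor.setDataSource(filePath)
    for (i in 0 until extractor.trackCount) {
        val format = extractor.getTrackFormat(i)
        // Select the tracks your app needs, then read samples as before.
    }
    extractor.release()
}
```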
We've also added experimental support for retrieving video frames in a new class ExperimentalFrameExtractor, which can act as a replacement for the MediaMetadataRetriever getFrameAtTime methods. There are a few benefits over the framework implementation: HDR input is supported (by default tonemapping down to SDR, but with the option to produce HLG bitmaps from Android 14 onwards), Media3 effects can be applied (including Presentation to scale the output to a desired size) and it runs faster on some devices due to moving color space conversion to the GPU. Here's an example of using the new API:
val bitmap = withContext(Dispatchers.IO) {
    val configuration = ExperimentalFrameExtractor.Configuration
        .Builder()
        .setExtractHdrFrames(true)
        .build()
    val frameExtractor = ExperimentalFrameExtractor(
        context,
        configuration,
    )
    frameExtractor.setMediaItem(mediaItem, /* effects */ listOf())
    val frame = frameExtractor.getFrame(timestamps).await()
    frameExtractor.release()
    frame.bitmap
}
Editing, transcoding and export
Media3 1.6.0 includes performance, stability and functional improvements in Transformer. Highlights include: support for transcoding/transmuxing Dolby Vision streams on devices that support this format and a new MediaProjectionAssetLoader for recording from the screen, which you can try out in the Transformer demo app.
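A minimal export with Transformer can be sketched as follows (the input URI and output path are assumptions, and the Transformer.Listener that observes progress and completion is omitted for brevity):

```kotlin
import android.content.Context
import android.net.Uri
import androidx.media3.common.MediaItem
import androidx.media3.transformer.Transformer

fun exportMedia(context: Context, inputUri: Uri, outputFilePath: String) {
    // Build a Transformer with default settings; codecs and formats can
    // be configured on the builder as needed.
    val transformer = Transformer.Builder(context).build()
    // Starts an asynchronous export of the media item to the given file.
    transformer.start(MediaItem.fromUri(inputUri), outputFilePath)
}
```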
Check out Common media processing operations with Jetpack Media3 Transformer for some code snippets showing how to process media with Transformer, and tips to reduce latency.
This release also includes a new Kotlin-based demo app showcasing Media3's video effects framework. You can select from a variety of video effects and preview them via ExoPlayer.setVideoEffects.

Get started with Media3 1.6.0
Please get in touch via the Media3 issue tracker if you run into any bugs, or if you have questions or feature requests. We look forward to hearing from you!
27 Mar 2025 4:30pm GMT
25 Mar 2025
Android Developers Blog
Strengthening Our App Ecosystem: Enhanced Tools for Secure & Efficient Development
Posted by Suzanne Frey - VP, Product, Trust & Growth for Android & Play
Knowing that you're building on a safe, secure ecosystem is essential for any app developer. We continuously invest in protecting Android and Google Play, so millions of users around the world can trust the apps they download and you can build thriving businesses. And we're dedicated to continually improving our developer tools to make world-class security even easier to implement.
Together, we've made Google Play one of the safest and most secure platforms for developers and users. Our partnership over the past few years includes helping you:
- Safeguard your business from scams and fraud with enhanced tools
- Fix policy and compatibility issues earlier with pre-review checks
- Share helpful and transparent information on Google Play to build consumer trust
- And stay protected as we've proactively strengthened our threat-detection capabilities with Google's advanced AI to keep bad actors out of our ecosystem
Today, we're excited to share more about how we're making it easier than ever for developers to build safe apps, while also continuing to strengthen our ecosystem's protection in 2025 and beyond.
Making it easier for you to build safer apps from the start
Google Play's policies are a critical component of ensuring a safe experience for our shared users. Play Console pre-review checks are a great way to resolve certain policy and compatibility issues before you submit your app for review. We recently added the ability to check privacy policy links and login credential requirements, and we're launching even more pre-review checks this year to help you avoid common policy pitfalls.
To help you avoid policy complications before you submit apps for review, we've been notifying you earlier about certain policies relevant to your apps - starting right as you code in Android Studio. We currently notify developers through Android Studio about a few key policy areas, but this year we'll expand to a much wider range of policies.
Providing more policy support
Acting on your feedback, we've improved our policy experience to give you clearer updates, more time for substantial changes, more flexible requirements while still maintaining safety standards, and more helpful information with live Q&As. Soon, we'll be trying a new way of communicating with you in Play Console so you get information when you need it most. This year, we're investing in even more ways to get your feedback, help you understand our policies, navigate our Policy Center, and help you fix issues before app submission through new features in Console and Android Studio.
We're also expanding our popular Google Play Developer Help Community, which saw 2.7 million visits last year from developers looking to find answers to policy questions, share knowledge, and connect with fellow developers. This year, we're planning to expand the community to include more languages, such as Indonesian, Japanese, Korean, and Portuguese.
Protecting your business and users from scams and attacks
The Play Integrity API is an essential tool to help protect your business from abuse such as fraud, bots, cheating, and data theft. Developers are already using our new app access risk feature in Play Integrity API to make over 500M daily checks for potentially fraudulent or risky behavior. In fact, apps that use Play Integrity features to detect suspicious activity are seeing an 80% drop in unauthorized usage on average compared to other apps.

This year, we'll continue to enhance the Play Integrity API with stronger protection for even more users. We recently improved the technology that powers the API on all devices running Android 13 (API level 33) and above, making it faster, more reliable, and more private for users. We also launched enhanced security signals to help you decide how much you trust the environment your app is running in, which we'll automatically roll out to all developers who use the API in May. You can opt in now to start using the improved verdicts today.
We'll be adding new features later this year to help you deal with emerging threats, such as the ability to re-identify abusive and risky devices in a way that also preserves user privacy. We're also building more tools to help you guide users to fix issues, like if they need a security update or they're using a tampered version of your app.
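How an app acts on an integrity verdict is up to the developer. As an illustrative, testable sketch, a gating policy over the device recognition labels might look like this (the label strings follow the Play Integrity API's device recognition vocabulary, but the policy function itself is hypothetical):

```kotlin
// The Play Integrity device recognition verdict includes labels such as
// MEETS_DEVICE_INTEGRITY and MEETS_STRONG_INTEGRITY. This hypothetical
// policy decides whether to allow a sensitive action, optionally
// requiring the strong (hardware-backed) signal.
fun allowSensitiveAction(deviceVerdicts: Set<String>, requireStrong: Boolean): Boolean =
    if (requireStrong) {
        "MEETS_STRONG_INTEGRITY" in deviceVerdicts
    } else {
        "MEETS_DEVICE_INTEGRITY" in deviceVerdicts
    }

fun main() {
    // A device that only meets basic device integrity passes the relaxed
    // check but not the strict one.
    println(allowSensitiveAction(setOf("MEETS_DEVICE_INTEGRITY"), requireStrong = false))
    println(allowSensitiveAction(setOf("MEETS_DEVICE_INTEGRITY"), requireStrong = true))
}
```

In practice the verdict should be requested and decrypted server-side; the decision logic above would then run on the server rather than in the app.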
Providing additional validation for your app
For apps in select categories, we offer badges that provide an extra layer of validation and connect users with safe, high-quality, and useful experiences. Building on the work of last year's "Government" badge, which helps users identify official government apps, this year we introduced a "Verified" badge to help users discover VPN apps that take extra steps to demonstrate their commitment to security. We'll continue to expand on this and add badges to more app categories in the future.
Partnering to keep kids safe
Whether your app is specifically designed for kids or simply attracts their attention, there is an added responsibility to ensure a safe and trusted experience. We want to partner with you to keep kids and teens safe online, protect their privacy, and empower families. In addition to Google Play's Teacher Approved program, Families policies, and tools like the Restrict Declared Minors setting within the Google Play Console, we're building tools like the Credential Manager API, now in Beta for Digital IDs.
Strengthening the Android ecosystem
In addition to helping developers build stronger, safer apps on Google Play, we remain committed to protecting the broader Android ecosystem. Last year, our investments in stronger privacy policies, AI-powered threat detection and other security measures prevented 2.36 million policy-violating apps from being published on Google Play. By contrast, our most recent analysis found over 50 times more Android malware from Internet-sideloaded sources (like browsers and messaging apps) than on Google Play. This year we're working on ways to make it even harder for malicious actors to hide or trick users into harmful installs, which will not only protect your business from fraud but also help users download your apps with confidence.

Meanwhile, Google Play Protect is always evolving to combat new threats and protect users from harmful apps that can lead to scams and fraud. As this is a core part of user safety, we're doing more to keep users from being socially engineered by scammers into turning this off. First, Google Play Protect live threat detection is expanding its protection to target malicious applications that try to impersonate financial apps. And our enhanced financial fraud protection pilot has continued to expand after a successful launch in select countries where we saw malware-based financial fraud coming from Internet-sideloaded sources. We are planning to expand the pilot throughout this year to additional countries where we have seen higher levels of malware-based financial fraud.
We're even working with other leaders across the industry to protect all users, no matter what device they use or where they download their apps. As a founding member of the App Defense Alliance, we're working to establish and promote industry-wide security standards for mobile and web applications, as well as cloud configurations. Recently, the ADA launched Application Security Assessments (ASA) v1.0, which provides clear guidance to developers on protecting sensitive data and defending against cyber attacks to strengthen user trust.
What's next
Please keep the feedback coming! We appreciate knowing what can make our developers' experiences more efficient while ensuring we maintain the highest standards in app safety. Thank you for your continued partnership in making Android and Google Play a safe, thriving platform for everyone.
25 Mar 2025 5:00pm GMT
24 Mar 2025
Android Developers Blog
#WeArePlay | How Memory Lane Games helps people with dementia
Posted by Robbie McLachlan - Developer Marketing
In our latest #WeArePlay film, which celebrates the people behind apps and games, we meet Bruce - a co-founder of Memory Lane Games. His company turns cherished memories into simple, engaging quizzes for people with different types of dementia. Discover how Memory Lane Games blends nostalgia and technology to spark conversations and emotional connections.
What inspired the idea behind Memory Lane Games?
The idea for Memory Lane Games came about one day at the pub when Peter was telling me how his mum, even with vascular dementia, lights up when she looks at old family photos. It got me thinking about my own mum, who treasures old photos just as much. The idea hit us - why not turn those memories into games? We wanted to help people reconnect with their past and create moments where conversations could flow naturally.

Can you tell us of a memorable moment in the journey when you realized how powerful the game was?
We knew we were onto something meaningful when a caregiver in a memory cafe told us about a man who was pretty much non-verbal but would enjoy playing. He started humming along to one of our music trivia games, then suddenly said, "Roy Orbison is a way better singer than Elvis, but Elvis had a better manager." The caregiver was in tears; it was the first complete sentence he'd spoken in months. Moments like these remind us why we're doing this: it's not just about games, it's about unlocking moments of connection and joy that dementia often takes away.

One of the key features is having errorless fun with the games, why was that so important?
We strive for frustration-free design. With our games, there are no wrong answers, just gentle prompts to trigger memories and spark conversations about topics they are interested in. It's not about winning or losing; it's about rekindling connections and creating moments of happiness without any pressure or frustration. Dementia can make day-to-day tasks challenging, and the last thing anyone needs is a game that highlights what they might not remember or get right. Caregivers also like being able to redirect attention back to something familiar and fun when behaviour gets more challenging.
How has Google Play helped your journey?
What's been amazing is how Google Play has connected us with an incredibly active and engaged global community without any major marketing efforts on our part.
For instance, we got our first big traction in places like the Philippines and India-places we hadn't specifically targeted. Yet here we are, with thousands of downloads in more than 100 countries. That reach wouldn't have been possible without Google Play.

What is next for Memory Lane Games?
We're really excited about how we can use AI to take Memory Lane Games to the next level. Our goal is to use generative AI, like Google's Gemini, to create more personalized and localized game content. For example, instead of just focusing on general memories, we want to tailor the game to a specific village the player came from, or a TV show they used to watch, or even local landmarks from their family's hometown. AI will help us offer games that are deeply personal. Plus, with the power of AI, we can create games in multiple languages, tapping into new regions like Japan, Nigeria or Mexico.
Discover other inspiring app and game founders featured in #WeArePlay.
24 Mar 2025 7:00pm GMT
13 Mar 2025
Android Developers Blog
The Third Beta of Android 16
Posted by Matthew McCullough - VP of Product Management, Android Developer
Android 16 has officially reached Platform Stability today with Beta 3! That means the API surface is locked, the app-facing behaviors are final, and you can push your Android 16-targeted apps to the Play store right now. Read on for coverage of new security and accessibility features in Beta 3.
Android delivers enhancements and new features year-round, and your feedback on the Android beta program plays a key role in helping Android continuously improve. The Android 16 developer site has more information about the beta, including how to get it onto devices and the release timeline. We're looking forward to hearing what you think, and thank you in advance for your continued help in making Android a platform that benefits everyone.
New in Android 16 Beta 3
At this late stage in the development cycle, there are only a few new things in the Android 16 Beta 3 release for you to consider when developing your apps.

Broadcast audio support
Pixel 9 devices on Android 16 Beta now support Auracast broadcast audio with compatible LE Audio hearing aids, part of Android's work to enhance audio accessibility. Built on the LE Audio standard, Auracast enables compatible hearing aids and earbuds to receive direct audio streams from public venues like airports, concerts, and classrooms. Our Keyword post has more on this technology.
Outline text for maximum text contrast
Users with low vision often have reduced contrast sensitivity, making it challenging to distinguish objects from their backgrounds. To help these users, Android 16 Beta 3 introduces outline text, which replaces high contrast text: it draws a larger contrasting area around text to greatly improve legibility.
Android 16 also contains new AccessibilityManager APIs that allow your app to check whether this mode is enabled, or to register a listener for changes. This is primarily for UI toolkits like Compose to offer a similar visual experience. If you maintain a UI toolkit library, or your app performs custom text rendering that bypasses the android.text.Layout class, you can use these APIs to know when outline text is enabled.

Test your app with Local Network Protection
Android 16 Beta 3 adds the ability to test the Local Network Protection (LNP) feature which is planned for a future Android major release. It gives users more control over which apps can access devices on their local network.
What's Changing?
Currently, any app with the INTERNET permission can communicate with devices on the user's local network. LNP will eventually require apps to request a specific permission to access the local network.
Beta 3: Opt-In and Test
In Beta 3, LNP is an opt-in feature. This is your chance to test your app and identify any parts that rely on local network access. Use this adb command to enable LNP restrictions for your app:
adb shell am compat enable RESTRICT_LOCAL_NETWORK <your_package_name>
After rebooting your device, your app's local network access is restricted. Test features that might interact with local devices (e.g., device discovery, media casting, connecting to IoT devices). Expect to see socket errors like EPERM or ECONNABORTED if your app tries to access the local network without the necessary permission. See the developer guide for more information, including how to re-enable local network access.
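Since LNP failures surface as ordinary socket errors, the safest pattern is to treat local-network access as something that can fail and degrade gracefully. Here's a minimal plain-Kotlin sketch (the helper name and fallback behavior are illustrative, not an Android API) of probing a local device without crashing when access is denied:

```kotlin
import java.io.IOException
import java.net.InetSocketAddress
import java.net.Socket

// Hypothetical helper: returns true if a device on the local network is
// reachable, false otherwise. With LNP restrictions enabled, errors like
// EPERM or ECONNABORTED surface as IOException subclasses on the JVM, so
// catching IOException lets the app fall back to a degraded mode (e.g.
// hiding the "cast to device" button) instead of crashing.
fun canReachLocalDevice(host: String, port: Int, timeoutMs: Int = 250): Boolean {
    return try {
        Socket().use { socket ->
            socket.connect(InetSocketAddress(host, port), timeoutMs)
            true
        }
    } catch (e: IOException) {
        false
    }
}
```

The same pattern applies to discovery, casting, and IoT connection paths: catch the error at the boundary, surface a useful state to the user, and re-probe after the permission situation changes.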
This is a significant change, and we're committed to working with you to ensure a smooth transition. By testing and providing feedback now, you can help us build a more private and secure Android ecosystem.
Get your apps, libraries, tools, and game engines ready!
If you develop an SDK, library, tool, or game engine, it's even more important to prepare any necessary updates now to prevent your downstream app and game developers from being blocked by compatibility issues and allow them to target the latest SDK features. Please let your developers know if updates are needed to fully support Android 16.
To test, install your production app, or a test app that uses your library or engine, onto a device or emulator running Android 16 Beta 3 (via Google Play or other means). Work through all your app's flows and look for functional or UI issues. Review the behavior changes to focus your testing. Each release of Android contains platform changes that improve privacy, security, and overall user experience, and these changes can affect your apps. Here are several changes to focus on that apply, even if you don't yet target Android 16:
- JobScheduler: JobScheduler quotas are enforced more strictly in Android 16; enforcement will occur if a job executes while the app is on top, when a foreground service is running, or in the active standby bucket. setImportantWhileForeground is now a no-op. The new stop reason STOP_REASON_TIMEOUT_ABANDONED occurs when we detect that the app can no longer stop the job.
- Broadcasts: Ordered broadcasts using priorities only work within the same process. Use other IPC if you need cross-process ordering.
- ART: If you use reflection, JNI, or any other means to access Android internals, your app might break. This is never a best practice. Test thoroughly.
- Intents: Android 16 has stronger security against Intent redirection attacks. Test your Intent handling, and only opt-out of the protections if absolutely necessary.
- 16KB Page Size: If your app isn't 16KB-page-size ready, you can use the new compatibility mode flag, but we recommend migrating to 16KB for best performance.
- Accessibility: announceForAccessibility is deprecated; use the recommended alternatives.
- Bluetooth: Android 16 improves Bluetooth bond loss handling that impacts the way re-pairing occurs.
Other changes that will be impactful once your app targets Android 16:
- User Experience: Changes include the removal of edge-to-edge opt-out, requiring migration or opt-out for predictive back, and disabling elegant font APIs.
- Core Functionality: Optimizations have been made to fixed-rate work scheduling.
- Large Screen Devices: Orientation, resizability, and aspect ratio restrictions will be ignored. Ensure your layouts support all orientations across a variety of aspect ratios.
- Health and Fitness: Changes have been implemented for health and fitness permissions.
Remember to thoroughly exercise libraries and SDKs that your app is using during your compatibility testing. You may need to update to current SDK versions or reach out to the developer for help if you encounter any issues.
Once you've published the Android 16-compatible version of your app, you can start the process to update your app's targetSdkVersion. Review the behavior changes that apply when your app targets Android 16 and use the compatibility framework to help quickly detect issues.
Two Android API releases in 2025
This preview is for the next major release of Android with a planned launch in Q2 of 2025 and we plan to have another release with new developer APIs in Q4. This Q2 major release will be the only release in 2025 that includes behavior changes that could affect apps. The Q4 minor release will pick up feature updates, optimizations, and bug fixes; like our non-SDK quarterly releases, it will not include any intentional app-breaking behavior changes.

We'll continue to have quarterly Android releases. The Q1 and Q3 updates provide incremental updates to ensure continuous quality. We're putting additional energy into working with our device partners to bring the Q2 release to as many devices as possible.
There's no change to the target API level requirements and the associated dates for apps in Google Play; our plans are for one annual requirement each year, tied to the major API level.
Get started with Android 16
You can enroll any supported Pixel device to get this and future Android Beta updates over-the-air. If you don't have a Pixel device, you can use the 64-bit system images with the Android Emulator in Android Studio. If you are currently on Android 16 Beta 2 or are already in the Android Beta program, you will be offered an over-the-air update to Beta 3.
While the API and behaviors are final, we're still looking for your feedback so please report issues on the feedback page. The earlier we get your feedback, the better chance we'll be able to address it in this or a future release.
For the best development experience with Android 16, we recommend that you use the latest feature drop of Android Studio (Meerkat). Once you're set up, here are some of the things you should do:
- Compile against the new SDK, test in CI environments, and report any issues in our tracker on the feedback page.
- Test your current app for compatibility, learn whether your app is affected by changes in Android 16, and install your app onto a device or emulator running Android 16 and extensively test it.
We'll update the beta system images and SDK regularly throughout the Android 16 release cycle. Once you've installed a beta build, you'll automatically get future updates over-the-air for all later previews and Betas.
For complete information on Android 16 please visit the Android 16 developer site.
13 Mar 2025 6:00pm GMT
#TheAndroidShow: Multimodal for Gemini in Android Studio, news for gaming devs, the latest devices at MWC, XR and more!
Posted by Anirudh Dewani - Director, Android Developer Relations
We just dropped our Winter episode of #TheAndroidShow, on YouTube and on developer.android.com, and this time we were in Barcelona to give you the latest from Mobile World Congress and across the Android Developer world. We unveiled a big update to Gemini in Android Studio (multimodal support, so you can translate images to code) and we shared some news for games developers ahead of GDC later this month. Plus we unpacked the latest Android hardware devices from our partners coming out of Mobile World Congress and recapped all of the latest in Android XR. Let's dive in!
Multimodality image-to-code, now available for Gemini in Android Studio
At every stage of the development lifecycle, Gemini in Android Studio has become your AI-powered companion. Today, we took the wraps off a new feature: Gemini in Android Studio now supports multimodal image-to-code, which lets you attach images directly to your prompts! This unlocks a wealth of new possibilities that improve collaboration and design workflows. You can try out this new feature by downloading the latest canary, Android Studio Narwhal, and read more about multimodal image attachment - now available for Gemini in Android Studio.
Building excellent games with better graphics and performance
Ahead of next week's Games Developer Conference (GDC), we announced new developer tools that will help improve gameplay across the Android ecosystem. We're making Vulkan the official graphics API on Android, enabling you to build immersive visuals, and we're enhancing the Android Dynamic Performance Framework (ADPF) to help you deliver longer, more stable gameplay sessions. Learn more about how we're building excellent games with better graphics and performance.
A deep dive into Android XR
Since we unveiled Android XR in December, it's been exciting to see developers preparing their apps for the next generation of Android XR devices. In the latest episode of #TheAndroidShow we dove into this new form factor and spoke with a developer who has already been building. Developing for this new platform leverages your existing Android development skills and familiar tools like Android Studio, Kotlin, and Jetpack libraries. The Android XR SDK Developer Preview is available now, complete with an emulator, so you can start experimenting and building XR experiences immediately! Visit developer.android.com/xr for more.
New Android foldables and tablets, at Mobile World Congress
Mobile World Congress is a big moment for Android, with partners from around the world showing off their latest devices. And if you're already building adaptive apps, we wanted to share some of the cool new foldables and tablets that our partners released in Barcelona:
- OPPO: OPPO launched the Find N5, their slim 8.93mm foldable with an 8.12" large screen - making it as compact or expansive as needed.
- Xiaomi: Xiaomi debuted the Xiaomi Pad 7 series. Xiaomi Pad 7 provides a crystal-clear display and, with the productivity accessories, users get a desktop-like experience with the convenience of a tablet.
- Lenovo: Lenovo showcased their Yoga Tab Plus, the latest powerful tablet from their lineup designed to empower creativity and productivity.
These new devices are a great reason to build adaptive apps that scale across screen sizes and device types. Plus, Android 16 removes the ability for apps to restrict orientation and resizability at the platform level, so you'll want to prepare. To help you get started, the Compose Material 3 adaptive library enables you to quickly and easily create layouts across all screen sizes while reducing the overall development cost.
Watch the Winter episode of #TheAndroidShow
That's a wrap on this quarter's episode of #TheAndroidShow. A special thanks to our co-hosts for this episode, Simona Milanović and Alejandra Stamato! You can watch the full show on YouTube and on developer.android.com/events/show.
Have an idea for our next episode of #TheAndroidShow? It's your conversation with the broader community, and we'd love to hear your ideas for our next quarterly episode - you can let us know on X or LinkedIn.
13 Mar 2025 5:01pm GMT
Multimodal image attachment is now available for Gemini in Android Studio
Posted by Paris Hsu - Product Manager, Android Studio
At every stage of the development lifecycle, Gemini in Android Studio has become your AI-powered companion, making it easier to build high quality apps. We are excited to announce a significant expansion: Gemini in Android Studio now supports multimodal inputs, which lets you attach images directly to your prompts! This unlocks a wealth of new possibilities that improve team collaboration and UI development workflows.
You can try out this new feature by downloading the latest Android Studio canary. We've outlined a few use cases to try, but we'd love to hear what you think as we work through bringing this feature into future stable releases. Check it out:
Image attachment - a new dimension of interaction
We first previewed Gemini's multimodal capabilities at Google I/O 2024. This technology allows Gemini in Android Studio to understand simple wireframes, and transform them into working Jetpack Compose code.
You'll now find an image attachment icon in the Gemini chat window. Simply attach JPEG or PNG files to your prompts and watch Gemini understand and respond to visual information. We've observed that images with strong color contrasts yield the best results.


We encourage you to experiment with various prompts and images. Here are a few compelling use cases to get you started:
- Rapid UI prototyping and iteration: Convert a simple wireframe or high-fidelity mock of your app's UI into working code.
- Diagram explanation and documentation: Gain deeper insights into complex architecture or data flow diagrams by having Gemini explain their components and relationships.
- UI troubleshooting: Capture screenshots of UI bugs and ask Gemini for solutions.
Rapid UI prototyping and iteration
Gemini's multimodal support lets you convert visual designs into functional UI code. Simply upload your image and use a clear prompt. It works whether you're working from your own sketches or from a designer mockup.
Here's an example prompt: "For this image provided, write Android Jetpack Compose code to make a screen that's as close to this image as possible. Make sure to include imports, use Material3, and document the code." And then you can append any specific or additional instructions related to the image.


For more complex UIs, refine your prompts to capture specific functionality. For instance, when converting a calculator mockup, adding "make the interactions and calculations work as you'd expect" results in a fully functional calculator:


Note: this feature provides an initial design scaffold. It's a good "first draft" and your edits and adjustments will be needed. Common refinements include ensuring correct drawable imports and importing icons. Consider the generated code a highly efficient starting point, accelerating your UI development workflow.
Diagram explanation and documentation
With Gemini's multimodal capabilities, you can also try uploading an image of your diagram and ask for explanations or documentation.
Example prompt: Upload the Now in Android architecture diagram and say "Explain the components and data flow in this diagram" or "Write documentation about this diagram".

UI troubleshooting
Leverage Gemini's visual analysis to identify and resolve bugs quickly. Upload a screenshot of the problematic UI, and Gemini will analyze the image and suggest potential solutions. You can also include relevant code snippets for more precise assistance.
In the example below, we used Compose UI check and found that the button was stretched too wide on tablet screens, so we took a screenshot and asked Gemini for solutions - it was able to leverage window size classes to provide the right fix.

Download Android Studio today
Download the latest Android Studio canary today to try the new multimodal features!
As always, Google is committed to the responsible use of AI. Android Studio won't send any of your source code to servers without your consent. You can read more on Gemini in Android Studio's commitment to privacy.
We appreciate any feedback on things you like or features you would like to see. If you find a bug, please report the issue and also check out known issues. Remember to also follow us on X, Medium, or YouTube for more Android development updates!
13 Mar 2025 5:00pm GMT
Making Google Play the best place to grow PC games
Posted by Aurash Mahbod - VP and GM of Games on Google Play
We're stepping up our multiplatform gaming offering with exciting news dropping at this year's Game Developers Conference (GDC). We're bringing users more games, more ways to play your games across devices, and improved gameplay. You can read all about the updates for users from The Keyword. At GDC, we'll be diving into all of the latest games coming to Play, plus new developer tools that'll help improve gameplay across the Android ecosystem.
Today, we're sharing a closer look at what's new from Play. We're expanding our support for native PC games with a new earnback program and making Google Play Games on PC generally available this year with major upgrades. Check out the video or keep reading below.
Google Play connects developers with over 2 billion monthly active players1 worldwide. Our tools and features help you engage these players across a wide range of devices to drive engagement and revenue. But we know the gaming landscape is constantly evolving. More and more players enjoy the immersive experiences on PC and want the flexibility to play their favorite games on any screen.
That's why we're making even bigger investments in our PC gaming platform. Google Play Games on PC was launched to help mobile games reach more players on PC. Today, we're expanding this support to native PC games, enabling more developers to connect with our massive player base on mobile.
Expanding support for native PC games
For games that are designed with a PC-first audience in mind, we've added even more helpful tools to our native PC program. Games like Wuthering Waves, Remember of Majesty, Genshin Impact, and Journey of Monarch have seen great success on the platform. Based on feedback from early access partners, we're taking the program even further, with comprehensive support across game development, distribution, and growth on the platform.
- Develop with Play Games PC SDK: We're launching a dedicated SDK for native PC games on Google Play Games, providing powerful tools, such as easier in-app purchase integration and advanced security protection.
- Distribute through Play Console: We've made it easier for developers to manage both mobile and PC game builds in one place, simplifying the process of packaging PC versions, configuring releases, and managing store listings.
- Grow with our new earnback program: Bring your PC games to Google Play Games on PC to unlock up to 15% additional earnback.2
We're opening up the program for all native PC games - including PC-only games - this year. Learn more about the eligibility requirements and how to join the program.

Making PC an easy choice for mobile developers
Bringing your game to PC unlocks a whole new audience of engaged players. To help maximize your discoverability, we're making all mobile games available3 on PC by default with the option to opt out anytime.
Games will display a playability badge indicating their compatibility with PC. "Optimized" means that a game meets all of our quality standards for a great gaming experience while "playable" means that the game meets the minimum requirements to play well on a PC. With the support of our new custom control mappings, many games can be playable right out of the box. Learn more about the playability criteria and how to optimize your games for PC today.

To enhance our PC experience, we've made major upgrades to the platform. Now, gamers can enjoy the full Google Play Games on PC catalog on even more devices, including AMD laptops and desktops. We're partnering with PC OEMs to make Google Play Games accessible right from the start menu on new devices starting this year.
We're also bringing new features for players to customize their gaming experiences. Custom controls is now available to help tailor their setup for optimal comfort and performance. Rolling out this month, we're adding a handy game sidebar for quick adjustments and enabling multi-account and multi-instance support by popular demand.

Unlocking exclusive rewards on PC with Play Points
To help you boost engagement, we're also rolling out a more seamless Play Points4 experience on PC. Play Points balance is now easier to track and more rewarding, with up to 10x points boosters5 on Google Play Games. This means more opportunities for players to earn and redeem points for in-game items and discounts, enhancing the overall PC experience.

Bringing new PC UA tools powered by Google Ads
More developers are launching games on PC than ever, presenting an opportunity to reach a rapidly growing audience on PC. We want to make it easier for developers to reach great players with Google Ads. We're working on a solution to help developers run user acquisition campaigns for both mobile emulated and native PC titles within Google Play Games on PC. We're still in the early stages of partner testing, but we look forward to sharing more details later this year.
Join the celebration!
We're celebrating all that's to come to Google Play Games on PC with players and developers. Take a look at the behind-the-scenes from our social channels and editorial features on Google Play. At GDC, you can dive into the complete gaming experience that is available on the best Android gaming devices. If you'll be there, please stop by and say hello - we're at the Moscone Center West Hall!
13 Mar 2025 3:31pm GMT
Building excellent games with better graphics and performance
Posted by Matthew McCullough - VP of Product Management, Android
We're stepping up our multiplatform gaming offering with exciting news dropping at this year's Game Developers Conference (GDC). We're bringing users more games, more ways to play your games across devices, and improved gameplay. You can read all about the updates for users from The Keyword. At GDC, we'll be diving into all of the latest games coming to Play, plus new developer tools that'll help improve gameplay across the Android ecosystem.
Today, we're sharing a closer look at what's new from Android. We're making Vulkan the official graphics API on Android, enabling you to build immersive visuals, and we're enhancing the Android Dynamic Performance Framework (ADPF) to help you deliver longer, more stable gameplay sessions. Check out the video or keep reading below.
More immersive visuals built on Vulkan, now the official graphics API
These days, games require more processing power for realistic graphics and cutting-edge visuals. Vulkan is an API used for low level graphics that helps developers maximize the performance of modern GPUs, and today we're making it the official graphics API for Android. This unlocks advanced features like ray tracing and multithreading for realistic and immersive gaming visuals. For example, Diablo Immortal used Vulkan to implement ray tracing, bringing the world of Sanctuary to life with spectacular special effects, from fiery explosions to icy blasts.

For casual games like Pokémon TCG Pocket, which draws players into the vibrant world of each Pokémon, Vulkan helps optimize graphics across a broad range of devices to ensure a smooth and engaging experience for every player.

We're excited to announce that Android is transitioning to a modern, unified rendering stack with Vulkan at its core. Starting with our next Android release, more devices will use Vulkan to process all graphics commands. If your game is running on OpenGL, it will use ANGLE as a system driver that translates OpenGL to Vulkan. We recommend testing your game on ANGLE today to ensure it's ready for the Vulkan transition.
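To try your OpenGL game on ANGLE today, recent Android builds expose global settings that route a single package through the ANGLE driver. These settings keys come from the AOSP ANGLE documentation; availability and exact key names can vary by device and release, so verify on your test hardware:

```shell
# Route one package's OpenGL ES calls through ANGLE (translated to Vulkan).
adb shell settings put global angle_gl_driver_selection_pkgs com.example.yourgame
adb shell settings put global angle_gl_driver_selection_values angle

# Revert the package to the device's default GLES driver when done.
adb shell settings delete global angle_gl_driver_selection_pkgs
adb shell settings delete global angle_gl_driver_selection_values
```

Run through your game's rendering-heavy scenes with ANGLE enabled and watch for visual differences or performance regressions before the platform-wide transition.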
We're also partnering with major game engines to make Vulkan integration easier. With Unity 6, you can configure Vulkan per device while older versions can access this setting through plugins. Over 45% of sessions from new games on Unity* use Vulkan, and we expect this number to grow rapidly.
To simplify workflows further, we're teaming up with the Samsung Austin Research Center to create an integrated GPU profiler toolchain for Vulkan and AI/ML optimization. Coming later this year, this tool will enable developers to make graphics, memory and compute workloads more efficient.
Longer and smoother gameplay sessions with ADPF
Android Dynamic Performance Framework (ADPF) lets developers adjust a game's performance in real time based on the device's thermal state, and it's getting a big update today to provide longer and smoother gameplay sessions. ADPF is designed to work across a wide range of devices including models like the Pixel 9 family and the Samsung S25 Series. We're excited to see MMORPGs like Lineage W integrating ADPF to optimize performance on their core target devices.

Here's how we're enhancing ADPF with better performance and simplified integration:
- Stronger performance: Our collaboration with MediaTek, a leading chip supplier for Android devices, has brought enhanced stability to ADPF. Devices powered by MediaTek's MAGT system-on-chip solution can now fully utilize ADPF's performance optimization capabilities.
- Easier integration: Major game engines now offer built-in ADPF support with simple interfaces and default configurations. For advanced controls, developers can customize the ADPF behavior in real time.
Performance optimization with more features in Play Console
Once you've launched your game, Play Console offers the tools to monitor and improve your game's performance. We're newly including low memory kill (LMK) events in Android vitals, giving you insight into memory constraints that can cause your game to crash. Android vitals is your one-stop destination for monitoring metrics that impact your visibility on the Play Store, like slow sessions. You can find this information next to reach and devices, which provides updates on your game's user distribution and notifies you of device-specific issues.

Bringing PC games to mobile, and pushing the boundaries of gaming
We're launching a pilot program to simplify the process of bringing PC games to mobile. It provides support starting from Android game development all the way through publishing your game on Play. Starting this month, games like DREDGE and TABS Mobile are growing their mobile audience using this program. Many more are following in their footsteps this year, including Disco Elysium. You can express your interest to join the PC to mobile program.

You can learn more about Android game development from our developer site. We can't wait to see your title join the ranks of these amazing games built for Android. And if you'll be at GDC next week, we'd love to say hello - stop by at the Moscone Center West Hall!
13 Mar 2025 3:30pm GMT
12 Mar 2025
Android Developers Blog
Jetpack WindowManager 1.4 is stable
Posted by Xiaodao Wu - Developer Relations Engineer
Jetpack WindowManager keeps getting better. WindowManager gives you tools to build adaptive apps that work seamlessly across all kinds of large screen devices. Version 1.4, which is stable now, introduces new features that make multi-window experiences even more powerful and flexible. While Jetpack Compose is still the best way to create app layouts for different screen sizes, 1.4 makes some big improvements to activity embedding, including activity stack pinning, pane expansion, and dialog full-screen dim. Multi-activity apps can easily take advantage of all these great features.
What's new in WindowManager 1.4
WindowManager 1.4 introduces a range of enhancements. Here are some of the highlights.
WindowSizeClass
We've updated the WindowSizeClass API to support custom values. We changed the API shape to make it easy and extensible to support custom values and add new values in the future. The high level changes are as follows:
- Opened the constructor to take in minWidthDp and minHeightDp parameters so you can create your own window size classes
- Added convenience methods for checking breakpoint validity
- Deprecated WindowWidthSizeClass and WindowHeightSizeClass in favor of WindowSizeClass#isWidthAtLeastBreakpoint() and WindowSizeClass#isHeightAtLeastBreakpoint() respectively
Here's a migration example:
// Old API
val sizeClass = WindowSizeClass.compute(widthDp, heightDp)
when (sizeClass.widthSizeClass) {
    COMPACT -> doCompact()
    MEDIUM -> doMedium()
    EXPANDED -> doExpanded()
    else -> doDefault()
}

// New API
val sizeClass = WindowSizeClass.BREAKPOINTS_V1
    .computeWindowSizeClass(widthDp, heightDp)
when {
    sizeClass.isWidthAtLeastBreakpoint(WIDTH_DP_EXPANDED_LOWER_BOUND) -> {
        doExpanded()
    }
    sizeClass.isWidthAtLeastBreakpoint(WIDTH_DP_MEDIUM_LOWER_BOUND) -> {
        doMedium()
    }
    else -> {
        doCompact()
    }
}
Some things to note in the new API:
- The order of the when branches should go from largest to smallest to support custom values from developers or new values in the future
- The default branch should be treated as the smallest window size class
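To see why the branch order matters, here's a plain-Kotlin sketch of the at-least-breakpoint pattern with a hypothetical custom 1200dp breakpoint added (the constant values and function names here are illustrative stand-ins, not the androidx API):

```kotlin
// Illustrative stand-ins for breakpoint constants; 600 and 840 mirror the
// common medium/expanded width breakpoints, 1200 is a hypothetical custom value.
const val WIDTH_DP_MEDIUM_LOWER_BOUND = 600
const val WIDTH_DP_EXPANDED_LOWER_BOUND = 840
const val WIDTH_DP_CUSTOM_LARGE_LOWER_BOUND = 1200

fun isWidthAtLeastBreakpoint(widthDp: Int, breakpointDp: Int): Boolean =
    widthDp >= breakpointDp

// Branches must go from largest to smallest: an "at least" check matches
// every smaller breakpoint too, so a smaller branch listed first would
// shadow the larger ones. The else branch is the smallest (compact) bucket.
fun widthBucket(widthDp: Int): String = when {
    isWidthAtLeastBreakpoint(widthDp, WIDTH_DP_CUSTOM_LARGE_LOWER_BOUND) -> "custom-large"
    isWidthAtLeastBreakpoint(widthDp, WIDTH_DP_EXPANDED_LOWER_BOUND) -> "expanded"
    isWidthAtLeastBreakpoint(widthDp, WIDTH_DP_MEDIUM_LOWER_BOUND) -> "medium"
    else -> "compact"
}
```

Reversing the branch order would make every width of 600dp or more report "medium", which is exactly the bug the largest-to-smallest rule prevents.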
Activity embedding
Activity stack pinning
Activity stack pinning provides a way to keep an activity stack always on screen, no matter what else is happening in your app. This new feature lets you pin an activity stack to a specific window, so the top activity stays visible even when the user navigates to other parts of the app in a different window. This is perfect for things like live chats or video players that you want to keep on screen while users explore other content.
private fun pinActivityStackExample(taskId: Int) {
    val splitAttributes: SplitAttributes = SplitAttributes.Builder()
        .setSplitType(SplitAttributes.SplitType.ratio(0.66f))
        .setLayoutDirection(SplitAttributes.LayoutDirection.LEFT_TO_RIGHT)
        .build()

    val pinSplitRule = SplitPinRule.Builder()
        .setDefaultSplitAttributes(splitAttributes)
        .build()

    SplitController.getInstance(applicationContext)
        .pinTopActivityStack(taskId, pinSplitRule)
}
Pane expansion
The new pane expansion feature, also known as interactive divider, lets you create a visual separation between two activities in split-screen mode. You can make the pane divider draggable so users can resize the panes - and the activities in the panes - on the fly. This gives users control over how they want to view the app's content.
val splitAttributesBuilder: SplitAttributes.Builder = SplitAttributes.Builder()
    .setSplitType(SplitAttributes.SplitType.ratio(0.33f))
    .setLayoutDirection(SplitAttributes.LayoutDirection.LEFT_TO_RIGHT)

if (WindowSdkExtensions.getInstance().extensionVersion >= 6) {
    splitAttributesBuilder.setDividerAttributes(
        DividerAttributes.DraggableDividerAttributes.Builder()
            .setColor(getColor(context, R.color.divider_color))
            .setWidthDp(4)
            .setDragRange(DividerAttributes.DragRange.DRAG_RANGE_SYSTEM_DEFAULT)
            .build()
    )
}
val splitAttributes: SplitAttributes = splitAttributesBuilder.build()
Dialog full-screen dim
WindowManager 1.4 gives you more control over how dialogs dim the background. With dialog full-screen dim, you can choose to dim just the container where the dialog appears or the entire task window for a unified UI experience. The entire app window dims by default when a dialog opens (see EmbeddingConfiguration.DimAreaBehavior.ON_TASK). To dim only the container of the activity that opened the dialog, use EmbeddingConfiguration.DimAreaBehavior.ON_ACTIVITY_STACK. This gives you more flexibility in designing dialogs and makes for a smoother, more coherent user experience. Temu is among the first developers to integrate this feature: the full-screen dialog dim has reduced invalid screen touches by about 5%.
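As a hedged sketch of opting into activity-stack-only dimming (the configuration entry point shown here is assumed from the WindowManager 1.4 API surface; verify it against the version you ship):

```kotlin
import android.content.Context
import androidx.window.embedding.ActivityEmbeddingController
import androidx.window.embedding.EmbeddingConfiguration

// Sketch: applies DimAreaBehavior.ON_ACTIVITY_STACK so only the container
// of the activity that opened the dialog is dimmed, instead of the whole
// task window (the ON_TASK default).
fun applyActivityStackDim(context: Context) {
    val config = EmbeddingConfiguration.Builder()
        .setDimAreaBehavior(EmbeddingConfiguration.DimAreaBehavior.ON_ACTIVITY_STACK)
        .build()
    ActivityEmbeddingController.getInstance(context)
        .setEmbeddingConfiguration(config)
}
```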

Enhanced posture support
WindowManager 1.4 makes building apps that work flawlessly on foldables straightforward by providing more information about the physical capabilities of the device. The new WindowInfoTracker#supportedPostures API lets you know if a device supports tabletop mode, so you can optimize your app's layout and features accordingly.
val currentSdkVersion = WindowSdkExtensions.getInstance().extensionVersion
val message = if (currentSdkVersion >= 6) {
    val supportedPostures = WindowInfoTracker.getOrCreate(LocalContext.current).supportedPostures
    buildString {
        append(supportedPostures.isNotEmpty())
        if (supportedPostures.isNotEmpty()) {
            append(" ")
            append(
                supportedPostures.joinToString(
                    separator = ",", prefix = "(", postfix = ")"))
        }
    }
} else {
    "N/A (WindowSDK version 6 is needed, current version is $currentSdkVersion)"
}
Other API changes
WindowManager 1.4 includes several API changes and additions to support the new features. Notable changes include:
- Stable and no longer experimental APIs:
  - ActivityEmbeddingController#invalidateVisibleActivityStacks
  - ActivityEmbeddingController#getActivityStack
  - SplitController#updateSplitAttributes
- API added to set activity embedding animation background:
  - SplitAttributes.Builder#setAnimationParams
- API to get updated WindowMetrics information:
  - ActivityEmbeddingController#embeddedActivityWindowInfo
- API to finish all activities in an activity stack:
  - ActivityEmbeddingController#finishActivityStack
How to get started
To start using Jetpack WindowManager 1.4 in your Android projects, update your app dependencies in build.gradle.kts to the latest stable version:
dependencies {
    implementation("androidx.window:window:1.4.0-rc01")
    ...
    // or, if you're using the WindowManager testing library:
    testImplementation("androidx.window:window-testing:1.4.0-rc01")
}
Happy coding!
12 Mar 2025 9:00pm GMT
Unlock Deeper Health Insights: Health Connect Jetpack SDK is now in beta and new feature updates
Posted by Brenda Shaw - Health & Home Partner Engineering Technical Writer
At Google, we are committed to empowering developers as they build exceptional health and fitness experiences. Core to that commitment is Health Connect, an Android platform that allows health and fitness apps to store and share the same on-device data. Devices running Android 14, as well as earlier devices with the pre-installed APK, automatically have Health Connect available by default in Settings. For other pre-Android 14 devices, Health Connect is available for download from the Play Store.
We're excited to announce significant Health Connect updates like the Jetpack SDK Beta, new datatypes and new permissions that will enable richer, more insightful app functionalities.
Jetpack SDK is now in Beta
We are excited to announce the beta release of our Jetpack SDK! Since its initial release, we've dedicated significant effort to improving data completeness, with a particular focus on enriching the metadata associated with each data point.
In the latest SDK, we're introducing two key changes designed to ensure richer metadata and unlock new possibilities for you and your users:
Make Recording Method Mandatory
To deliver more accurate and insightful data, the Beta introduces a requirement to specify one of four recording methods when writing data to Health Connect. This ensures increased data clarity, enhanced data analysis and improved user experience:
If your app currently does not set metadata when creating a record:
Before
StepsRecord(
    count = 888,
    startTime = START_TIME,
    endTime = END_TIME,
) // error: metadata is not provided
After
StepsRecord(
count = 888,
startTime = START_TIME,
endTime = END_TIME,
metadata = Metadata.manualEntry()
)
If your app currently calls Metadata constructor when creating a record:
Before
StepsRecord(
    count = 888,
    startTime = START_TIME,
    endTime = END_TIME,
    metadata = Metadata(
        clientRecordId = "client id",
        recordingMethod = RECORDING_METHOD_MANUAL_ENTRY,
    ), // error: Metadata constructor not found
)
After
StepsRecord(
    count = 888,
    startTime = START_TIME,
    endTime = END_TIME,
    metadata = Metadata.manualEntry(clientRecordId = "client id"),
)
Make Device Type Mandatory
You will also be required to specify the device type when creating a Device object. A Device object is required for automatically recorded (RECORDING_METHOD_AUTOMATICALLY_RECORDED) or actively recorded (RECORDING_METHOD_ACTIVELY_RECORDED) data.
Before
Device() // error: type not provided
After
Device(type = Device.Companion.TYPE_PHONE)
We believe these updates will significantly improve the quality of data within your applications and empower you to create more insightful user experiences. We encourage you to explore the Jetpack SDK Beta, review the updated Metadata page, and familiarize yourself with these changes.
New background reads permission
To enable richer, background-driven health and fitness experiences while maintaining user trust, Health Connect now features a dedicated background reads permission.
This permission allows your app to access Health Connect data while running in the background, provided the user grants explicit consent. Users retain full control, with the ability to manage or revoke this permission at any time via Health Connect settings.
Let your app read health data even in the background with the new Background Reads permission. Declare the following permission in your manifest file:
<application>
<uses-permission android:name="android.permission.health.READ_HEALTH_DATA_IN_BACKGROUND" />
...
</application>
Use the Feature Availability API to check if the user has the background read feature available, according to the version of Health Connect they have on their devices.
Allow your app to read historic data
By default, when granted read permission, your app can access historical data from other apps for the preceding 30 days from the initial permission grant. To enable access to data beyond this 30-day window, Health Connect introduces the PERMISSION_READ_HEALTH_DATA_HISTORY permission. This allows your app to provide new users with a comprehensive overview of their health and wellness history.
Users are in control of their data with both background reads and history reads. Both capabilities require developers to declare the respective permissions, and users must grant the permission before developers can access their data. Even after granting permission, users have the option of revoking access at any time from Health Connect settings.
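The default 30-day window comes down to simple timestamp arithmetic. Here is a minimal Kotlin sketch of that rule; the function name and structure are illustrative, not part of the Health Connect API, which enforces this check on the platform side:

```kotlin
import java.time.Duration
import java.time.Instant

// Default window: records from the 30 days preceding the initial permission
// grant are readable; older records require PERMISSION_READ_HEALTH_DATA_HISTORY.
val DEFAULT_HISTORY_WINDOW: Duration = Duration.ofDays(30)

fun isReadableByDefault(recordTime: Instant, permissionGrantTime: Instant): Boolean {
    val windowStart = permissionGrantTime.minus(DEFAULT_HISTORY_WINDOW)
    return !recordTime.isBefore(windowStart)
}

fun main() {
    val grant = Instant.parse("2025-03-12T00:00:00Z")
    // 10 days before the grant: inside the default window.
    println(isReadableByDefault(Instant.parse("2025-03-02T00:00:00Z"), grant)) // true
    // 45 days before the grant: needs the history-read permission.
    println(isReadableByDefault(Instant.parse("2025-01-26T00:00:00Z"), grant)) // false
}
```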
Additional data access and types
Health Connect now offers expanded data types, enabling developers to build richer user experiences and provide deeper insights. Check out the following new data types:
- Exercise Routes allows users to share exercise routes with other apps for a seamless, synchronized workout. When users share all routes or a single route, the associated exercise activities and workout maps are synced with the fitness apps of their choice.

- The skin temperature data type measures peripheral body temperature, unlocking insights around sleep quality, reproductive health, and the potential onset of illness.
- Health Connect also provides a planned exercise data type to enable training apps to write training plans and workout apps to read training plans. Recorded exercises (workouts) can be read back for personalized performance analysis to help users achieve their training goals. Access granular workout data, including sessions, blocks, and steps, for comprehensive performance analysis and personalized feedback.
These new data types empower developers to create more connected and insightful health and fitness applications, providing users with a holistic view of their well-being.
To learn more about all new APIs and bug fixes, check out the full release notes.
Get started with the Health Connect Jetpack SDK
Whether you are just getting started with Health Connect or are looking to implement the latest features, there are many ways to learn more and have your voice heard.
- Subscribe to our newsletter: Stay up-to-date with the latest news, announcements, and resources from Google Health and Fitness. Subscribe to our Health and Fitness Google Developer Newsletter and get the latest updates delivered straight to your inbox.
- Check out our Health Connect developer guide: The Health and Fitness Developer Center is your one-stop-shop for building health and fitness apps on Android - including a robust guide for getting started with Health Connect.
- Report an issue: Encountered a bug or technical issue? Report it directly to our team through the Issue Tracker so we can investigate and resolve it. You can also request a feature or provide feedback with Issue Tracker.
We can't wait to see what you create!
12 Mar 2025 4:00pm GMT
06 Mar 2025
Android Developers Blog
Widgets on lock screen: FAQ
Posted by Tyler Beneke - Product Manager, and Lucas Silva - Software Engineer
Widgets are now available on your Pixel Tablet lock screens! Lock screen widgets empower users to create a personalized, always-on experience. Whether you want to easily manage smart home devices like lights and thermostats, or build dashboards for quick access and control of vital information, this blog post will answer your key questions about lock screen widgets on Android. Read on to discover when, where, how, and why they'll be on a lock screen near you.

Q: When will lock screen widgets be available?
A: Lock screen widgets will be available in AOSP for tablets and mobile starting with the release after Android 16 (QPR1). This update is scheduled to be pushed to AOSP in late Summer 2025. Lock screen widgets are already available on Pixel Tablets.
Q: Are there any specific requirements for widgets to be allowed on the lock screen?
A: No, widgets allowed on the lock screen have the same requirements as any other widgets. Widgets on the lock screen should follow the same quality guidelines as home screen widgets, including quality, sizing, and configuration. If a widget launches an activity from the lock screen, users must authenticate to launch it, or the activity should declare android:showWhenLocked="true" in its manifest entry.
Q: How can I test my widget on the lock screen?
A: Currently, lock screen widgets can be tested on Pixel Tablet devices. You can enable lock screen widgets and add your widget.
Q: Which widgets can be displayed in this experience?
A: All widgets are compatible with the lock screen widget experience. To prioritize user choice and customization, we've made all widgets available. For the best experience, please make sure your widget supports dynamic color and dynamic resizing. Lock screen widgets are sized to approximately 4 cells wide by 3 cells tall on the launcher, but exact dimensions vary by device.
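To reason about what "4 cells wide by 3 cells tall" means in dp, you can use the classic App Widget sizing heuristic of roughly 70n − 30 dp for n cells. This is an assumption carried over from home screen widget guidance, not a documented lock screen constant, and actual dimensions vary by device:

```kotlin
// Rough cell-to-dp heuristic from classic App Widget design guidance
// (size ≈ 70 * n - 30 dp for n cells). Treat the results as estimates
// only; lock screen dimensions vary by device and launcher.
fun cellsToDp(cells: Int): Int = 70 * cells - 30

fun main() {
    // Approximate size of the ~4x3 lock screen widget area.
    println("${cellsToDp(4)}dp x ${cellsToDp(3)}dp") // 250dp x 180dp
}
```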
Q: Can my widget opt-out of the experience?
A: Yes. Apps can choose to restrict the use of their widgets on the lock screen using an opt-out API. To opt out, use the widget category "not_keyguard" in your appwidget info XML file. Place this file in an xml-36 resource folder to ensure backwards compatibility.
Q: Are there any CDD requirements specifically for lock screen widgets?
A: No, there are no specific CDD requirements solely for lock screen widgets. However, it's crucial to ensure that any widgets and screensavers that integrate with the framework adhere to the standard CDD requirements for those features.
Q: Will lock screen widgets be enabled on existing devices?
A: Yes, lock screen widgets were launched on the Pixel Tablet in 2024. Other device manufacturers may update their devices as well once the feature is available in AOSP.
Q: Does the device need to be docked to use lock screen widgets?
A: The mechanism that triggers the lock screen widget experience is customizable by the OEM. For example, OEMs can choose to use charging or docking status as triggers. Third-party OEMs will need to implement their own posture detection if desired.
Q: Can OEMs set their own default widgets?
A: Yes! Hardware providers can pre-set and automatically display default widgets.
Q: Can OEMs customize the user interface for lock screen widgets?
A: Customization of the lock screen widget user interface by OEMs is not supported in the initial release. All lock screen widgets will have the same developer experience on all devices.
Lock screen widgets are poised to give your users new ways to interact with your app on their devices. Today you can leverage your existing widget designs and experiences on the lock screen with Pixel Tablets. To learn more about building widgets, please check out our resources on developer.android.com
- Widgets: https://developer.android.com/develop/ui/views/appwidgets/overview
- Widget Design: https://developer.android.com/design/ui/widget
- Jetpack Glance: https://developer.android.com/develop/ui/compose/glance
This blog post is part of our series: Spotlight Week on Widgets, where we provide resources (blog posts, videos, sample code, and more) designed to help you design and create widgets. You can read more in the overview of Spotlight Week: Widgets, which will be updated throughout the week.
06 Mar 2025 5:00pm GMT
05 Mar 2025
Android Developers Blog
Common media processing operations with Jetpack Media3 Transformer
Posted by Nevin Mital - Developer Relations Engineer, and Kristina Simakova - Engineering Manager
Android users have demonstrated an increasing desire to create, personalize, and share video content online, whether to preserve their memories or to make people laugh. As such, media editing is a cornerstone of many engaging Android apps, and historically developers have often relied on external libraries to handle operations such as trimming and resizing. While these solutions are powerful, integrating and managing external library dependencies can introduce complexity and lead to challenges with managing performance and quality.
The Jetpack Media3 Transformer APIs offer a native Android solution that streamlines media editing with fast performance, extensive customizability, and broad device compatibility. In this blog post, we'll walk through some of the most common editing operations with Transformer and discuss its performance.
Getting set up with Transformer
To get started with Transformer, check out our Getting Started documentation for details on how to add the dependency to your project and a basic understanding of the workflow when using Transformer. In a nutshell, you'll:
- Create one or many MediaItem instances from your video file(s), then
- Apply item-specific edits to them by building an EditedMediaItem for each MediaItem,
- Create a Transformer instance configured with settings applicable to the whole exported video,
- and finally start the export to save your applied edits to a file.
Aside: You can also use a CompositionPlayer to preview your edits before exporting, but this is out of scope for this blog post, as this API is still a work in progress. Please stay tuned for a future post!
Here's what this looks like in code:
val mediaItem = MediaItem.Builder().setUri(mediaItemUri).build()
val editedMediaItem = EditedMediaItem.Builder(mediaItem).build()
val transformer = Transformer.Builder(context)
    .addListener(/* Add a Transformer.Listener instance here for completion events */)
    .build()

transformer.start(editedMediaItem, outputFilePath)
Transcoding, Trimming, Muting, and Resizing with the Transformer API
Let's now take a look at four of the most common single-asset media editing operations, starting with Transcoding.
Transcoding is the process of re-encoding an input file into a specified output format. For this example, we'll request the output to have video in HEVC (H265) and audio in AAC. Starting with the code above, here are the lines that change:
val transformer =
Transformer.Builder(context)
.addListener(...)
.setVideoMimeType(MimeTypes.VIDEO_H265)
.setAudioMimeType(MimeTypes.AUDIO_AAC)
.build()
Many of you may already be familiar with FFmpeg, a popular open-source library for processing media files, so we'll also include FFmpeg commands for each example to serve as a helpful reference. Here's how you can perform the same transcoding with FFmpeg:
$ ffmpeg -i $inputVideoPath -c:v libx265 -c:a aac $outputFilePath
The next operation we'll try is Trimming.
Specifically, we'll set Transformer up to trim the input video from the 3 second mark to the 8 second mark, resulting in a 5 second output video. Starting again from the code in the "Getting set up" section above, here are the lines that change:
// Configure the trim operation by adding a ClippingConfiguration to
// the media item
val clippingConfiguration = MediaItem.ClippingConfiguration.Builder()
    .setStartPositionMs(3000)
    .setEndPositionMs(8000)
    .build()

val mediaItem = MediaItem.Builder()
    .setUri(mediaItemUri)
    .setClippingConfiguration(clippingConfiguration)
    .build()

// Transformer also has a trim optimization feature we can enable.
// This will prioritize Transmuxing over Transcoding where possible.
// See more about Transmuxing further down in this post.
val transformer = Transformer.Builder(context)
    .addListener(...)
    .experimentalSetTrimOptimizationEnabled(true)
    .build()
With FFmpeg:
$ ffmpeg -ss 00:00:03 -i $inputVideoPath -t 00:00:05 $outputFilePath
Next, we can mute the audio in the exported video file.
val editedMediaItem = EditedMediaItem.Builder(mediaItem)
    .setRemoveAudio(true)
    .build()
The corresponding FFmpeg command:
$ ffmpeg -i $inputVideoPath -c copy -an $outputFilePath
And for our final example, we'll try resizing the input video by scaling it down to half its original height and width.
val scaleEffect = ScaleAndRotateTransformation.Builder()
    .setScale(0.5f, 0.5f)
    .build()

val editedMediaItem = EditedMediaItem.Builder(mediaItem)
    .setEffects(
        Effects(/* audio */ emptyList(), /* video */ listOf(scaleEffect))
    )
    .build()
An FFmpeg command could look like this:
$ ffmpeg -i $inputVideoPath -filter:v "scale=w=trunc(iw/4)*2:h=trunc(ih/4)*2" $outputFilePath
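The trunc(iw/4)*2 expression in the FFmpeg filter halves each dimension while rounding down to an even number, since video encoders such as H.264/H.265 generally require even width and height. The same arithmetic in Kotlin, as an illustration of the filter's behavior:

```kotlin
// Halve a dimension, rounding down to the nearest even value --
// the integer equivalent of FFmpeg's trunc(x/4)*2.
fun halfEven(dimension: Int): Int = (dimension / 4) * 2

fun main() {
    println(halfEven(1280)) // 640
    println(halfEven(718))  // 358 (not 359, which would be odd)
}
```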
Of course, you can also combine these operations to apply multiple edits on the same video, but hopefully these examples serve to demonstrate that the Transformer APIs make configuring these edits simple.
Transformer API Performance results
Here are some benchmarking measurements for each of the four operations, taken with the Stopwatch API running on a Pixel 9 Pro XL device:
(Note that performance for operations like these can depend on a variety of factors, such as the current load on the device, so the numbers below should be taken as rough estimates.)
Input video format: 10s 720p H264 video with AAC audio
- Transcoding to H265 video and AAC audio: ~1300ms
- Trimming video to 00:03-00:08: ~2300ms
- Muting audio: ~200ms
- Resizing video to half height and width: ~1200ms
Input video format: 25s 360p VP8 video with Vorbis audio
- Transcoding to H265 video and AAC audio: ~3400ms
- Trimming video to 00:03-00:08: ~1700ms
- Muting audio: ~1600ms
- Resizing video to half height and width: ~4800ms
Input video format: 4s 8k H265 video with AAC audio
- Transcoding to H265 video and AAC audio: ~2300ms
- Trimming video to 00:03-00:08: ~1800ms
- Muting audio: ~2000ms
- Resizing video to half height and width: ~3700ms
One technique Transformer uses to speed up editing operations is to prioritize transmuxing for basic video edits where possible. Transmuxing refers to the process of repackaging video streams without re-encoding, which ensures high-quality output and significantly faster processing times.
When not possible, Transformer falls back to transcoding, a process that involves first decoding video samples into raw data, then re-encoding them for storage in a new container. Here are some of these differences:
Transmuxing
- Transformer's preferred approach when possible: a quick transformation that preserves elementary streams.
- Only applicable to basic operations, such as rotating, trimming, or container conversion.
- No quality loss or bitrate change.

Transcoding
- Transformer's fallback approach when transmuxing isn't possible: involves decoding and re-encoding elementary streams.
- More extensive modifications to the input video are possible.
- Loss in quality due to re-encoding, but can achieve a desired bitrate target.

We are continuously implementing further optimizations, such as the recently introduced experimentalSetTrimOptimizationEnabled setting that we used in the Trimming example above.
A trim is usually performed by re-encoding all the samples in the file, but since encoded media samples are stored chronologically in their container, we can improve efficiency by only re-encoding the group of pictures (GOP) between the start point of the trim and the first keyframes at/after the start point, then stream-copying the rest.
Since we only decode and encode a fixed portion of any file, the encoding latency is roughly constant, regardless of what the input video duration is. For long videos, this improved latency is dramatic. The optimization relies on being able to stitch part of the input file with newly-encoded output, which means that the encoder's output format and the input format must be compatible.
If the optimization fails, Transformer automatically falls back to normal export.
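The partitioning logic described above can be sketched with plain sample bookkeeping. This is an illustrative Kotlin model (the names and types are invented for this sketch, not Transformer's actual internals): samples between the trim start and the first keyframe at or after it must be re-encoded, while everything from that keyframe on can be stream-copied.

```kotlin
// Illustrative model of the trim optimization: given keyframe timestamps
// (in ms) and a trim start, find the first keyframe at/after the start.
// Samples in [trimStartMs, streamCopyFromMs) are re-encoded; samples from
// streamCopyFromMs onward are stream-copied without quality loss.
data class TrimPlan(val reencodeUntilMs: Long, val streamCopyFromMs: Long)

fun planTrim(keyframesMs: List<Long>, trimStartMs: Long): TrimPlan {
    val firstKeyframeAtOrAfter = keyframesMs.first { it >= trimStartMs }
    return TrimPlan(
        reencodeUntilMs = firstKeyframeAtOrAfter,
        streamCopyFromMs = firstKeyframeAtOrAfter,
    )
}

fun main() {
    // Keyframes every 2 seconds; trim starts at the 3-second mark.
    val plan = planTrim(keyframesMs = listOf(0L, 2000L, 4000L, 6000L), trimStartMs = 3000L)
    // Only the 3000..4000 ms stretch needs re-encoding, regardless of
    // how long the rest of the video is -- hence the roughly constant latency.
    println(plan)
}
```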
What's next?
As part of Media3, Transformer is a native solution with low integration complexity, is tested on a wide variety of devices to ensure compatibility, and is customizable to fit your specific needs.
To dive deeper, you can explore Media3 Transformer documentation, run our sample apps, or learn how to complement your media editing pipeline with Jetpack Media3. We've already seen app developers benefit greatly from adopting Transformer, so we encourage you to try it out yourself to streamline your media editing workflows and enhance your app's performance!
05 Mar 2025 8:00pm GMT
04 Mar 2025
Android Developers Blog
Generate Stunning Visuals in Your Android Apps with Imagen 3 via Vertex AI in Firebase
Posted by Thomas Ezan Sr. - Android Developer Relation Engineer (@lethargicpanda)
Imagen 3, our most advanced image generation model, is now available through Vertex AI in Firebase, making it even easier to integrate into your Android apps.
Designed to generate well-composed images with exceptional details, reduced artifacts, and rich lighting, Imagen 3 represents a significant leap forward in image generation capabilities.


Imagen 3 unlocks exciting new possibilities for Android developers. Generated visuals can adapt to the content of your app, creating a more engaging user experience. For instance, your users can generate custom artwork to enhance their in-app profile. Imagen can also improve your app's storytelling by bringing its narratives to life with delightful personalized illustrations.
You can experiment with image prompts in Vertex AI Studio, and learn how to improve your prompts by reviewing the prompt and image attribute guide.
Get started with Imagen 3
The integration of Imagen 3 is similar to adding Gemini access via Vertex AI in Firebase. Start by adding the gradle dependencies to your Android project:
dependencies {
    implementation(platform("com.google.firebase:firebase-bom:33.10.0"))
    implementation("com.google.firebase:firebase-vertexai")
}
Then, in your Kotlin code, create an ImageModel instance by passing the model name and optionally, a model configuration and safety settings:
val imageModel = Firebase.vertexAI.imagenModel(
    modelName = "imagen-3.0-generate-001",
    generationConfig = ImagenGenerationConfig(
        imageFormat = ImagenImageFormat.jpeg(compressionQuality = 75),
        addWatermark = true,
        numberOfImages = 1,
        aspectRatio = ImagenAspectRatio.SQUARE_1x1
    ),
    safetySettings = ImagenSafetySettings(
        safetyFilterLevel = ImagenSafetyFilterLevel.BLOCK_LOW_AND_ABOVE,
        personFilterLevel = ImagenPersonFilterLevel.ALLOW_ADULT
    )
)
Finally, generate the image by calling generateImages:
val imageResponse = imageModel.generateImages( prompt = "An astronaut riding a horse" )
Retrieve the generated image from the imageResponse and display it as a bitmap as follows:
val image = imageResponse.images.first()
val uiImage = image.asBitmap()
Next steps
Explore the comprehensive Firebase documentation for detailed API information.
Access to Imagen 3 using Vertex AI in Firebase is currently in Public Preview, giving you an early opportunity to experiment and innovate. For pricing details, please refer to the Vertex AI in Firebase pricing page.
Start experimenting with Imagen 3 today! We're looking forward to seeing how you'll leverage Imagen 3's capabilities to create truly unique, immersive and personalized Android experiences.
04 Mar 2025 10:00pm GMT