18 Feb 2025
TalkAndroid
Amusement Arcade Toaplan Brings Retro Gaming to Mobile
Relive arcade nostalgia with Toaplan classics, now pocket-sized on the Google Play Store. Customize your virtual game room and play them now.
18 Feb 2025 7:23pm GMT
Board Kings Free Rolls – Updated Every Day!
Run out of rolls for Board Kings? Find links for free rolls right here, updated daily!
18 Feb 2025 4:56pm GMT
Coin Tales Free Spins – Updated Every Day!
Tired of running out of Coin Tales Free Spins? We update our links daily, so you won't have that problem again!
18 Feb 2025 4:54pm GMT
Avatar World Codes – February 2025 – Updated Daily
Find all the latest Avatar World Codes right here in this article! Read on for more!
18 Feb 2025 4:53pm GMT
Coin Master Free Spins & Coins Links
Find all the latest Coin Master free spins right here! We update daily, so be sure to check back often!
18 Feb 2025 4:52pm GMT
Monopoly Go Events Schedule Today – Updated Daily
Currently active events are the Pawfect Match Event and the Teddy Snatch Tournament. Special event: Peg-E Prize Drop.
18 Feb 2025 4:50pm GMT
Monopoly Go – Free Dice Links Today (Updated Daily)
If you keep on running out of dice, we have just the solution! Find all the latest Monopoly Go free dice links right here!
18 Feb 2025 4:43pm GMT
Family Island Free Energy Links (Updated Daily)
Tired of running out of energy on Family Island? We have all the latest Family Island Free Energy links right here, and we update these daily!
18 Feb 2025 4:40pm GMT
Crazy Fox Free Spins & Coins (Updated Daily)
If you need free coins and spins in Crazy Fox, look no further! We update our links daily to bring you the newest working links!
18 Feb 2025 4:38pm GMT
Match Masters Free Gifts, Coins, And Boosters (Updated Daily)
Tired of running out of boosters for Match Masters? Find new Match Masters free gifts, coins, and booster links right here! Updated Daily!
18 Feb 2025 4:16pm GMT
Solitaire Grand Harvest – Free Coins (Updated Daily)
Get Solitaire Grand Harvest free coins now, new links added daily. Only tested and working links, complete with a guide on how to redeem the links.
18 Feb 2025 4:13pm GMT
Dice Dreams Free Rolls – Updated Daily
Get the latest Dice Dreams free rolls links, updated daily! Complete with a guide on how to redeem the links.
18 Feb 2025 4:12pm GMT
Truly Unlimited: Mint Mobile Unlimited Plan Loses Its Data Cap
Networks will say they have unlimited data, but then limit it when you use too much. Good thing that doesn't apply to Mint anymore.
18 Feb 2025 4:00pm GMT
The Vivo V50 Is Official, With Zeiss-Fueled Portrait Power
The Vivo V50 comes with a pretty sensible spec sheet, but the OEM wants you to focus on those Zeiss lenses.
18 Feb 2025 2:02pm GMT
Spotify’s Hi-Res “Music Pro” Plan Could Finally Come This Year
This could be the year Spotify actually gives us higher-quality audio. It won't be the first time it's coming "soon".
18 Feb 2025 12:17pm GMT
Tarisland Free Redeem Codes (February 2025)
Find all the latest Tarisland free redeem codes right here! Use these codes to gain awesome in-game freebies!
18 Feb 2025 6:51am GMT
13 Feb 2025
Android Developers Blog
The Second Beta of Android 16
Posted by Matthew McCullough - VP of Product Management, Android Developer
Today we're releasing the second beta of Android 16, continuing our work to build a platform that enables creative expression. You can enroll any supported Pixel device to get this and future Android Beta updates over-the-air.
This build adds new support for professional camera experiences and graphical effects, extends our performance framework, and continues the evolution of features related to privacy, security, and background tasks. We're looking forward to hearing what you think, and thank you in advance for your continued help in making Android a platform that works for everyone.
Media and camera updates
Android 16 enhances support for professional camera users, allowing for hybrid auto exposure along with precise color temperature and tint adjustments. It's easier than ever to capture motion photos with new Intent actions, and we're continuing to improve UltraHDR images, with support for HEIC encoding and new parameters from the ISO 21496-1 draft standard.
Hybrid auto-exposure
Android 16 adds new hybrid auto-exposure modes to Camera2, allowing you to manually control specific aspects of exposure while letting the auto-exposure (AE) algorithm handle the rest. You can control ISO + AE, and exposure time + AE, providing greater flexibility compared to the current approach where you either have full manual control or rely entirely on auto-exposure.
fun setISOPriority() {
    // ...
    val availablePriorityModes = mStaticInfo.characteristics.get(
        CameraCharacteristics.CONTROL_AE_AVAILABLE_PRIORITY_MODES
    )
    // ...

    // Turn on AE mode to set priority mode
    reqBuilder[CaptureRequest.CONTROL_AE_MODE] = CameraMetadata.CONTROL_AE_MODE_ON
    reqBuilder[CaptureRequest.CONTROL_AE_PRIORITY_MODE] =
        CameraMetadata.CONTROL_AE_PRIORITY_MODE_SENSOR_SENSITIVITY_PRIORITY
    reqBuilder[CaptureRequest.SENSOR_SENSITIVITY] = TEST_SENSITIVITY_VALUE
    val request: CaptureRequest = reqBuilder.build()
    // ...
}
Precise color temperature and tint adjustments
Android 16 adds camera support for fine color temperature and tint adjustments to better support professional video recording applications. White balance settings are currently controlled through CONTROL_AWB_MODE, which contains options limited to a preset list, such as Incandescent, Cloudy, and Twilight. The new COLOR_CORRECTION_MODE_CCT enables the use of COLOR_CORRECTION_COLOR_TEMPERATURE and COLOR_CORRECTION_COLOR_TINT for precise adjustments of white balance based on the correlated color temperature.
fun setCCT() {
    // ... (Your existing code before this point) ...
    val colorTemperatureRange: Range<Int> =
        mStaticInfo.characteristics[CameraCharacteristics.COLOR_CORRECTION_COLOR_TEMPERATURE_RANGE]

    // Set to manual mode to enable CCT mode
    reqBuilder[CaptureRequest.CONTROL_AWB_MODE] = CameraMetadata.CONTROL_AWB_MODE_OFF
    reqBuilder[CaptureRequest.COLOR_CORRECTION_MODE] = CameraMetadata.COLOR_CORRECTION_MODE_CCT
    reqBuilder[CaptureRequest.COLOR_CORRECTION_COLOR_TEMPERATURE] = 5000
    reqBuilder[CaptureRequest.COLOR_CORRECTION_COLOR_TINT] = 30

    val request: CaptureRequest = reqBuilder.build()
    // ... (Your existing code after this point) ...
}

Motion photo capture intent actions
Android 16 adds standard Intent actions, ACTION_MOTION_PHOTO_CAPTURE and ACTION_MOTION_PHOTO_CAPTURE_SECURE, which request that the camera application capture a motion photo and return it.

You must either pass an extra EXTRA_OUTPUT to control where the image is written, or a Uri through Intent setClipData; if you don't set a ClipData, it will be copied there for you when you call Context.startActivity.
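As a hedged sketch of launching the new action (assuming the constant lives on MediaStore alongside its siblings like ACTION_IMAGE_CAPTURE, and with outputUri as a hypothetical writable content Uri your app owns):

import android.app.Activity
import android.content.Intent
import android.net.Uri
import android.provider.MediaStore

fun captureMotionPhoto(activity: Activity, outputUri: Uri) {
    val intent = Intent(MediaStore.ACTION_MOTION_PHOTO_CAPTURE).apply {
        // Tell the camera app where to write the motion photo.
        putExtra(MediaStore.EXTRA_OUTPUT, outputUri)
        addFlags(Intent.FLAG_GRANT_READ_URI_PERMISSION or Intent.FLAG_GRANT_WRITE_URI_PERMISSION)
    }
    // In practice, launch via an ActivityResult API so you're told when the
    // motion photo has been written.
    activity.startActivity(intent)
}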
UltraHDR image enhancements
Android 16 continues our work to deliver dazzling image quality with UltraHDR images. It adds support for UltraHDR images in the HEIC file format. These images will get ImageFormat type HEIC_ULTRAHDR and will contain an embedded gainmap similar to the existing UltraHDR JPEG format. We're working on AVIF support for UltraHDR as well, so stay tuned.
In addition, Android 16 implements additional parameters in UltraHDR from the ISO 21496-1 draft standard, including the ability to get and set the colorspace that gainmap math should be applied in, as well as support for HDR encoded base images with SDR gainmaps.
Custom graphical effects with AGSL
Android 16 adds RuntimeColorFilter and RuntimeXfermode, allowing you to author complex effects like Threshold, Sepia, and Hue Saturation and apply them to draw calls. Since Android 13, you've been able to use AGSL to create custom RuntimeShaders that extend Shaders. The new API mirrors this, adding an AGSL-powered RuntimeColorFilter that extends ColorFilters, and an Xfermode effect that allows you to implement AGSL-based custom compositing and blending between source and destination pixels.
private val thresholdEffectString = """
    uniform half threshold;
    half4 main(half4 c) {
        half luminosity = dot(c.rgb, half3(0.2126, 0.7152, 0.0722));
        half bw = step(threshold, luminosity);
        return bw.xxx1 * c.a;
    }"""

fun setCustomColorFilter(paint: Paint) {
    val filter = RuntimeColorFilter(thresholdEffectString)
    // Set the named AGSL uniform declared above.
    filter.setFloatUniform("threshold", 0.5f)
    paint.colorFilter = filter
}
Behavior changes
With every Android release, we seek to make the platform more efficient, privacy conscious, internationalization friendly, and robust, balancing the needs of apps against hardware support, system performance, user privacy, and battery life. This can result in behavior changes that impact compatibility.
Edge to edge opt-out going away
Android 15 enforced edge-to-edge for apps targeting Android 15 (SDK 35), but your app could opt out by setting R.attr#windowOptOutEdgeToEdgeEnforcement to true. Once your app targets Android 16 (Baklava), R.attr#windowOptOutEdgeToEdgeEnforcement is deprecated and disabled, and your app cannot opt out of going edge-to-edge. To be compatible with Android 16 Beta 2, ensure your app supports edge-to-edge and remove any use of R.attr#windowOptOutEdgeToEdgeEnforcement. To support edge-to-edge, see the Compose and Views guidance. Please let us know about concerns in our tracker on the feedback page.
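If your app hasn't adopted edge-to-edge yet, a minimal sketch using the AndroidX Activity helper (assuming androidx.activity 1.8 or later) looks like this:

import android.os.Bundle
import androidx.activity.ComponentActivity
import androidx.activity.enableEdgeToEdge

class MainActivity : ComponentActivity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        // Draw behind the status and navigation bars; your UI should then
        // consume WindowInsets so content isn't obscured by the system bars.
        enableEdgeToEdge()
        super.onCreate(savedInstanceState)
        // setContent { ... } or setContentView(...)
    }
}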
Health and fitness permissions
For apps targeting Android 16 or higher, BODY_SENSORS permissions are transitioning to the granular permissions under android.permission.health, also used by Health Connect. Any API previously requiring BODY_SENSORS or BODY_SENSORS_BACKGROUND will now require the corresponding android.permission.health permission. This affects the following data types, APIs, and foreground service types:
- HEART_RATE_BPM from Wear Health Services
- Sensor.TYPE_HEART_RATE from Android Sensor Manager
- heartRateAccuracy and heartRateBpm from Wear ProtoLayout
- FOREGROUND_SERVICE_TYPE_HEALTH where the respective android.permission.health permission is needed in place of BODY_SENSORS
If your app uses these APIs, it should now request the respective granular permissions:
- For while-in-use monitoring of Heart Rate, SpO2, or Skin Temperature, request the granular permission under android.permission.health, such as READ_HEART_RATE, instead of BODY_SENSORS.
- For background sensor access, request READ_HEALTH_DATA_IN_BACKGROUND instead of BODY_SENSORS_BACKGROUND.
These permissions are the same as those that guard access to reading data from Health Connect, the Android datastore for health, fitness, and wellness data.
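A minimal sketch of the runtime request, assuming the granular permission string "android.permission.health.READ_HEART_RATE" (per the list above) and a hypothetical startHeartRateMonitoring helper:

import android.Manifest
import android.os.Build
import android.os.Bundle
import androidx.activity.ComponentActivity
import androidx.activity.result.contract.ActivityResultContracts

class HeartRateActivity : ComponentActivity() {

    private val requestHeartRate =
        registerForActivityResult(ActivityResultContracts.RequestPermission()) { granted ->
            if (granted) startHeartRateMonitoring()
        }

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        // On Android 16+ request the granular health permission; fall back to
        // BODY_SENSORS on older releases.
        val permission =
            if (Build.VERSION.SDK_INT > Build.VERSION_CODES.VANILLA_ICE_CREAM) {
                "android.permission.health.READ_HEART_RATE"
            } else {
                Manifest.permission.BODY_SENSORS
            }
        requestHeartRate.launch(permission)
    }

    private fun startHeartRateMonitoring() { /* begin sensor registration */ }
}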
Abandoned empty jobs stop reason
An abandoned job occurs when the JobParameters object associated with the job has been garbage collected, but jobFinished has not been called to signal job completion. This indicates that the job may be running and being rescheduled without the application's awareness.
Applications in Android 16 that rely on JobScheduler without maintaining a strong reference to the JobParameters object will now be granted the new job stop reason STOP_REASON_TIMEOUT_ABANDONED on timeout, instead of STOP_REASON_TIMEOUT.
If there are frequent occurrences of the new abandoned stop reason, the system will take mitigation steps to reduce job frequency. Please use the new stop reason to detect and reduce abandoned jobs.
Note: If you're using WorkManager, you're not expected to be impacted by this change - one nice side effect of using Android Jetpack to schedule your work.
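If you use JobScheduler directly, the pattern the new stop reason nudges you toward is to keep a strong reference to the JobParameters and always signal completion. A sketch (doWorkAsync is a stand-in for your own background work):

import android.app.job.JobParameters
import android.app.job.JobService

class UploadJobService : JobService() {

    // Strong reference prevents the JobParameters from being garbage collected
    // while the work is still running.
    private var params: JobParameters? = null

    override fun onStartJob(params: JobParameters): Boolean {
        this.params = params
        doWorkAsync { succeeded ->
            // Always call jobFinished so the job is never considered abandoned.
            jobFinished(params, /* wantsReschedule = */ !succeeded)
            this.params = null
        }
        return true // work continues on a background thread
    }

    override fun onStopJob(params: JobParameters): Boolean = true

    private fun doWorkAsync(onDone: (Boolean) -> Unit) {
        Thread {
            val succeeded = runCatching { /* upload work */ }.isSuccess
            onDone(succeeded)
        }.start()
    }
}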
Intent redirect changes
Android 16 introduces default security hardening against Intent redirection attacks regardless of your app's targetSDK version. The removeLaunchSecurityProtection API allows you to opt out of this protection if your testing reveals issues.
Note: Opting out of security protections should be done with caution and only when absolutely necessary, as it can increase the risk of security vulnerabilities.
val iSublevel = intent.getParcelableExtra("sub_intent", Intent::class.java)
iSublevel?.let {
    it.removeLaunchSecurityProtection()
    startActivity(it)
}
Elegant font APIs deprecated and disabled
Apps targeting Android 15 (API level 35) have the elegantTextHeight TextView attribute set to true by default, replacing the compact font with one that is much more readable. You could override this by setting the elegantTextHeight attribute to false.
Android 16 deprecates the elegantTextHeight attribute, and the attribute will be ignored once your app targets Android 16. The "UI fonts" controlled by these APIs are being discontinued, so you should adapt any layouts to ensure consistent and future-proof text rendering in Arabic, Lao, Myanmar, Tamil, Gujarati, Kannada, Malayalam, Odia, Telugu, or Thai.
16 KB page size compatibility mode
Android 15 introduced support for 16KB memory pages to optimize performance of the platform. Android 16 adds a compatibility mode, allowing some apps built for 4KB memory pages to run on a device configured for 16KB memory pages.
If Android detects that your app has 4KB aligned memory pages, it will automatically use compatibility mode and display a notification dialog to the user. Setting the android:pageSizeCompat property in AndroidManifest.xml to enable the backwards compatibility mode will prevent the dialog from being displayed when your app launches. For best performance, reliability, and stability, your app should still be 16KB aligned. Read our recent blog post about updating your apps to support 16KB memory pages for more details.

Measurement system customization
Users can now customize their measurement system in regional preferences within Settings. The user preference is included as part of the locale code, so you can register a BroadcastReceiver on ACTION_LOCALE_CHANGED to handle locale configuration changes when regional preferences change.
Using formatters can help match the local experience. For example, "0.5 in" in English (United States) becomes "12,7 mm" for a user who has set their phone to English (Denmark), or who uses their phone in English (United States) with the metric system as the measurement system preference.
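A minimal sketch of listening for that broadcast so you can re-run your formatters when the preference changes:

import android.content.BroadcastReceiver
import android.content.Context
import android.content.Intent
import android.content.IntentFilter

fun listenForLocaleChanges(context: Context) {
    val localeReceiver = object : BroadcastReceiver() {
        override fun onReceive(context: Context, intent: Intent) {
            // The measurement-system preference travels with the locale; re-run
            // your locale-sensitive formatters here so "0.5 in" can become "12,7 mm".
        }
    }
    // ACTION_LOCALE_CHANGED is a system broadcast, so a context-registered
    // receiver like this needs no extra permissions.
    context.registerReceiver(localeReceiver, IntentFilter(Intent.ACTION_LOCALE_CHANGED))
}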
To find these settings in Android 16 Beta 2, open the Settings app and navigate to System > Languages & region.
Content handling for live wallpapers
In Android 16, the live wallpaper framework is gaining a new content API to address the challenges of dynamic, user-driven wallpapers. Currently, live wallpapers incorporating user-provided content require complex, service-specific implementations. Android 16 introduces WallpaperDescription and WallpaperInstance. WallpaperDescription allows you to identify distinct instances of a live wallpaper from the same service. For example, a wallpaper that has instances on both the home screen and on the lock screen may have unique content in both places. The wallpaper picker and WallpaperManager use this metadata to better present wallpapers to users, streamlining the process for you to create diverse and personalized live wallpaper experiences.
Headroom APIs in ADPF
The SystemHealthManager introduces the getCpuHeadroom and getGpuHeadroom APIs, designed to provide games and resource-intensive apps with estimates of available CPU and GPU resources. These methods offer a way for you to gauge how your app or game can best improve system health, particularly when used in conjunction with other Android Dynamic Performance Framework (ADPF) APIs that detect thermal throttling. By using CpuHeadroomParams and GpuHeadroomParams on supported devices, you will be able to customize the time window used to compute the headroom and select between average or minimum resource availability. This can help you reduce your CPU or GPU resource usage accordingly, leading to better user experiences and improved battery life.
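The call shape looks roughly like the sketch below; note that the CpuHeadroomParams builder methods and constant names shown here are assumptions based on the Beta 2 description and may differ in the final API:

import android.content.Context
import android.os.health.SystemHealthManager

fun checkCpuHeadroom(context: Context) {
    val health = context.getSystemService(SystemHealthManager::class.java)

    // Ask for average CPU headroom over a one-second window
    // (builder method and constant names are assumptions).
    val params = CpuHeadroomParams.Builder()
        .setCalculationType(CpuHeadroomParams.CPU_HEADROOM_CALCULATION_TYPE_AVERAGE)
        .setCalculationWindowMillis(1_000)
        .build()

    val headroom = health.getCpuHeadroom(params)
    // A small headroom value means little CPU is left: consider reducing worker
    // threads or rendering quality before thermal throttling kicks in.
}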
Key sharing API
Android 16 adds APIs that support sharing access to Android Keystore keys with other apps. The new KeyStoreManager class supports granting and revoking access to keys by app uid, and includes an API for apps to access shared keys.
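A hypothetical sketch of the grant flow; the method names below (getInstance, grantKeyAccess, getGrantedKeyFromId) are assumptions based on the description above, not confirmed API:

// In the app that owns the key: grant access to another app by uid.
// All names/signatures here are assumptions for illustration.
fun shareKeyWith(otherAppUid: Int): Long {
    val keyStoreManager = KeyStoreManager.getInstance()
    val grantId: Long = keyStoreManager.grantKeyAccess("my_shared_key", otherAppUid)
    // Hand grantId to the other app over your own IPC channel.
    return grantId
}

// In the receiving app: resolve the shared key from the grant id.
fun resolveSharedKey(grantId: Long) {
    val sharedEntry = KeyStoreManager.getInstance().getGrantedKeyFromId(grantId)
    // Use the entry's key for signing/encryption as usual.
}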
Standardized picture and audio quality framework for TVs
The new MediaQuality package in Android 16 exposes a set of standardized APIs for access to audio and picture profiles and hardware-related settings. This allows streaming apps to query profiles and apply them to media dynamically:
- Movies mastered with a wider dynamic range require greater color accuracy to see subtle details in shadows and adjust to ambient light, so a profile that prefers color accuracy over brightness may be appropriate.
- Live sporting events are often mastered with a narrow dynamic range, but are often watched in daylight, so a profile that gives preference to brightness over color accuracy can give better results.
- Fully interactive content wants minimal processing to reduce latency, and wants higher frame rates, which is why many TVs ship with a game profile.
The API allows apps to switch between profiles and users to enjoy the benefits of tuning supported TVs to best suit their content.
Accessibility
Android 16 adds additional APIs to enhance UI semantics that help improve consistency for users who rely on accessibility services, such as TalkBack.
Duration added to TtsSpan
Android 16 extends TtsSpan with a TYPE_DURATION, consisting of ARG_HOURS, ARG_MINUTES, and ARG_SECONDS. This allows you to directly annotate time duration, ensuring accurate and consistent text-to-speech output with services like TalkBack.
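For example, a label like "1h 30m" can be annotated so TalkBack reads it as a duration. A minimal sketch using the new type (TYPE_DURATION and ARG_SECONDS are the new constants named above; ARG_HOURS and ARG_MINUTES already exist on TtsSpan):

import android.os.PersistableBundle
import android.text.SpannableString
import android.text.Spanned
import android.text.style.TtsSpan

val text = SpannableString("1h 30m")
val args = PersistableBundle().apply {
    putInt(TtsSpan.ARG_HOURS, 1)
    putInt(TtsSpan.ARG_MINUTES, 30)
}
// Mark the whole string as a duration for text-to-speech services.
text.setSpan(
    TtsSpan(TtsSpan.TYPE_DURATION, args),
    0, text.length,
    Spanned.SPAN_EXCLUSIVE_EXCLUSIVE
)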
Support elements with multiple labels
Android currently allows UI elements to derive their accessibility label from another element, and now offers the ability to associate multiple labels with a single element, a common scenario in web content. By introducing a list-based API within AccessibilityNodeInfo, Android can directly support these multi-label relationships. As part of this change, we've deprecated AccessibilityNodeInfo setLabeledBy and getLabeledBy in favor of addLabeledBy, removeLabeledBy, and getLabeledByList.
Improved support for expandable elements
Android 16 adds accessibility APIs that allow you to convey the expanded or collapsed state of interactive elements, such as menus and expandable lists. By setting the expanded state using setExpandedState and dispatching TYPE_WINDOW_CONTENT_CHANGED AccessibilityEvents with a CONTENT_CHANGE_TYPE_EXPANDED content change type, you can ensure that screen readers like TalkBack announce state changes, providing a more intuitive and inclusive user experience.
Indeterminate ProgressBars
Android 16 adds RANGE_TYPE_INDETERMINATE, giving a way for you to expose RangeInfo for both determinate and indeterminate ProgressBar widgets, allowing services like TalkBack to more consistently provide feedback for progress indicators.
Tri-state CheckBox
The new AccessibilityNodeInfo getChecked and setChecked(int) methods in Android 16 now support a "partially checked" state in addition to "checked" and "unchecked." This replaces the deprecated boolean isChecked and setChecked(boolean).
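A sketch of exposing the partial state from a custom view; the CHECKED_STATE_PARTIAL constant name is an assumption based on the Beta 2 description:

import android.content.Context
import android.view.View
import android.view.accessibility.AccessibilityNodeInfo

class TriStateCheckBoxView(context: Context) : View(context) {
    override fun onInitializeAccessibilityNodeInfo(info: AccessibilityNodeInfo) {
        super.onInitializeAccessibilityNodeInfo(info)
        // Replaces the deprecated boolean setChecked with the int-based state.
        info.setChecked(AccessibilityNodeInfo.CHECKED_STATE_PARTIAL)
    }
}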
Two Android API releases in 2025
This preview is for the next major release of Android with a planned launch in Q2 of 2025 and we plan to have another release with new developer APIs in Q4. The Q2 major release will be the only release in 2025 to include behavior changes that could affect apps. The Q4 minor release will pick up feature updates, optimizations, and bug fixes; like our non-SDK quarterly releases, it will not include any intentional app-impacting behavior changes.
We'll continue to have quarterly Android releases. The Q1 and Q3 updates provide incremental updates to ensure continuous quality. We're putting additional energy into working with our device partners to bring the Q2 release to as many devices as possible.
There's no change to the target API level requirements and the associated dates for apps in Google Play; our plans are for one annual requirement each year, tied to the major API level.
How to get ready
In addition to performing compatibility testing on this next major release, make sure that you're compiling your apps against the new SDK, and use the compatibility framework to enable targetSdkVersion-gated behavior changes as they become available for early testing.
App compatibility
The Android 16 Preview program runs from November 2024 until the final public release in Q2 of 2025. At key development milestones, we'll deliver updates for your development and testing environments. Each update includes SDK tools, system images, emulators, API reference, and API diffs. We'll highlight critical APIs as they are ready to test in the preview program in blogs and on the Android 16 developer website.
We're targeting March of 2025 for our Platform Stability milestone. At this milestone, we'll deliver final SDK/NDK APIs and also final internal APIs and app-facing system behaviors. From that time you'll have several months before the final release to complete your testing. Learn more by checking the release timeline details.
Get started with Android 16
You can enroll any supported Pixel device to get this and future Android Beta updates over-the-air. If you don't have a Pixel device, you can use the 64-bit system images with the Android Emulator in Android Studio. If you are currently on Android 16 Beta 1 or are already in the Android Beta program, you will be offered an over-the-air update to Beta 2.
We're looking for your feedback so please report issues and submit feature requests on the feedback page. The earlier we get your feedback, the more we can include in our work on the final release.
For the best development experience with Android 16, we recommend that you use the latest preview of Android Studio (Meerkat). Once you're set up, here are some of the things you should do:
- Compile against the new SDK, test in CI environments, and report any issues in our tracker on the feedback page.
- Test your current app for compatibility, learn whether your app is affected by changes in Android 16, and install your app onto a device or emulator running Android 16 and extensively test it.
We'll update the beta system images and SDK regularly throughout the Android 16 release cycle. Once you've installed a beta build, you'll automatically get future updates over-the-air for all later previews and Betas.
For complete information, visit the Android 16 developer site.
13 Feb 2025 6:58pm GMT
12 Feb 2025
Android Developers Blog
Meet the Android Studio Team: A Conversation with Staff Developer Programs Engineer, Trevor Johns
Posted by Ashley Tschudin - Social Media Specialist, MTP at Google
Android Studio isn't just code and algorithms - it's built by real people with fascinating stories. Our "Meet the Android Studio Team" series gives you a glimpse into the lives and passions of the talented individuals who craft the tools you use every day. Tune in each month to meet new team members and discover their unique journey.
Trevor Johns: Building Android Studio for You
Meet Trevor Johns, a seasoned Staff Developer Programs Engineer at Google.
Reflecting on his journey, Trevor sheds light on the most impactful advancements in the Android ecosystem and offers a glimpse into his vision for the future where AI plays a pivotal role in streamlining development workflows.
Trevor discusses the Android Studio team's dedication to enhancing developer productivity through AI, highlighting their focus on understanding and addressing developer needs, and reflects on the dynamic journey of Android development while sharing valuable insights.
Can you tell us about your journey to becoming a part of the Android Studio team? What sparked your interest in Android development?
I've been at Google in various roles since 2007, and transferred to the Android team in 2009, shortly after the launch of the HTC G1 - the first publicly available Android phone. Even in those early days it was clear that mobile computing was a unique opportunity to reimagine many of the limitations of desktop computers and how users interact with the digital world.
Among my first projects were helping developers optimize their apps for the MyTouch 3G and Motorola Droid, as well as creating developer resources for Android's 1.6 Donut release.
Over the years, I've worked on various parts of the Android OS, including our first tablet devices and Android Wear, helped develop the original Android support libraries (which later became Jetpack), and supported the migration to Kotlin.
Recently I joined the Android Studio team to help improve developer productivity, using AI to streamline common developer tasks and help developers have more time to focus on creativity.
How does the Android Studio team ensure that products or features meet the ever-changing needs of developers?
Like the rest of Android, we approach development of new features by listening to our developer community. We hold regular listening sessions with publishers, work with our UX research team to conduct case studies, and participate in online discussions to get a sense for where developers face the most friction - and then try to find ways to reduce that friction.
For example, we developed Gemini in Android Studio's integration with Play Vitals and Firebase Crashlytics based on feedback from members of the developer community who commented to let us know where they would find AI most useful across their developer workflow.
Speaking of, if you'd like to provide us with feedback, you can always file a bug or feature request on the Android Studio issue tracker.
How does the Studio team contribute to Google's broader vision for the Android platform?
In addition to listening to the Android community, we also keep an eye on what's being developed across the rest of the Android team and make sure that Android Studio has the right tools to help developers quickly migrate between Android versions and adopt those new platform features.
Beyond that, the Studio team provides leading edge editing tools to make sure that Android remains one of the easiest computing platforms to develop for - unlocking this unique computing platform for millions of developers.
In your opinion, what is the most impactful feature or improvement the Android team has introduced in recent years, and why?
For developers, my answer would have to be the migration to Kotlin. This language has modernized the Android developer experience - letting developers write apps with less code and fewer errors. It's also the foundation for Jetpack Compose, which is the future of Android UI development.
If you could wave a magic wand and add one dream feature to the Android universe, what would it be and why?
I'd love to see Gemini be able to not just autocomplete code for me, but generate scaffolds for new projects. That way I can focus on building features rather than worrying about basic structure when starting a new project.
Develop Android Apps with Kotlin
Follow Trevor's lead and embrace the power of Kotlin for modern Android development. Enhance your skills and write better Android apps faster with Kotlin.
Stay tuned!
Get ready for another inspiring story! The "Meet the Android Studio Team" series continues next week with a new team member in the spotlight. Don't miss their unique insights and journey.
Find Trevor Johns on LinkedIn, X, Bluesky, and Medium.
12 Feb 2025 10:00pm GMT
TrustedTime API: Introducing a reliable approach to time keeping for your apps
Posted by Kanyinsola Fapohunda - Software Engineer, and Geoffrey Boullanger - Technical Lead
Accurate time is crucial for a wide variety of app functionalities, from scheduling and event management to transaction logging and security protocols. However, a user can change the device's time, so a more accurate source of time than the device's local system time may be required. That's why we're introducing the TrustedTime API that leverages Google's infrastructure to deliver a trustworthy timestamp, independent of the device's potentially manipulated local time settings.
How does TrustedTime work?
The new API leverages Google's secure infrastructure to provide a trusted time source to your app. TrustedTime periodically syncs its clock to Google's servers, which have access to a highly accurate time source, so that you do not need to make a server request every time you want to know the current network time. Additionally, we've integrated a unique model that calculates the device's clock drift. This will inform you when the time may be inaccurate between network synchronizations.
Why is an accurate source of time important?
Many apps rely on the device's clock for various features. However, users can change their device's time settings, either intentionally or unintentionally, therefore changing the time that your app gets. This can lead to problems such as:
- Data Inconsistency: Apps relying on chronological event ordering are vulnerable to data corruption if users manipulate device time. TrustedTime mitigates this risk by providing a trustworthy time source.
- Security Gaps: Time-based security measures, like one-time passwords or timed access controls require an unaltered time source to be effective.
- Unreliable Scheduling: Apps that depend on accurate scheduling, like calendar or reminder apps, can malfunction if the device clock (i.e. Unix timestamp) is incorrect.
- Inaccurate Time: The device's internal clock can drift due to various factors, such as temperature, doze mode, battery level, etc. This can lead to problems in applications that require more precision. The TrustedTime API also provides the estimated error with the timestamps, so that you can ensure your app's time-sensitive operations are performed correctly.
- Lack of Consistency Between Devices: Inconsistent time across devices can cause problems in multi-device scenarios, such as gaming or collaborative applications. The TrustedTime API helps ensure that all devices have a consistent view of time, improving the user experience.
- Unnecessary Power and Data Consumption: TrustedTime is designed to be more efficient than calling an NTP server every time an app needs the current time. It avoids the overhead of repeated network requests by periodically syncing its clock with time servers. This synced time is then used as a reference point, and the TrustedTime API calculates the current time based on the device's internal clock. This approach reduces network usage and improves performance for apps that need frequent time checks.
TrustedTime Use Cases
The TrustedTime API opens up a range of possibilities for enhancing the reliability and security of your apps, with use cases in areas such as:
- Financial Applications: Ensure the accuracy of transaction timestamps even when the device is offline, preventing fraud and disputes.
- Gaming: Implement fair play by preventing users from manipulating the game clock to gain an unfair advantage.
- Limited-Time Offers: Guarantee that promotions and offers expire at the correct time, regardless of the user's device settings.
- E-commerce: Accurately track order processing and delivery times.
- Content Licensing: Enforce time-based restrictions on digital content, like rentals or subscriptions.
- IoT Devices: Synchronize clocks across multiple devices for consistent data logging and control.
- Productivity apps: Accurately record the time of any changes made to cloud documents while offline.
Getting started with the TrustedTime API
The TrustedTime API is built on top of Google Play services, making integration seamless for most Android developers.
The simplest way to integrate is to initialize the TrustedTimeClient early in your app lifecycle, such as in the onCreate() method of your Application class. The following example uses dependency injection with Hilt to make the time client available to components throughout the app.
[Optional] Setup dependency injection
// TrustedTimeClientAccessor.kt
import com.google.android.gms.tasks.Task
import com.google.android.gms.time.TrustedTimeClient

interface TrustedTimeClientAccessor {
    fun createClient(): Task<TrustedTimeClient>
}

// TrustedTimeModule.kt
@Module
@InstallIn(SingletonComponent::class)
class TrustedTimeModule {
    @Provides
    fun provideTrustedTimeClientAccessor(
        @ApplicationContext context: Context
    ): TrustedTimeClientAccessor {
        return object : TrustedTimeClientAccessor {
            override fun createClient(): Task<TrustedTimeClient> {
                return TrustedTime.createClient(context)
            }
        }
    }
}
Initialize early in your app's lifecycle
// TrustedTimeDemoApplication.kt
@HiltAndroidApp
class TrustedTimeDemoApplication : Application() {

    @Inject
    lateinit var trustedTimeClientAccessor: TrustedTimeClientAccessor

    var trustedTimeClient: TrustedTimeClient? = null
        private set

    override fun onCreate() {
        super.onCreate()
        trustedTimeClientAccessor.createClient().addOnCompleteListener { task ->
            if (task.isSuccessful) {
                // Stash the client
                trustedTimeClient = task.result
            } else {
                // Handle error, maybe retry later
                val exception = task.exception
            }
        }
        // To use Kotlin Coroutines, you can use the await() method; see
        // https://developers.google.com/android/guides/tasks#kotlin_coroutine for more info.
    }
}

NOTE: If you don't use dependency injection in your app, you can simply call `TrustedTime.createClient(context)` instead of using a TrustedTimeClientAccessor.
Use TrustedTimeClient anywhere in your app
// Retrieve the TrustedTimeClient from your application class
val myApp = applicationContext as TrustedTimeDemoApplication

// In this example, System.currentTimeMillis() is used as a fallback if the
// client is null (i.e. client creation task failed) or when there is no time
// signal available. You may not want to do this if using the system clock is
// not suitable for your use case.
val currentTimeMillis = myApp.trustedTimeClient?.computeCurrentUnixEpochMillis()
    ?: System.currentTimeMillis()

// trustedTimeClient.computeCurrentInstant() can be used if Instant is
// preferred to long for Unix epoch times and you are able to use the APIs.
Use in short-lived components like Activity
@AndroidEntryPoint
class MainActivity : AppCompatActivity() {

    @Inject
    lateinit var trustedTimeAccessor: TrustedTimeClientAccessor

    private var trustedTimeClient: TrustedTimeClient? = null

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        // ...
        trustedTimeAccessor.createClient().addOnCompleteListener { task ->
            if (task.isSuccessful) {
                // Stash the client
                trustedTimeClient = task.result
            } else {
                // Handle error, maybe retry later or use another time source.
                val exception = task.exception
            }
        }
    }

    private fun getCurrentTimeInMillis(): Long? {
        return trustedTimeClient?.computeCurrentUnixEpochMillis()
    }
}
TrustedTime API availability and limitations
The TrustedTime API is available on all devices running Google Play services on Android 5 (Lollipop) and above. You need to add the dependency com.google.android.gms:play-services-time:16.0.1 (or above) to access the new API. No additional permission is required to use this API. However, TrustedTime needs an internet connection after the device starts up to provide timestamps. If the device hasn't connected to the internet since booting, the TrustedTime APIs won't return timestamps.
It's important to note that the device's internal clock can drift due to factors like temperature, doze mode, and battery level. TrustedTime doesn't prevent this drift, but its APIs provide an error estimate for each timestamp. Use this estimate to determine if the timestamp's accuracy meets your application's requirements. While TrustedTime makes it more difficult for users to manipulate the time accessed by your app, it does not guarantee complete safety. Advanced techniques can still be used to tamper with the device's time.
Next steps
To learn more about the TrustedTime API, check out the following resources:
12 Feb 2025 5:00pm GMT
11 Feb 2025
Android Developers Blog
Get ready for Google I/O May 20–21
Google I/O is back
Google I/O returns May 20 - 21! Join us online as we share our vision for the future of technology, along with updates across Android, AI, web, cloud, and more.
Tune in to learn how the latest AI models can help you build innovative apps and transform development workflows. We'll also share how we're making Android development even easier, and empowering you to build richer, more engaging web experiences.
Register now and tune in live
Head to the Google I/O website and register to receive updates. The livestreamed keynotes kick off on May 20th at 10 AM PT, and new this year, we'll be streaming developer product keynotes live from Shoreline across both days!
Stay tuned for details about I/O Connect events this summer, and test your skills at solving the #GoogleIO puzzle to unlock bonus worlds and earn badges.
11 Feb 2025 8:43pm GMT
07 Feb 2025
Android Developers Blog
Timeline update: third-party autofill services support on Chrome on Android
Posted by Eiji Kitamura - Developer Advocate (@agektmr)
In October 2024, we announced that Chrome 131 would allow third-party autofill services on Android (like password managers) to natively autofill forms on websites. Reflecting on feedback from autofill service developers, we've decided to shift the schedule and allow third-party autofill services from Chrome 135.
Native Chrome support for third-party autofill services on Android means that users will be able to use their preferred password manager or autofill service directly in Chrome, without having to rely on workarounds or extensions. This change is expected to improve the user experience and security for Android users who use third-party autofill services.
Based on developer feedback, we've fixed bugs, and have been working to make the new setting easier to discover. To support those goals, we've added the following capabilities:
- An ability to query Chrome settings and learn whether the user wishes to use a third-party autofill service.
- An ability to deep link to the Chrome settings page where users can enable third-party autofill services.
Read Chrome settings
Any app can read whether Chrome has the third-party autofill mode enabled, which makes Chrome delegate autofill to Android Autofill. Chrome uses Android's ContentProvider to communicate that information. Declare in your Android manifest which channels you want to read settings from, e.g.:
<uses-permission android:name="android.permission.READ_USER_DICTIONARY"/>
<queries>
  <!-- To Query Chrome Beta: -->
  <package android:name="com.chrome.beta" />
  <!-- To Query Chrome Stable: -->
  <package android:name="com.android.chrome" />
</queries>
Then, use Android's ContentResolver to request that information by building the content URI as in this example code:
final String CHROME_CHANNEL_PACKAGE = "com.android.chrome"; // Chrome Stable.
final String CONTENT_PROVIDER_NAME = ".AutofillThirdPartyModeContentProvider";
final String THIRD_PARTY_MODE_COLUMN = "autofill_third_party_state";
final String THIRD_PARTY_MODE_ACTIONS_URI_PATH = "autofill_third_party_mode";

final Uri uri = new Uri.Builder()
        .scheme(ContentResolver.SCHEME_CONTENT)
        .authority(CHROME_CHANNEL_PACKAGE + CONTENT_PROVIDER_NAME)
        .path(THIRD_PARTY_MODE_ACTIONS_URI_PATH)
        .build();

final Cursor cursor = getContentResolver().query(
        uri,
        /* projection= */ new String[] {THIRD_PARTY_MODE_COLUMN},
        /* selection= */ null,
        /* selectionArgs= */ null,
        /* sortOrder= */ null);

if (cursor == null) {
    // Terminate now! Older versions of Chromium don't provide this information.
    return;
}

cursor.moveToFirst(); // Retrieve the result.

int index = cursor.getColumnIndex(THIRD_PARTY_MODE_COLUMN);
if (0 == cursor.getInt(index)) {
    // 0 means the third-party mode is turned off. Chrome uses its built-in
    // password manager. This is the default for new users.
} else {
    // 1 means the third-party mode is turned on. Chrome forwards all
    // autofill requests to Android Autofill. Users have to opt in for this.
}
Deep-link to Chrome settings
To deep-link to the Chrome settings page where users can enable third-party autofill services, use an Android Intent. Be sure to configure the action and categories exactly as in this example code:
Intent autofillSettingsIntent = new Intent(Intent.ACTION_APPLICATION_PREFERENCES);
autofillSettingsIntent.addCategory(Intent.CATEGORY_DEFAULT);
autofillSettingsIntent.addCategory(Intent.CATEGORY_APP_BROWSER);
autofillSettingsIntent.addCategory(Intent.CATEGORY_PREFERENCE);

// Invoking the intent with a chooser allows users to select the channel they
// want to configure. If only one browser reacts to the intent, the chooser is skipped.
Intent chooser = Intent.createChooser(autofillSettingsIntent, "Pick Chrome Channel");
startActivity(chooser);

// If the caller knows which Chrome channel they want to configure,
// they can instead add a package hint to the intent, e.g.
autofillSettingsIntent.setPackage("com.android.chrome");
startActivity(autofillSettingsIntent);
Updated timeline
To reflect the feedback and to leave time for autofill service developers to make relevant changes, we are shifting the plan. Users must select "Autofill using another service" in Chrome settings to ensure their autofill experience is unaffected. The new setting will become available in Chrome 135. Autofill services should encourage their users to toggle the setting, to ensure they have the best autofill experience possible with their service and Chrome on Android. Chrome plans to stop supporting the compatibility mode in summer 2025.
- March 5th, 2025: Chrome 135 beta is available
- April 1st, 2025: Chrome 135 is in stable
- Summer 2025: Compatibility mode will no longer be available on Chrome
07 Feb 2025 5:00pm GMT
06 Feb 2025
Android Developers Blog
Meet the Android Studio Team: A Conversation with Director of Product Management, Jamal Eason
Posted by Ashley Tschudin - Social Media Specialist, MTP at Google
Dive into the world of Android Studio and meet the masterminds behind your favorite development tools! In our recurring blog series, "Meet the Android Studio Team," we'll introduce you to the brilliant engineers, designers, product managers, and more who are shaping the future of Android development.
Join us each week to uncover the unique perspectives and stories of the people who make Android Studio the best it can be.
Jamal Eason: Building better Android apps - insights on Gemini, Crashlytics, and App Quality
Meet Jamal Eason, a Director of Product Management at Google, whose passion for empowering developers shines through in his work on Android Studio.
His journey, from studying computer science at West Point to developing Android hardware at Intel (including contributions to the Motorola Razr i), showcases a deep understanding of the developer experience. From attending the very first Android Studio unveiling at Google I/O to now shaping its future, Jamal brings a unique perspective to the team.
Jamal shares his insights on the evolution of Android Studio, the importance of a strong developer community, and the features he's most proud of.
Can you tell us about your journey to becoming a part of the Android Studio team? What sparked your interest in Android development?
I have had an interest in programming since an early age, especially since studying computer science as an undergrad at the United States Military Academy (West Point), and in that time I have been interested not just in the creation of software but also in the tools developers use to make it.
My interest in Android development came when I was preparing for my first job after my military career in telecommunications and computer networks, as I was joining a team at the Intel Corporation that worked with Google to build Android hardware products. I thought the best way to understand Google and mobile was to download the Android SDK and create my own app end to end. My first taste of Android was Froyo 2.2 using the Eclipse-based Android Developer Tools IDE.
At Intel, I worked on creating the x86-based version of the Android Emulator and Emulator system image, and also a new hypervisor that would accelerate the performance of the Android Emulator on x86-based laptops. After helping ship the Motorola Razr i (xt890) Android phone, with Intel technology inside and x86-optimized apps on the device, I made the move to the Android team at Google. With my experience in developing Android apps and shipping Android developer tools, the Android developer tools team was a natural fit.
Interestingly, I attended Google I/O as an attendee the year Android Studio was first revealed, and the following year I was working on the team that brought Android Studio to its Beta release at the next Google I/O.
What unique perspective or experience do you bring to the Android Studio team, and how does it influence your work?
Unique experiences I bring include:
- Technical Translation - In my prior roles, I worked with highly technical teams and learned how to take abstract technical concepts and present them to audiences of varying technical skill levels. And in reverse, I worked with many non-technical customers and colleagues and learned how to translate their pain points into product opportunities solved with technical solutions and innovation.
- User Empathy - I was previously a software developer, I regularly code on small side projects, and I really enjoy spending time with developers who use Android Studio. From first-hand experience and user engagement, I regularly bring the voice of the user into the discussion, from the inception of a product idea to the final stages of the release process.
- UX Design Sense - In a previous career, I designed and created websites and user interfaces for software. I developed an eye for good UX design and flows, particularly in technical software products. These skills complement the dedicated UX design team in Android Studio and help avoid productivity pitfalls from poor product and UX flows.
In your opinion, what is the most impactful feature or improvement the Android team has introduced in recent years, and why?
It's hard to nail down just one, but the top three are:
1) product quality
2) integration of Gemini and
3) integrations with Crashlytics and Play with App Quality Insights.
The most impactful feature we worked on is product quality. We treat quality, especially the core code editing experience, as a feature. If a developer can't write a line of code and deploy it to a device, then everything else is secondary. Since Android is always evolving, it is an ongoing effort, but one that is critical for the team to stay focused on.
On top of quality, thoughtful integration of Gemini into Android Studio is a real accelerant for app development. Our focus with AI is to make Android developers more productive, and to make the harder tasks and toil easier. From AI-powered code completion, to built-in Gemini chat for Android app development, to enhancing existing tools with AI, such as using Gemini to generate Jetpack Compose UI Previews, we are just at the beginning of leveraging AI to make Android app developers more productive.
Lastly, with App Quality Insights, it is now much easier for app developers to address the performance and quality issues found with Firebase Crashlytics and Android Vitals from Google Play. Surfacing these issues right next to source code and source control makes resolving them much faster and more intuitive.
How does the Android Studio team ensure that products or features meet the ever-changing needs of developers?
First, the Android Studio team works hand-in-hand with the Android OS team, striving to deliver developer tools in concert with new Android OS and API changes so developers are ready to adopt new platform capabilities into their apps. Then, we constantly review and prioritize developer feedback received via our issue tracker or via the bi-annual developer survey we post on the Android Developers site. When we can, we also engage with developers via various social media channels. And lastly, we regularly interview developers at various experience levels, and from regions around the world, in targeted user research studies.
What advice would you give to aspiring Android developers who are just starting their journey?
- Start with a robust set of code labs and tutorials.
- Get inspired on the possibilities of Android and what you can build.
- Join the Android developer community:
Deploy with Confidence
Inspired by Jamal's journey and dedication to empowering developers? Explore the latest Android Studio features, including App Quality Insights, to improve your app's performance and address issues quickly.
Stay tuned
Don't miss the next installment of our "Meet the Android Studio Team" series, where we'll introduce you to another amazing member of our team and share their unique journey. Stay tuned for more!
Find Jamal Eason on LinkedIn and X.
06 Feb 2025 9:15pm GMT
30 Jan 2025
Android Developers Blog
Meet the Android Studio Team: A Conversation with Product Manager, Paris Hsu
Posted by Ashley Tschudin - Social Media Specialist, MTP at Google
Welcome to "Meet the Android Studio Team"; a short blog series where we pull back the curtain and introduce you to the passionate people who build your favorite Android development tools. Get to know the talented minds - engineers, designers, product managers, and more - who pour their hearts into crafting the best possible experience for Android developers.
Join us each week to meet a new member of the team and explore their unique perspectives.
Paris Hsu: Empowering Android developers with Compose tools
Meet Paris Hsu, a Product Manager at Google passionate about empowering developers to build incredible Android apps.
Her journey to the Android Studio team started with a serendipitous internship at Microsoft, where she discovered the power of developer tools. Now, as part of the UI Tools team, Paris champions intuitive solutions that streamline the development process, like the innovative Compose Tools suite.
In this installment of "Meet the Android Studio Team," Paris shares insights into her work, the importance of developer feedback, and her dream Android feature (hint: it involves acing that forehand).
Can you tell us about your journey to becoming a part of the Android Studio team? What sparked your interest in Android development?
Honestly, I joined a bit by chance! The summer before my last year of grad school, I was in Microsoft's Garage incubator internship program. Our project, InkToCode, turned handwritten designs into code. It was my first experience building developer tools and it made me realize how powerful developer tools can be, which led me to the Android Studio team. Now, after 6 years, I'm constantly amazed by what Android developers create - from innovative productivity apps to immersive games. It's incredibly rewarding to build tools that empower developers to create more.
In your opinion, what is the most impactful feature or improvement the Android Studio team has introduced in recent years, and why?
As part of the UI Tools team in Android Studio, I'm biased towards Compose Tools! Our team spent a lot of time rethinking how we can take a code-first approach to tools as we transition the community from XML to Compose. Features like the Compose Preview and its submodes (Interactive, Animation, Deploy preview) enable fast UI iteration, while features such as Layout Inspector or Compose UI Check help find and diagnose UI issues with ease. We are also exploring ways to apply multimodal AI in these tools to help developers write high-quality, adaptive, and inclusive Compose code more quickly.
How does the Android Studio team ensure that products or features meet the ever-changing needs of developers?
We are constantly engaging with developers and listening to their feedback to ensure we are meeting their needs! Some examples:
- Direct feedback: UXR studies, Annual developer surveys, and Buganizer reports provide valuable insights.
- Early access: We release Early Access Programs (EAPs) for new features, allowing developers to test them and provide feedback before official launch.
- Community engagement: We have advisory boards with experienced Android developers, gather feedback from Google Developer Experts (GDEs), and attend conferences to connect directly with the community.
How does the Studio team contribute to Google's broader vision for the Android platform?
I think Android Studio contributes to Google's broader mission by providing Android developers with powerful and intuitive tools. This way, developers are empowered to create amazing apps that bring the best of Google's services and information to our users. Whether it's accessing knowledge through Search, leveraging Gemini, staying connected with Maps, or enjoying entertainment on YouTube, Android Studio helps developers build the experiences that connect people to what matters most.
If you could wave a magic wand and add one dream feature to the Android universe, what would it be and why?
Anyone who knows me knows that I am recently super obsessed with tennis. I would love to see more coaching wearables (e.g. Pixel Watch, Pixel Racket?!). I would love real-time feedback on my serve and especially forehand stroke analysis.
Learn more about Compose Tools
Inspired by Paris' passion for empowering developers to build incredible Android apps? To learn more about how Compose Tools can streamline your app development process, check out the Compose Tools documentation and get started with the Jetpack Compose Tutorial.
Stay tuned
Keep an eye out for the next installment in our "Meet the Android Studio Team" series, where we'll shine the spotlight on another team member and delve into their unique insights.
Find Paris Hsu on LinkedIn, X, and Medium.
30 Jan 2025 9:00pm GMT
29 Jan 2025
Android Developers Blog
Production-ready generative AI on Android with Vertex AI in Firebase
Posted by Thomas Ezan - Sr. Developer Relation Engineer (@lethargicpanda)
Gemini can help you build and launch new user features that will boost engagement and create personalized experiences for your users.
The Vertex AI in Firebase SDK lets you access Google's Gemini Cloud models (like Gemini 1.5 Flash and Gemini 1.5 Pro) and add GenAI capabilities to your Android app. It became generally available last October, which means it's now ready for production, and it is already used by many apps on Google Play.
Here are tips for a successful deployment to production.
Implement App Check to prevent API abuse
When using the Vertex AI in Firebase API it is crucial to implement robust security measures to prevent unauthorized access and misuse.
Firebase App Check helps protect backend resources (like Vertex AI in Firebase, Cloud Functions for Firebase, or even your own custom backend) from abuse. It does this by attesting that incoming traffic is coming from your authentic app running on an authentic and untampered Android device.

To get started, add Firebase to your Android project and enable the Play Integrity API for your app in the Google Play console. Back in the Firebase console, go to the App Check section of your Firebase project to register your app by providing its SHA-256 fingerprint.
Then, update your Android project's Gradle dependencies with the App Check library for Android:
dependencies {
    // BoM for the Firebase platform
    implementation(platform("com.google.firebase:firebase-bom:33.7.0"))

    // Dependency for App Check
    implementation("com.google.firebase:firebase-appcheck-playintegrity")
}
Finally, in your Kotlin code, initialize App Check before using any other Firebase SDK:
Firebase.initialize(context)
Firebase.appCheck.installAppCheckProviderFactory(
    PlayIntegrityAppCheckProviderFactory.getInstance(),
)
To enhance the security of your generative AI feature, you should implement and enforce App Check before releasing your app to production. Additionally, if your app utilizes other Firebase services like Firebase Authentication, Firestore, or Cloud Functions, App Check provides an extra layer of protection for those resources as well.
Once App Check is enforced, you'll be able to monitor your app's requests in the Firebase console.

You can learn more about App Check on Android in the Firebase documentation.
Use Remote Config for server-controlled configuration
The generative AI landscape evolves quickly. Every few months, new Gemini model iterations become available and some models are removed. See the Vertex AI in Firebase Gemini models page for details.
Because of this, instead of hardcoding the model name in your app, we recommend using a server-controlled variable using Firebase Remote Config. This allows you to dynamically update the model your app uses without having to deploy a new version of your app or require your users to pick up a new version.
You define parameters that you want to control (like model name) using the Firebase console. Then, you add these parameters into your app, along with default "fallback" values for each parameter. Back in the Firebase console, you can change the value of these parameters at any time. Your app will automatically fetch the new value.
Here's how to implement Remote Config in your app:
// Initialize the remote configuration by defining the refresh time
val remoteConfig: FirebaseRemoteConfig = Firebase.remoteConfig
val configSettings = remoteConfigSettings {
    minimumFetchIntervalInSeconds = 3600
}
remoteConfig.setConfigSettingsAsync(configSettings)

// Set default values defined in your app resources
remoteConfig.setDefaultsAsync(R.xml.remote_config_defaults)

// Load the model name
val modelName = remoteConfig.getString("model_name")
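To tie the two together, here is a minimal sketch of fetching the latest server-controlled value and constructing the model from it. It assumes the "model_name" parameter from the example above; the coroutine wiring is illustrative, so adapt it to your app's architecture.

import com.google.firebase.Firebase
import com.google.firebase.remoteconfig.remoteConfig
import com.google.firebase.vertexai.GenerativeModel
import com.google.firebase.vertexai.vertexAI
import kotlinx.coroutines.tasks.await

suspend fun configuredModel(): GenerativeModel {
    val remoteConfig = Firebase.remoteConfig
    // Refresh server-controlled values (subject to the fetch interval set above)
    remoteConfig.fetchAndActivate().await()
    // Falls back to the in-app default if no remote value has been set
    val modelName = remoteConfig.getString("model_name")
    return Firebase.vertexAI.generativeModel(modelName)
}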
Read more about using Remote Config with Vertex AI in Firebase.
Gather user feedback to evaluate impact
As you roll out your AI-enabled feature to production, it's critical to build feedback mechanisms into your product and allow users to easily signal whether the AI output was helpful, accurate, or relevant. For example, you can incorporate interactive elements such as thumb-up and thumb-down buttons and detailed feedback forms within the user interface. The Material Icons in Compose package provides ready-to-use icons to help you implement them.
You can easily track user interaction with these elements as custom analytics events by using the Google Analytics logEvent() function:
Row {
    Button(
        onClick = {
            firebaseAnalytics.logEvent("model_response_feedback") {
                param("feedback", "thumb_up")
            }
        }
    ) {
        Icon(Icons.Default.ThumbUp, contentDescription = "Thumb up")
    }
    Button(
        onClick = {
            firebaseAnalytics.logEvent("model_response_feedback") {
                param("feedback", "thumb_down")
            }
        }
    ) {
        Icon(Icons.Default.ThumbDown, contentDescription = "Thumb down")
    }
}
Learn more about Google Analytics and its event logging capabilities in the Firebase documentation.
User privacy and responsible AI
When you use Vertex AI in Firebase for inference, you have the guarantee that the data sent to Google won't be used by Google to train AI models (see Vertex AI documentation for details).
It's also important to be transparent with your users when they're engaging with generative AI technology. You should highlight the possibility of unexpected model behavior.
Finally, users should have control within your app over how their activity related to AI model interactions is stored and deleted.
You can learn more about how Google is approaching Generative AI responsibly in the Google Cloud documentation.
29 Jan 2025 5:00pm GMT
28 Jan 2025
Android Developers Blog
Helping users find trusted apps on Google Play
Posted by JJ Zou - Product Manager, and Scott Lin - Product Manager
At Google Play, we're committed to empowering you with the tools and resources you need to build successful and secure apps that users can rely on. That's why we're introducing a new way to recognize VPN apps that go above and beyond to protect their users: a "Verified" badge for consumer-facing VPN apps.
This new badge is designed to highlight apps that prioritize user privacy and safety, help users make more informed choices about the VPN apps they use, and build confidence in the apps they ultimately download. This badge complements existing features such as the Google Play Store banner for VPNs and Data Safety section declaration in the Play Store.

Build user trust with more transparency
Earning the VPN badge isn't just about checking a box: it's proof that your VPN app invests in app safety. This badge signifies that your app has gone above and beyond, adhering to the Play safety and security guidelines and successfully completing a Mobile Application Security Assessment (MASA) Level 2 validation.
The VPN badge helps your app stand out in a crowded marketplace. Once awarded, the badge is prominently displayed on your app's details page and in search results. Additionally, we have built new surfaces to showcase verified VPN applications.
Demonstrating commitment to security and safety
We're excited to share insights from some of our partners who have already earned the VPN badge and are leading the way in building a safe and trusted Google Play ecosystem. Learn how partners like NordVPN, hide.me, and Aloha are using the badge and implementing best practices for user security:
NordVPN

"We're excited that the new 'Verified' badge will help users easily identify VPNs that meet high standards for security and privacy. In a market where trust is key, this badge not only provides reassurance to customers, but also highlights the integrity of developers committed to delivering secure and reliable products."
hide.me

"Privacy and user safety are fundamental to our VPN's architecture. The MASA program has been valuable in validating our security practices and maintaining high standards. This accreditation provides independent verification of our commitment to protecting user privacy."
Aloha Browser

"The certification process is well-organized and accessible to any company. If your product is developed with security as a core focus, passing the required audits should not pose any difficulty. We regularly conduct third-party audits and have been active participants in the MASA program since its inception. Additionally, it fosters discipline in your development practices, knowing that regular re-certification is required. Ultimately, it's the end user who benefits the most-a secure and satisfied user is the ultimate goal for every app developer."
Getting your App Badge-Ready
To take advantage of this opportunity to enhance your app's profile and attract more users, learn more about the specific criteria and start the validation process today.
To be considered for the "Verified" badge, your VPN app needs to:
- Complete a Mobile Application Security Assessment (MASA) Level 2 validation
- Have an Organization developer account type
- Meet target API level requirements for Google Play apps
- Have at least 10,000 installs and 250 reviews
- Be published on Google Play for at least 90 days
- Submit a Data Safety section declaration, opting into:
- Independent security review, under 'Additional badges'
- Encryption in transit
Note: This list is not exhaustive and doesn't fully represent all the criteria used to display the badge. While other factors contribute to the evaluation, fulfilling these requirements significantly increases your chances of seeing your VPN app "Verified."
Join us in our mission to create a safer and more transparent Google Play ecosystem. We're here to support you with the tools and resources you need to build trusted apps.
28 Jan 2025 6:00pm GMT
24 Jan 2025
Android Developers Blog
Android Studio’s 10 year anniversary
Posted by Tor Norbye - Engineering Director, Jamal Eason - Director of Product Management, and Xavier Ducrohet - Tech Lead | Android Studio
Android Studio provides you with an integrated development environment (IDE) to develop, test, debug, and package Android apps that can reach billions of users across a diverse set of Android devices. Last month we reached a big milestone for the product: 10 years since the Android Studio 1.0 release reached the stable channel. You can hear a bit more about its history in the most recent episode of Android Developers Backstage, or watch some of the team's favorite moments: 🎉
When we set out to develop Android Studio we started with these three principles:
First, we wanted to build and release a complete IDE, not just a plugin. Before Android Studio, users had to go download a JDK, then download Eclipse, then configure it with an update center to point to Android, install the Eclipse plugin for Android, and then configure that plugin to point to an Android SDK install. Not only did we want everything to work out-of-the-box, but we also wanted to be able to configure and improve everything: from having an integrated dependency management system to offering code inspections that were relevant to Android app developers to having a single place to report bugs.
Second, we wanted to build it on top of an actively maintained, open-sourced, and best-of-breed Java programming language IDE. Not too long before releasing Android Studio, we had all used IntelliJ and felt it was superior from a code editing perspective.
And third, we wanted to not only provide a build system that was better suited for Android app development, but to also enable this build system to work consistently both from the command line and from inside the IDE. This was important because in the previous toolchain, we found that there were discrepancies in behavior and capability between the in-IDE builds with Eclipse and CI builds with Ant.
This led to the release of Android Studio, including these highlights:
Here are some nostalgic screenshots from that first version of Android Studio:



Android Studio has come a long way since those early days, but our mission of empowering Android developers with excellent tools continues to be our focus.
Let's hear from some team members across Android, JetBrains, and Gradle as they reflect on this milestone and how far the ecosystem has come since then.
Android Studio team
"Inside the Android team, engineers who didn't work on apps had the choice between using Eclipse and using IntelliJ, and most of them chose IntelliJ. We knew that it was the gold standard for Java development (and still is, all these years later.) So we asked ourselves: if this is what developers prefer when given a choice, wouldn't this be for our users as well?
And the warm reception when we unveiled the alpha at I/O in 2013 made it clear that it was the right choice."
- Tor Norbye, Engineering Director of Android Studio at Google
"We had a vision of creating a truly Integrated Development Environment for Android app development instead of a collection of related tools. In our previous working model, we had contributions of Android tools from a range of frameworks and UX flows that did not 100% work well end-to-end. The move to the open-sourced JetBrains IntelliJ platform enabled the Google team to tie tools together in a thoughtful way with Android Studio, plus it allowed others to contribute in a more seamless way. Lastly, looking back at the last 10 years, I'm proud of the partnership with Jetbrains and Gradle, plus the community of contributors to bring the best suite of tools to Android app developers."
- Jamal Eason, Director of Product Management of Android Studio at Google
JetBrains
"Google choosing IntelliJ as the platform to build Android Studio was a very exciting moment for us at JetBrains. It allowed us to strengthen and build on the platform even further, and paved the way for further collaboration in other projects such as Kotlin."
- Hadi Hariri, VP of Program Management at JetBrains
Gradle
"Android Studio's 10th anniversary marks a decade of incredible progress for Android developers. We are proud that Gradle Build Tool has continued to be a foundational part of the Android toolchain, enabling millions of Android developers to build their apps faster, more elegantly, and at scale."
- Hans Dockter, creator of Gradle Build Tool and CEO/Founder of Gradle Inc.
"Our long-standing strategic partnership with Google and our mutual commitment to improving the developer experience continues to impact millions of developers. We look forward to continuing that journey for many years to come."
- Piotr Jagielski, VP of Engineering, Gradle Build Tool
Last but not least, we want to thank you for your feedback and support over the last decade. Android Studio wouldn't be where it is today without the active community of developers who are using it to build Android apps for their communities and the world and providing input on how we can make it better each day.
As we head into this new year, we'll be bringing Gemini into more aspects of Android Studio to help you across the development lifecycle to build quality apps faster. We'll strive to make it easier and more seamless to build, test, and deploy your apps with Jetpack Compose across the range of form factors. We are proud of what we launch, but we always have room to improve in the evolving mobile ecosystem. Therefore, quality and stability of the IDE is our top priority so that you can be as productive as possible.
We look forward to continuing to empower you with great tools and improvements as we take Android Studio forward into the next decade. 🚀 We also welcome you to be a part of our developer community on LinkedIn, Medium, YouTube, or X.
24 Jan 2025 6:00pm GMT
23 Jan 2025
Android Developers Blog
The First Beta of Android 16
Posted by Matthew McCullough - VP of Product Management, Android Developer
The first beta of Android 16 is now available, which means it's time to open the experience up to both developers and early adopters. You can now enroll any supported Pixel device here to get this and future Android Beta updates over-the-air.
This build includes support for the future of app adaptivity, Live Updates, the Advanced Professional Video format, and more. We're looking forward to hearing what you think, and thank you in advance for your continued help in making Android a platform that works for everyone.
Android adaptive apps
Users expect apps to work seamlessly on all their devices, regardless of display size and form factor. To that end, Android 16 is phasing out the ability for apps to restrict screen orientation and resizability on large screens. This is similar to features OEMs have added over the last several years to large screen devices to allow users to run apps at any window size and aspect ratio.
On screens larger than 600dp wide, apps that target API level 36 will have app windows that resize; you should check your apps to ensure your existing UIs scale seamlessly, working well across portrait and landscape aspect ratios. We're providing frameworks, tooling, and libraries to help.

Key changes:
- Manifest attributes and APIs that restrict orientation and resizing will be ignored for apps (but not games) on large screens.
Timeline:
- Android 16 (2025): Changes apply to large screens (600dp in width) for apps targeting API level 36 (developers can opt-out)
- Android release in 2026: Changes apply to large screens for apps targeting API level 37 (no opt-out)
- It's a great time to make your app adaptive! You can test these overrides without changing your target SDK by using the app compatibility framework and enabling the UNIVERSAL_RESIZABLE_BY_DEFAULT flag. Learn more about changes to orientation and resizability APIs in Android 16.
Live Updates
Live Updates are a new class of notifications that help users monitor and quickly access important ongoing activities.
The new ProgressStyle notification template provides a consistent user experience for Live Updates, helping you build for these progress-centric user journeys: rideshare, delivery, and navigation. It includes support for custom icons for the start, end, and current progress tracking, segments and points, user journey states, milestones, and more.
ProgressStyle notifications are suggested only for ride sharing, food delivery, and navigation use cases.
@Override
protected Notification getNotification() {
    return new Notification.Builder(mContext, CHANNEL_ID)
        .setSmallIcon(R.drawable.ic_app_icon)
        .setContentTitle("Ride requested")
        .setContentText("Looking for nearby drivers")
        .setStyle(
            new Notification.ProgressStyle()
                .addProgressSegment(
                    new Notification.ProgressStyle.Segment(100)
                        .setColor(COLOR_ORANGE))
                .setProgressIndeterminate(true))
        .build();
}
Camera and media updates
Android 16 advances support for the playback, creation, and editing of high-quality media, a critical use case for social and productivity apps.
Advanced Professional Video
Android 16 introduces support for the Advanced Professional Video (APV) codec, which is designed for professional-level, high-quality video recording and post-production.
The APV codec standard has the following features:
- Perceptually lossless video quality (close to raw video quality)
- Low complexity and high throughput intra-frame-only coding (without pixel domain prediction) to better support editing workflows
- Support for high bit-rate range up to a few Gbps for 2K, 4K and 8K resolution content, enabled by a lightweight entropy coding scheme
- Frame tiling for immersive content and for enabling parallel encoding and decoding
- Support for various chroma sampling formats and bit-depths
- Support for multiple decoding and re-encoding without severe visual quality degradation
- Support for multi-view video and auxiliary video such as depth, alpha, and preview
- Support for HDR10/10+ and user-defined metadata
A reference implementation of APV is provided through the OpenAPV project. Android 16 will implement support for the APV 422-10 Profile that provides YUV 422 color sampling along with 10-bit encoding and for target bitrates of up to 2Gbps.
Camera night mode scene detection
To help your app know when to switch to and from a night mode camera session, Android 16 adds EXTENSION_NIGHT_MODE_INDICATOR. If supported, it's available in the CaptureResult within Camera2.
This is the API we briefly mentioned as coming soon in the "How Instagram enabled users to take stunning low light photos" blogpost. That post is a practical guide on how to implement night mode together with a case study that links higher-quality, in-app, night mode photos with an increase in the number of photos shared from the in-app camera.
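As a rough sketch of how an app might consume the indicator in a Camera2 capture callback (assuming the key surfaces as CaptureResult.EXTENSION_NIGHT_MODE_INDICATOR on API 36 devices that support it; updateNightModeHint() is a hypothetical UI hook):

import android.hardware.camera2.CameraCaptureSession
import android.hardware.camera2.CaptureRequest
import android.hardware.camera2.CaptureResult
import android.hardware.camera2.TotalCaptureResult

val captureCallback = object : CameraCaptureSession.CaptureCallback() {
    override fun onCaptureCompleted(
        session: CameraCaptureSession,
        request: CaptureRequest,
        result: TotalCaptureResult
    ) {
        // Expected to be null on devices or sessions that don't report the indicator
        val indicator = result.get(CaptureResult.EXTENSION_NIGHT_MODE_INDICATOR)
        if (indicator != null) {
            updateNightModeHint(indicator) // hypothetical: toggle your night mode UI
        }
    }
}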
Vertical Text
Android 16 adds low-level support for rendering and measuring text vertically to provide foundational vertical writing support for library developers. This is particularly useful for languages like Japanese that commonly use vertical writing systems. A new flag, VERTICAL_TEXT_FLAG, has been added to the Paint class. When this flag is set using Paint.setFlags, Paint's text measurement APIs will report vertical advances instead of horizontal advances, and Canvas will draw text vertically.
Note: Current high level text APIs, such as Text in Jetpack Compose, TextView, Layout classes and their subclasses do not support vertical writing systems, and do not support using the VERTICAL_TEXT_FLAG.
val text = "「春は、曙。」"
Box(Modifier
    .padding(innerPadding)
    .background(Color.White)
    .fillMaxSize()
    .drawWithContent {
        drawIntoCanvas { canvas ->
            val paint = Paint().apply {
                textSize = 64.sp.toPx()
            }
            // Draw text vertically
            paint.flags = paint.flags or VERTICAL_TEXT_FLAG
            val height = paint.measureText(text)
            canvas.nativeCanvas.drawText(
                text,
                0,
                text.length,
                size.width / 2,
                (size.height - height) / 2,
                paint
            )
        }
    }
) {}
Accessibility
Android 16 adds new accessibility APIs to help you bring your app to every user.
Supplemental descriptions
When an accessibility service describes a ViewGroup, it combines content labels from its child views. If you provide a contentDescription for the ViewGroup, accessibility services assume you are also overriding the content of non-focusable child views. This can be problematic if you want to label things like a drop down (e.g. "Font Family") while preserving the current selection for accessibility (e.g. "Roboto"). Android 16 adds setSupplementalDescription so you can provide text that provides information about a ViewGroup without overriding information from its children.
Required form fields
Android 16 adds setFieldRequired to AccessibilityNodeInfo so apps can tell an accessibility service that input to a form field is required. This is an important scenario for users filling out many types of forms, even things as simple as a required terms and conditions checkbox, helping users to consistently identify and quickly navigate between required fields.
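Here's a hedged sketch of how a custom view might adopt both new APIs, assuming the setters land on AccessibilityNodeInfo in API 36 (FontFamilyPicker is a hypothetical view):

import android.content.Context
import android.view.accessibility.AccessibilityNodeInfo
import android.widget.FrameLayout

class FontFamilyPicker(context: Context) : FrameLayout(context) {

    override fun onInitializeAccessibilityNodeInfo(info: AccessibilityNodeInfo) {
        super.onInitializeAccessibilityNodeInfo(info)
        // Label the group without overriding its children, so the current
        // selection (e.g. "Roboto") remains readable to accessibility services
        info.setSupplementalDescription("Font family")
        // Tell accessibility services that input to this field is required
        info.setFieldRequired(true)
    }
}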
Generic ranging APIs
Android 16 includes the new RangingManager, which provides ways to determine the distance and angle on supported hardware between the local device and a remote device. RangingManager supports the usage of a variety of ranging technologies such as BLE channel sounding, BLE RSSI-based ranging, Ultra-Wideband, and WiFi round trip time.
Behavior changes
With every Android release, we seek to make the platform more efficient and robust, balancing the needs of your apps against things like system performance and battery life. This can result in behavior changes that impact compatibility.
ART internal changes
Code that leverages internal structures of the Android Runtime (ART) may not work correctly on devices running Android 16, or on earlier Android versions that update the ART module through Google Play system updates. These structures are changing in ways that help improve ART's performance.
Impacted apps will need to be updated. Relying on internal structures can always lead to compatibility problems, but it's particularly important to avoid relying on code (or libraries containing code) that leverages internal ART structures, since ART changes aren't tied to the platform version the device is running on; they go out to over a billion devices through Google Play system updates.
For more information, see the Android 16 changes affecting all apps and the restrictions on non-SDK interfaces.
Migration or opt-out required for predictive back
For apps targeting Android 16 or higher and running on an Android 16 or higher device, the predictive back system animations (back-to-home, cross-task, and cross-activity) are enabled by default. Additionally, the deprecated onBackPressed is not called and KeyEvent.KEYCODE_BACK is no longer dispatched.
If your app intercepts the back event and you haven't migrated to predictive back yet, update your app to use supported back navigation APIs or temporarily opt out by setting the android:enableOnBackInvokedCallback attribute to false in the <application> or <activity> tag of your app's AndroidManifest.xml file.
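For reference, one supported migration path is the AndroidX OnBackPressedCallback API; here's a minimal sketch (showExitConfirmation() is a hypothetical in-app handler):

import android.os.Bundle
import androidx.activity.ComponentActivity
import androidx.activity.OnBackPressedCallback

class MainActivity : ComponentActivity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        // Handle back in-app via the supported API instead of overriding onBackPressed()
        onBackPressedDispatcher.addCallback(this, object : OnBackPressedCallback(true) {
            override fun handleOnBackPressed() {
                showExitConfirmation() // hypothetical confirmation flow
            }
        })
    }
}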
Predictive back support for 3-button navigation
Android 16 brings predictive back support to 3-button navigation for apps that have properly migrated to predictive back. Long-pressing the back button initiates a predictive back animation, giving users a preview of where the back button takes them.
This behavior applies across all areas of the system that support predictive back animations, including the system animations (back-to-home, cross-task, and cross-activity).
Fixed rate work scheduling optimization
Prior to targeting Android 16, when scheduleAtFixedRate missed a task execution due to being outside a valid process lifecycle, all missed executions would immediately execute when the app returned to a valid lifecycle.
When targeting Android 16, at most one missed execution of scheduleAtFixedRate will be immediately executed when the app returns to a valid lifecycle. This behavior change is expected to improve app performance. Please test the behavior to ensure your application is not impacted. You can also test by using the app compatibility framework and enabling the STPE_SKIP_MULTIPLE_MISSED_PERIODIC_TASKS compat flag.
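To illustrate the kind of code affected, here's a plain ScheduledThreadPoolExecutor timer (refreshCachedData() is placeholder work); under Android 16 targeting, at most one missed period replays when the app returns to a valid lifecycle:

import java.util.concurrent.Executors
import java.util.concurrent.TimeUnit

val executor = Executors.newSingleThreadScheduledExecutor()

// Periodic work; executions missed while the process is outside a valid
// lifecycle replay at most once on Android 16 (previously, all of them did)
executor.scheduleAtFixedRate(
    { refreshCachedData() }, // placeholder periodic work
    0L,                      // initial delay
    15L,                     // period
    TimeUnit.MINUTES
)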
Ordered broadcast priority scope no longer global
In Android 16, broadcast delivery order using the android:priority attribute or IntentFilter#setPriority() across different processes will not be guaranteed. Broadcast priorities for ordered broadcasts will only be respected within the same application process rather than across all system processes.
Additionally, broadcast priorities will be automatically confined to the range (SYSTEM_LOW_PRIORITY + 1, SYSTEM_HIGH_PRIORITY - 1).
Your application may be impacted if it does either of the following:
1. Your application has declared multiple processes that have set broadcast receiver priorities for the same intent.
2. Your application process interacts with other processes and has expectations around receiving a broadcast intent in a certain order.
If the processes need to coordinate with each other, they should communicate using other coordination channels.
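As a sketch of the pattern affected, here's a receiver registered with an explicit priority (the action name and receiver are placeholders); on Android 16 that priority orders delivery only among receivers in the same process:

import android.content.Context
import android.content.IntentFilter

val filter = IntentFilter("com.example.ACTION_SYNC").apply {
    priority = 10 // no longer honored across process boundaries on Android 16
}
context.registerReceiver(syncReceiver, filter, Context.RECEIVER_NOT_EXPORTED)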
Gemini Extensions
Samsung just launched new Gemini Extensions on the S25 series, demonstrating new ways Android apps can integrate with the power of Gemini. We're working to make this functionality available on even more form factors.
Two Android API releases in 2025
This preview is for the next major release of Android with a planned launch in Q2 of 2025 and we plan to have another release with new developer APIs in Q4. The Q2 major release will be the only release in 2025 to include planned behavior changes that could affect apps. The Q4 minor release will pick up feature updates, optimizations, and bug fixes; it will not include any app-impacting behavior changes.
We'll continue to have quarterly Android releases. The Q1 and Q3 updates, which will land in-between the Q2 and Q4 API releases, will provide incremental updates to ensure continuous quality. We're putting additional energy into working with our device partners to bring the Q2 release to as many devices as possible.
There's no change to the target API level requirements and the associated dates for apps in Google Play; our plans are for one annual requirement each year, tied to the major API level.
How to get ready
In addition to performing compatibility testing on this next major release, make sure that you're compiling your apps against the new SDK, and use the compatibility framework to enable targetSdkVersion-gated behavior changes as they become available for early testing.
App compatibility
The Android 16 Preview program runs from November 2024 until the final public release in Q2 of 2025. At key development milestones, we'll deliver updates for your development and testing environments. Each update includes SDK tools, system images, emulators, API reference, and API diffs. We'll highlight critical APIs as they are ready to test in the preview program in blogs and on the Android 16 developer website.
We're targeting March of 2025 for our Platform Stability milestone. At this milestone, we'll deliver final SDK/NDK APIs and also final internal APIs and app-facing system behaviors. From that time you'll have several months before the final release to complete your testing. The release timeline details are here.
Get started with Android 16
Now that we've entered the beta phase, you can enroll any supported Pixel device to get this and future Android Beta updates over-the-air. If you don't have a Pixel device, you can use the 64-bit system images with the Android Emulator in Android Studio.
If you are currently on Android 16 Developer Preview 2 or are already in the Android Beta program, you will be offered an over-the-air update to Beta 1.
If you are in Android 25Q1 Beta and would like to take the final stable release of 25Q1 and exit Beta, you need to ignore the over-the-air update to 25Q2 Beta 1 and wait for the release of 25Q1.
We're looking for your feedback so please report issues and submit feature requests on the feedback page. The earlier we get your feedback, the more we can include in our work on the final release.
For the best development experience with Android 16, we recommend that you use the latest preview of Android Studio (Meerkat). Once you're set up, here are some of the things you should do:
- Compile against the new SDK, test in CI environments, and report any issues in our tracker on the feedback page.
- Test your current app for compatibility, learn whether your app is affected by changes in Android 16, and install your app onto a device or emulator running Android 16 and extensively test it.
We'll update the preview/beta system images and SDK regularly throughout the Android 16 release cycle. Once you've installed a beta build, you'll automatically get future updates over-the-air for all later previews and Betas.
For complete information, visit the Android 16 developer site.
23 Jan 2025 7:30pm GMT
The future is adaptive: Changes to orientation and resizability APIs in Android 16
Posted by Maru Ahues Bouza - Director, Product Management
With 3+ billion Android devices in use globally, the Android ecosystem is more vibrant than ever. Android mobile apps run on a diverse range of devices, from phones and foldables to tablets, Chromebooks, cars, and most recently XR. Users buy into an entire device ecosystem and expect their apps to work across all devices. To thrive in this multi-device environment, your apps need to adapt seamlessly to different screen sizes and form factors.
Many Android apps rely on user interface approaches that work in a single orientation and/or restrict resizability. However, users want apps to make full use of their large screens, so Android device manufacturers added well-received features that override these app restrictions.
With this in mind, Android 16 is removing the ability for apps to restrict orientation and resizability at the platform level, and shifting to a consistent model of adaptive apps that seamlessly adjust to different screen sizes and orientations. This change will reduce fragmentation with behavior that better meets user expectations, and improves accessibility by respecting the user's preferred orientation. We're building tools, libraries, and platform APIs to help you do this to provide a consistently excellent user experience across the entire Android ecosystem.
What's changing?
Starting with Android 16, we're phasing out manifest attributes and runtime APIs used to restrict an app's orientation and resizability, enabling better user experiences for many apps across devices.
These changes will initially apply when the app is running on a large screen, where "large screen" means that the smaller dimension of the display is greater than or equal to 600dp. This includes:
- Inner displays of large screen foldables
- Tablets, including desktop windowing
- Desktop environments, including Chromebooks
The following manifest attributes and APIs will be ignored for apps targeting Android 16 (SDK 36) on large screens:
Manifest attributes/API | Ignored values |
screenOrientation | portrait, reversePortrait, sensorPortrait, userPortrait, landscape, reverseLandscape, sensorLandscape, userLandscape |
setRequestedOrientation() | portrait, reversePortrait, sensorPortrait, userPortrait, landscape, reverseLandscape, sensorLandscape, userLandscape |
resizeableActivity | all |
minAspectRatio | all |
maxAspectRatio | all |
There are some exceptions to these changes for controlling orientation, aspect ratio, and resizability:
- As mentioned before, these changes won't apply for screens that are smaller than sw600dp (e.g. most phones, flippables, outer displays on large screen foldables)
- Games will be excluded from these changes, based on the android:appCategory flag
Also, users have control. They can explicitly opt in to using the app's default behavior in the aspect ratio settings.

Get ready for this change, by making your app adaptive
Apps will need to support landscape and portrait layouts for window sizes in the full range of aspect ratios that users can choose to use apps in, as there will no longer be a way to restrict the aspect ratio and orientation to portrait or to landscape.
To test if your app will be impacted by these changes, use the Android 16 Beta 1 developer preview with the Pixel Tablet and Pixel Fold series emulators in Android Studio, and either set targetSdkPreview = "Baklava" or use the app compatibility framework by enabling the UNIVERSAL_RESIZABLE_BY_DEFAULT flag.
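If you'd rather flip the compatibility flag from the shell while testing, the compat framework can typically be toggled per package like this (the package name is a placeholder):

adb shell am compat enable UNIVERSAL_RESIZABLE_BY_DEFAULT com.example.yourapp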
For existing apps that restrict orientation and aspect ratio, these changes may result in problems like overlapping layouts. To solve these issues and meet user expectations, our vision is that apps are built to be adaptive, to provide an optimal experience whether someone is using the app on a phone, foldable, tablet, Chromebook, XR or in a car.
Resolving common problems
- Avoid stretched UI components: If layouts were designed and built with the assumption of phone screens, then app functionality may break for other aspect ratios. For example, if a layout was built assuming a portrait aspect ratio, then UI elements that fill the max width of the window will appear stretched in landscape-oriented windows. If layouts aren't built to scroll, then users may not be able to click on buttons or other UI elements that are offscreen, resulting in confusing or broken behavior. Add a maximum width to components to avoid stretching, and add scrolling to ensure all content is reachable.
- Ensure camera compatibility in both orientations: Camera viewfinder previews might assume a specific aspect ratio and orientation relative to the camera sensor, resulting in stretching or flipped previews when those assumptions are broken. Ensure viewfinders rotate properly and account for the UI aspect ratio differing from the sensor aspect ratio.
- Preserve state across window size changes: Removing orientation and aspect ratio restrictions also means that the window sizes of apps will change more frequently in response to how the user prefers to use an app, such as by rotating, folding, or resizing an app in multi-window or free-form windowing modes. Orientation changes and resizing will result in Activity recreation by default. To ensure a good user experience, it is critical that app state is preserved through these configuration changes so that users don't lose their place in the app when changing posture or changing windowing modes.
To account for different window sizes and aspect ratios, use window size classes to drive layout behavior in a way that doesn't require device-specific customizations. Apps should also be built with the assumption that window sizes will frequently change. It's not necessary to build duplicate orientation-specific layouts - instead, ensure your existing UIs can re-layout well no matter what the window size is. If you have a landscape- or portrait-specific layout, those layouts will still be used.
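For instance, here's a minimal Compose sketch that drives layout from the Material 3 window size class API (androidx.compose.material3.windowsizeclass); SinglePaneLayout and TwoPaneLayout are hypothetical composables:

import androidx.compose.material3.windowsizeclass.WindowSizeClass
import androidx.compose.material3.windowsizeclass.WindowWidthSizeClass
import androidx.compose.runtime.Composable

@Composable
fun AdaptiveScreen(windowSizeClass: WindowSizeClass) {
    // Branch on the window's width class rather than device type or orientation
    when (windowSizeClass.widthSizeClass) {
        WindowWidthSizeClass.Expanded -> TwoPaneLayout()  // tablets, desktops, unfolded
        else -> SinglePaneLayout()                        // phones, compact windows
    }
}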
Optimizing for window sizes by building adaptive
If you're already building adaptive layouts and supporting all orientations, you're set up for success: your app will be prepared for each of the device types and windowing modes your users want to use it in, and these changes should have minimal impact.
We've also got a range of testing resources to help you guarantee reliability. You can automate testing with tools like the Espresso testing framework and Jetpack Compose testing APIs.
FlipaClip is a great example of why building for multiple form-factors matters: they saw 54% growth in tablet users in the four months after they optimized their app to be adaptive.
Timeline
We understand that the changes are significant for apps that have traditionally only supported portrait orientation. UI issues like buttons going off screen, overlapping content, or screens with camera viewfinders may need adjustments.
To help you plan ahead and make the necessary adjustments, here's the planned timeline outlining when these changes will take effect:
- Android 16 (2025): Changes described above will be the baseline experience for large screen devices (smallest screen width > 600dp) for apps that target API level 36, with the option for developers to opt-out.
- Android release in 2026: Changes described above will be the baseline experience for large screen devices (smallest screen width >600dp) for apps that target API level 37. Developers will not have an option to opt-out.
Target API level | Applicable devices | Developer opt-out allowed |
36 (Android 16) | Large screen devices (smallest screen width >600dp) | Yes |
37 (Anticipated) | Large screen devices (smallest screen width >600dp) | No |
The deadlines for targeting a specific API level are app store specific. For Google Play, the plan is that targeting API 36 will be required in August 2026 and targeting API 37 will be required in August 2027.
Preparing for Android 16
Refer to the Android 16 changes page for all changes impacting apps in Android 16, as well as additional resources for updating your apps if you are impacted. To test your app, download the Android 16 Beta 1 developer preview and update to targetSdkPreview = "Baklava" or use the app compatibility framework to enable specific changes.
We're committed to helping developers embrace this new era of adaptive apps and unlock the full potential of their apps across the diverse Android ecosystem. Check out the do's and don'ts for designing and building across multiple window sizes and form factors, as well as how to test across the variety of devices that your app will be used in.
Stay tuned for more updates and resources as we approach the release of Android 16!
23 Jan 2025 5:00pm GMT
22 Jan 2025
Android Developers Blog
Build kids app experiences for Wear OS
Posted by John Zoeller - Developer Relations Engineer, and Caroline Vander Wilt - Group Product Manager
New Wear OS features enable 'standalone' watches for kids, unlocking new possibilities for Wear OS app developers
In collaboration with Samsung, Wear OS is introducing Galaxy Watch for Kids, a new kids experience enabling kids to explore while staying connected with their families from their smartwatch, no phone necessary. This launch unlocks new opportunities for Wear OS developers to reach younger audiences.
Galaxy Watch for Kids is rolling out to Galaxy Watch7 LTE models, with features including:
- No phone ownership required: This experience enables the watch and its associated apps to operate on a fully standalone basis using LTE and, when available, Wi-Fi connectivity. This includes calling, texting, games, and more.
- Selection of kid-friendly apps: From gaming to health, kids can browse and request installs of Teacher Approved apps and watch faces on Google Play. In addition to approving and blocking apps, parents can also monitor app usage from Google Family Link.
- Stay in touch with parent-managed contacts: Parents can ensure safer communications by limiting text and calling to approved contacts.
- Location sharing: Offers peace of mind with location sharing and geofencing notifications when kids leave or arrive at designated areas.
- School time: Limits watch functionality during scheduled hours of the day, so kids can focus while in school or studying.
Building kids experiences with standalone functionality enables you to reach both standalone and tethered watches for kids. Apps like Math Tango have already created great Wear OS experiences for kids. Check out the video below to learn how they built a rich and engaging Wear OS app.
Our new kids-focused design and content principles and developer guidance are also available today. Check out some of the highlights in the next section.
New principles and guidelines for development
We've created new design principles and guidelines to help developers take advantage of this opportunity to build and improve apps and watch faces for kids.
Design principle: Active and fun
Build engaging healthy experiences for children by including activity-based features.
A great example of this is the Odd Squad Time Unit app from PBS KIDS that encourages children to get up and be physically active. By using the on-device sensors and power-efficient platform APIs, the app is able to provide a fun experience all day and still maintain the watch's battery life from wakeup to bedtime.

Note that while experiences should be catered to kids, they must also follow the Wear OS quality requirements related to the visual experience of your app, especially when crafting touch targets and font sizes.
Content principle: Thoughtfully crafted
Consider adjusting your content to make it not only appropriate, but also consumable and intuitive for younger kids (including those as young as 6). This includes both audio and visual app components.
Tinkercast's Two Whats?! And a Wow! app uses age-appropriate vocabulary and fun characters to aid in their teaching. It's a great example of how a developer should account for reading comprehension.

Development guidelines
New Wear OS kids apps must adhere to the Wear OS app quality guidelines, the guidelines for standalone apps, and the new Kids development guide.
Minimize impact on device battery
Minimize events that affect battery life over the course of one session. Kids use watches that provide important safety features for their parents or guardians, which depend on the device having enough battery life. Below are best practices for reducing battery impact.
✅ DO design for offline use cases so that kids can play without incurring network-related battery costs
✅ DO minimize tasks that require an internet or GPS connection
✅ DO use power efficient APIs for all day activity tracking as well as tracking exercises
🚫 DO NOT use direct sensor tracking as this will significantly reduce the battery life
🚫 DO NOT include long-running animations
Choose a development environment
To develop kid-friendly apps and games you can use Compose for Wear OS, our recommended approach for building UI for Wear OS, as well as Unity for Android.
We recommend Unity for developing games on Wear OS if you're familiar and comfortable with its workflows and capabilities. However, for games with only a few animations, Compose Animation should be sufficient and is better supported within the Android environment.
Be sure to consider that some Wear OS quality requirements may require custom Unity implementations, such as support for Rotary Input.
Originator's MathTango showcases the flexibility and richness of developing with Unity:

Creating Watch Faces
Developing watch faces for kids requires the use of Watch Face Format. Watch faces should adhere to our content and design principles mentioned above, as well as our quality standards, including our ambient mode requirement.
The following examples demonstrate our Content Principle: Appealing. The content is relevant, engaging, and fun for kids, sparking their interest and imagination.
The Crayola Pets Watch Face comes with a great variety of customization options, and demonstrates an informative and pleasant watch face:

The Marvel Watch Faces (Captain America shown) provide a fun and useful step tracking feature:

Kids experience publishing requirements
Developers looking to get started on a new kids experience will need to keep a few things in mind when publishing on the Play Store.
- Age and Content Rating: Kids apps should be configured in the Play Store to meet the age and content requirements appropriate to their functionality
- Standalone Functionality: Apps must have 'standalone' defined in their manifest and meet all associated requirements, which will apply when the watch is set up with a child account (see the manifest sketch after this list)
- Using Watch Face Format: Only watch faces which are built with Watch Face Format will be made available for kids
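For the standalone requirement, Wear OS apps conventionally declare it with a manifest meta-data entry; a sketch of the usual declaration:

<application>
    <!-- Marks the watch app as usable without a paired phone -->
    <meta-data
        android:name="com.google.android.wearable.standalone"
        android:value="true" />
</application>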
Expand your reach with Wear OS
Get ready to reach a new generation of Wear OS users! We've created all-new guidelines to help you build engaging experiences for kids. Here's a quick recap:
- Continue to use the baseline set of Wear OS development resources, including Get started with Wear OS and Wear OS app quality
- Focus on enrichment and age-tailoring
- Make sure it works with Standalone, and keep an eye on the battery
With the Wear for Kids experience, developers can reach an entirely new audience of users and be part of the next generation of learning and enrichment on Wear OS.
Check out all of the new experiences on the Play Store!
22 Jan 2025 4:00pm GMT
10 Jan 2025
Android Developers Blog
Apps adopt Transformer to support more reliable and performant media editing use cases
Posted by Caren Chang - Developer Relations Engineer
The Jetpack Media3 library enables Android apps to build high quality media apps. As part of the Media3 library, the Transformer module aims to provide easy to use, reliable, and performant APIs for transcoding and editing media.
For example, apps can use Transformer to apply editing operations such as trimming a long media file, or applying effects to video tracks. Transformer can also be used to convert media files from one format to another, such as adjusting the resolution or encoding of the media file.
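As a sketch of what a trim-and-transcode looks like with the Media3 Transformer API (the input URI and output path are placeholders; exact listener signatures vary across Media3 versions, so completion handling is omitted):

import androidx.media3.common.MediaItem
import androidx.media3.common.MimeTypes
import androidx.media3.transformer.Transformer

// Keep only the first 10 seconds of the source video
val trimmedItem = MediaItem.Builder()
    .setUri(videoUri) // placeholder input
    .setClippingConfiguration(
        MediaItem.ClippingConfiguration.Builder()
            .setStartPositionMs(0)
            .setEndPositionMs(10_000)
            .build()
    )
    .build()

// Transcode the video track to HEVC while applying the trim
val transformer = Transformer.Builder(context)
    .setVideoMimeType(MimeTypes.VIDEO_H265)
    .build()
transformer.start(trimmedItem, outputFilePath) // placeholder output path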
Developing Transformer APIs
As part of the process to introduce new APIs, our engineering team works closely with Google apps such as Google Photos to test and experiment with the new APIs. Experimental flags are first introduced to enable performance improvements. Once the results are successful and conclusive, these experimental features are then built into the default API implementations or promoted to public APIs for all apps to use. This approach allows Transformer APIs to be tested on a wide variety of devices.
Transformer Adoption in apps
Apps that have been using Transformer in production observed in-app performance improvements, less code to maintain, and better developer experience. Let's take a closer look at how Transformer has helped apps for their media-editing use cases.
One of users' favorite features in Google Photos is memory sharing, where snippets of your life story that are curated and presented as Google Photos memories can now be shared as videos to social media and chat apps. However, the process of combining media items to create a video on device is resource intensive and subject to significant latency, especially on low-end devices. To reduce this latency and enable the feature on a wider range of devices, Photos adopted Transformer in their media creation pipeline. Along with other improvements made, the team found that Transformer played a part in reducing the median user latency for creating memory videos by 41% on high-end devices and 27% on mid-range devices.
The Photos app also enables users to perform media edits such as trimming or rotating a video. By adopting Transformer APIs for rotating videos, median save latency was reduced by 79% for applicable videos. The app also adopted Transformer's API for optimizing video trimming, and observed video save latency decrease by 64%.
1 Second Everyday is a personal video journal that helps you create captivating montages and timelapses. One of the app's main user journeys is sequentially combining short videos to create a meaningful movie. After adopting Transformer for this use case, the app observed that video encoding performance was up to 5x faster, allowing them to explore enabling 4k and HDR support. The Transformer adoption also helped decrease relevant code by 30%, making it easier for the developers to maintain the code base.
BandLab is the next-generation music creation platform used by millions around the world to make and share their music. The app originally used MediaCodecs for their video creation use cases, but found that the low level implementation resulted in native crashes that were difficult to debug. After researching more on Transformer, the team made the decision to migrate from MediaCodecs to Transformer. Overall, it only took the team 12 working days for the migration, and this resulted in a simpler codebase and more maintainable pipeline for their media creation use cases. In addition, the app observed that all previously seen native crashes stopped occurring.
What's next for Transformer?
We're excited to see Transformer's adoption in the developer community, and will continue adding new features to support more media-editing use cases for the Android ecosystem including:
- Better support for previewing media edits
- Improving the performance and developer experience for video frame extraction
- Easier integration with AI effects
- and much more
Keep an eye out on what we're working on in the Media3 Github, and file feature requests to help shape the future of Transformer!
10 Jan 2025 5:00pm GMT
09 Jan 2025
Android Developers Blog
Android Studio Ladybug Feature Drop is Stable!
Posted by Steven Jenkins - Product Manager, Android Studio
Today, we are thrilled to announce the stable release of Android Studio Ladybug 🐞 Feature Drop (2024.2.2)!
Accelerate your productivity with Gemini in Android Studio, Animation Preview support for Wear Tiles, App Links Assistant and much more. All of these new features are designed to help you build high-quality Android apps faster.
Read on to learn more about all the updates, quality improvements, and new features across your key workflows in Android Studio Ladybug Feature Drop, and download the latest stable version today to try them out!
Gemini in Android Studio
Gemini Code Transforms
Gemini Code Transforms can help you modify, optimize, or add code to your app with AI assistance. Simply right-click in your code editor and select "Gemini > Generate code" or highlight code and select "Gemini > Transform selected code." You can also use the keyboard shortcut Ctrl+\ (⌘+\ on macOS) to bring up the Gemini prompt. Describe the changes you want to make to your code, and Gemini will suggest a code diff, allowing you to easily review and accept only the suggestions you want.
With Gemini Code Transforms, you can simplify complex code, perform specific code transformations, or even generate new functions. You can also refine the suggested code to iterate on the code suggestions with Gemini. It's an AI coding assistant right in your editor, helping you write better code more efficiently.

Rename
Gemini in Android Studio enhances your workflow with intelligent assistance for common tasks. When renaming a single variable, class, or method from the code editor, the "Refactor > Rename" action uses Gemini to suggest contextually appropriate names, making it smoother and more efficient to refactor names as you're coding in the editor.

Rethink
For larger renaming refactors, Gemini can "Rethink variable names" across your whole file. This feature analyzes your code and suggests more intuitive and descriptive names for variables and methods, improving readability and maintainability.

Commit Message
Gemini now assists with commit messages. When committing changes to version control, it analyzes your code modifications and suggests a detailed commit message.

Generate Documentation
Gemini in Android Studio makes documenting your code easier than ever. To generate clear and concise documentation, select a code snippet, right-click in the editor and choose "Gemini > Document Function" (or "Document Class" or "Document Property", depending on the context). Gemini will generate a draft that you can then refine and perfect before accepting the changes. This streamlined process helps you create informative documentation quickly and efficiently.

Debug
Animation Preview support for Wear OS Tiles
Animation Preview support for Wear OS Tiles helps you visualize and debug tile animations with ease. It provides a real-time view of your animations, allowing you to preview them, control playback with options like play, pause, and speed adjustment, and inspect key properties such as initial/end states and animation curves. You can even dynamically modify animation code and instantly observe the results within the inspector, streamlining the debugging and refinement process.

Wear Health Services
The Wear Health Services feature in Android Studio simplifies the process of testing health and fitness apps by enabling Wear Health Services within the emulator. You can now easily customize various parameters for a given exercise such as heart rate, distance, and speed without needing a physical device or performing the activity itself. This streamlines the development and testing workflow, allowing for faster iteration and more efficient debugging of health-related features.

Optimize
App Links Assistant
App Links Assistant simplifies the process of implementing app links by serving valid JSON syntax that resolves broken deep links for your app. You can review the JSON file and then upload it to your website, resolving issues quickly. This eliminates the manual creation of the JSON file, saving you time and effort. The tool also allows you to compare existing JSON files with newly generated ones to easily identify any discrepancies.

Google Play SDK Insights Integration
Android Studio now provides enhanced lint warnings for public SDKs from the Google Play SDK Index and the Google Play SDK Console, helping you identify and address potential issues. These warnings alert you if an SDK is outdated, violates Google Play policies, or has known security vulnerabilities. Furthermore, Android Studio provides helpful quick fixes and recommended version ranges whenever possible, making it easier to update your dependencies and keeping your app more secure and compliant.

Quality improvements
Beyond new features, we also continued to improve the overall quality and stability of Android Studio. In fact, the Android Studio team addressed over 770 bugs during the Ladybug Feature Drop development cycle.
IntelliJ platform update
Android Studio Ladybug Feature Drop (2024.2.2) includes the IntelliJ 2024.2 platform release, which has many new features such as more intuitive full line code completion suggestions, a preview in the Search Everywhere dialog and improved log management for the Java** and Kotlin programming languages.
See the full IntelliJ 2024.2 release notes.
Summary
To recap, Android Studio Ladybug Feature Drop includes the following enhancements and features:
Gemini in Android Studio
- Gemini Code Transforms
- Rename
- Rethink
- Commit Message
- Generate Documentation
Debug
- Animation Preview support for Wear OS Tiles
- Wear Health Services
Optimize
- App Links Assistant
- Google Play SDK Insights Integration
Quality Improvements
- 770+ bugs addressed
IntelliJ Platform Update
- More intuitive full line code completion suggestions
- Preview in the Search Everywhere dialog
- Improved log management for Java and Kotlin programming languages
Getting Started
Ready for next-level Android development? Download Android Studio Ladybug Feature Drop and unlock these cutting-edge features today. As always, your feedback is important to us - check known issues, report bugs, suggest improvements, and be part of our vibrant community on LinkedIn, Medium, YouTube, or X. Let's build the future of Android apps together!
**Java is a trademark or registered trademark of Oracle and/or its affiliates.
09 Jan 2025 7:00pm GMT
08 Jan 2025
Android Developers Blog
Performance Class helps Google Maps deliver premium experiences
Posted by Nevin Mital - Developer Relations Engineer, Android Media
The Android ecosystem features a diverse range of devices, and it can be difficult to build experiences that take advantage of new or premium hardware features while still working well for users on all devices. With Android 12, we introduced the Media Performance Class (MPC) standard to help developers better understand a device's capabilities and identify high-performing devices. For a refresher on what MPC is, please see our last blog post, Using performance class to optimize your user experience, or check out the Performance Class documentation.
Earlier this year, we published the first stable release of the Jetpack Core Performance library as the recommended solution for more reliably obtaining a device's MPC level. In particular, this library introduces the PlayServicesDevicePerformance class, an API that queries Google Play Services to get the most up-to-date MPC level for the current device and build. I'll get into the technical details further down, but let's start by taking a look at how Google Maps was able to tailor a feature launch to best fit each device with MPC.
Performance Class unblocks premium experience launch for Google Maps
Google Maps recently took advantage of the expanded device coverage enabled by the Play Services module to unblock a feature launch. Google Maps wanted to update their UI by increasing the transparency of some layers, which meant rendering more of the map, and the team had to stop the rollout due to latency increases on many devices, especially towards the low end. To resolve this, the Maps team started by slicing an existing key metric, "seconds to UI item visibility", by MPC level, which revealed that while all devices had a small increase in this latency, devices without an MPC level had the largest increase.

With these results in hand, Google Maps started their rollout again, but this time only launching the feature on devices that report an MPC level. As devices continue to get updated and meet the bar for MPC, the updated Google Maps UI will be available to them as well.
The new Play Services module
MPC level requirements are defined in the Android Compatibility Definition Document (CDD), then devices and Android builds are validated against these requirements by the Android Compatibility Test Suite (CTS). The Play Services module of the Jetpack Core Performance library leverages these test results to continually update a device's reported MPC level without any additional effort on your end. This also means that you'll immediately have access to the MPC level for new device launches without needing to acquire and test each device yourself, since it already passed CTS. If the MPC level is not available from Google Play Services, the library will fall back to the MPC level declared by the OEM as a build constant.

As of writing, more than 190M in-market devices covering over 500 models across 40+ brands report an MPC level. This coverage will continue to grow over time, as older devices update to newer builds, from Android 11 and up.
Using the Core Performance library
To use Jetpack Core Performance, start by adding a dependency for the relevant modules in your Gradle configuration, and create an instance of DevicePerformance. Initializing a DevicePerformance should only happen once in your app, as early as possible - for example, in the onCreate() lifecycle event of your Application. In this example, we'll use the Google Play services implementation of DevicePerformance.
// Implementation of Jetpack Core library.
implementation("androidx.core:core-ktx:1.12.0")
// Enable APIs to query for device-reported performance class.
implementation("androidx.core:core-performance:1.0.0")
// Enable APIs to query Google Play Services for performance class.
implementation("androidx.core:core-performance-play-services:1.0.0")
import android.app.Application
import androidx.core.performance.DevicePerformance
import androidx.core.performance.play.services.PlayServicesDevicePerformance

class MyApplication : Application() {
    lateinit var devicePerformance: DevicePerformance

    override fun onCreate() {
        super.onCreate()
        // Use a class derived from the DevicePerformance interface
        devicePerformance = PlayServicesDevicePerformance(applicationContext)
    }
}
Then, later in your app when you want to retrieve the device's MPC level, you can call getMediaPerformanceClass():
import android.app.Activity
import android.os.Build
import android.os.Bundle
import androidx.core.performance.DevicePerformance

class MyActivity : Activity() {
    private lateinit var devicePerformance: DevicePerformance

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        // Note: Good app architecture is to use a dependency framework. See
        // https://developer.android.com/training/dependency-injection for more
        // information.
        devicePerformance = (application as MyApplication).devicePerformance
    }

    override fun onResume() {
        super.onResume()
        when {
            devicePerformance.mediaPerformanceClass >= Build.VERSION_CODES.UPSIDE_DOWN_CAKE -> {
                // MPC level 34 and later.
                // Provide the most premium experience for the highest performing devices.
            }
            devicePerformance.mediaPerformanceClass == Build.VERSION_CODES.TIRAMISU -> {
                // MPC level 33.
                // Provide a high quality experience.
            }
            else -> {
                // MPC level 31, 30, or undefined.
                // Remove extras to keep experience functional.
            }
        }
    }
}
Strategies for using Performance Class
MPC is intended to identify high-end devices, so you can expect to see MPC levels for the top devices from each year, which are the devices you're likely to want to support for the longest time. For example, the Pixel 9 Pro launched with Android 14 and reports an MPC level of 34, the latest level defined at the time of its launch.
You should use MPC as a complement to any existing device-clustering solutions you already use, such as querying a device's static specs or manually blocklisting problematic devices. MPC is a particularly helpful tool for new device launches: new devices are included as soon as they launch, so you can use MPC to gauge their capabilities right from the start, without needing to acquire the hardware yourself or manually test each device.
A great first step to get involved is to include MPC levels in your telemetry. This can help you identify patterns in error reports or generally get a better sense of the devices your user base uses if you segment key metrics by MPC level. From there, you might consider using MPC as a dimension in your experimentation pipeline, for example by setting up A/B testing groups based on MPC level, or by starting a feature rollout with the highest MPC level and working your way down. As discussed previously, this is the approach that Google Maps took.
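As a sketch of that first step, assuming Firebase Analytics as the telemetry backend (any pipeline works the same way; the "mpc_level" property name is hypothetical):
import androidx.core.performance.DevicePerformance
import com.google.firebase.analytics.FirebaseAnalytics

// Record the device's MPC level as a user property so that key metrics
// and experiments can be segmented by it.
fun reportMpcLevel(analytics: FirebaseAnalytics, devicePerformance: DevicePerformance) {
    analytics.setUserProperty("mpc_level", devicePerformance.mediaPerformanceClass.toString())
}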
You could further use MPC to tune a user-facing feature, for example by adjusting the number of concurrent video playbacks your app attempts based on the MPC level's concurrent codec guarantees. However, make sure to still query a device's runtime capabilities when using this approach, as they may differ depending on the environment and state the device is in.
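A hedged sketch of such tuning follows; the playback counts are illustrative assumptions rather than values from the MPC definitions, and a real implementation should still confirm against runtime codec capabilities:
import android.os.Build
import androidx.core.performance.DevicePerformance

// Pick a concurrent-playback budget from the MPC level. The counts are
// assumptions for illustration; verify against the device's runtime codec
// capabilities (for example via MediaCodecInfo) before relying on them.
fun maxConcurrentPlaybacks(devicePerformance: DevicePerformance): Int =
    when {
        devicePerformance.mediaPerformanceClass >= Build.VERSION_CODES.UPSIDE_DOWN_CAKE -> 4 // MPC 34+
        devicePerformance.mediaPerformanceClass >= Build.VERSION_CODES.TIRAMISU -> 2 // MPC 33
        else -> 1 // lower or undefined MPC: stay conservative
    }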
Get in touch!
If MPC sounds like it could be useful for your app, please give it a try! You can get started by taking a look at our sample code or documentation. We welcome you to share any questions or feedback you have in this short form.
This blog post is a part of Camera and Media Spotlight Week. We're providing resources - blog posts, videos, sample code, and more - all designed to help you uplevel the media experiences in your app.
To learn more about what Spotlight Week has to offer and how it can benefit you, be sure to read our overview blog post.
08 Jan 2025 5:00pm GMT